Excerpt from Amazon fumbles earnings amidst high expectations (emphasis DSC):

Aside from AWS, Amazon Alexa-enabled devices were the top-selling products across all categories on Amazon.com throughout the holiday season, and the company reports that Echo family sales are up over 9x compared to last season. Amazon aims to brand Alexa as a platform, something that has helped the product gain capabilities faster than its competition. Developers and corporations released 4,000 new skills for the voice assistant in just the last quarter.

Alexa got 4,000 new skills in just the last quarter!

From DSC:
What are the teaching & learning ramifications of this?

By the way, I’m not saying that professors, teachers, and trainers should run for the hills (i.e., that they’ll be replaced by AI-based tools). Rather, I’d like to suggest that we not only put this type of thing on our radar, but also begin to actively experiment with such technologies to see whether they might help us do some heavy lifting for students learning about new topics.
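As a first experiment along these lines, the backend of a custom Alexa skill is essentially a function that receives a JSON request and returns a JSON response. The sketch below (plain Python, no SDK; the "study facts" skill, its intent name, and the fact list are all hypothetical) shows the general shape of that exchange:

```python
# Minimal sketch of an Alexa-style skill handler for a hypothetical
# "study facts" skill. A real skill would be registered in the Alexa
# developer console; this only illustrates the request/response shape.

FACTS = {
    "photosynthesis": "Plants convert light, water, and CO2 into glucose and oxygen.",
    "mitosis": "Mitosis is cell division that produces two identical daughter cells.",
}

def handle_request(event):
    """Return an Alexa-style JSON response for an intent request."""
    intent = event.get("request", {}).get("intent", {})
    topic = intent.get("slots", {}).get("Topic", {}).get("value", "").lower()
    text = FACTS.get(topic, "Sorry, I don't have a fact about that topic yet.")
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }

# A student asks the device: "Alexa, ask Study Facts about mitosis."
event = {"request": {"type": "IntentRequest",
                     "intent": {"name": "GetFactIntent",
                                "slots": {"Topic": {"value": "Mitosis"}}}}}
print(handle_request(event)["response"]["outputSpeech"]["text"])
```

Even a toy like this makes the experiment concrete: the hard part is not the plumbing but deciding what kinds of questions are worth answering by voice.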

 
 

Here’s how Google made VR history and got its first Oscar nom — from inverse.com by Victor Fuste
Google’s short film ‘Pearl’ marks a major moment in VR history. 

Excerpt:

The team at Google Spotlight Stories made history on Wednesday, as its short film Pearl became the first virtual reality project to be nominated for an Academy Award. But instead of serving as a capstone, the Oscar nod is just a nice moment at the beginning of the Spotlight team’s plan for the future of storytelling in the digital age.

Google Spotlight Stories are not exactly short films. Rather, they are interactive experiences created by the technical pioneers at Google’s Advanced Technologies and Projects (ATAP) division, and they defy expectations and conventions. Film production has in many ways been perfected, but for each Spotlight Story, the technical staff at Google uncovers new challenges to telling stories in a medium that blends together film, mobile phones, games, and virtual reality. Needless to say, it’s been an interesting road.

 

 

A world without work — by Derek Thompson; The Atlantic — from July 2015

Excerpts:

Youngstown, U.S.A.
The end of work is still just a futuristic concept for most of the United States, but it is something like a moment in history for Youngstown, Ohio, one its residents can cite with precision: September 19, 1977.

For much of the 20th century, Youngstown’s steel mills delivered such great prosperity that the city was a model of the American dream, boasting a median income and a homeownership rate that were among the nation’s highest. But as manufacturing shifted abroad after World War II, Youngstown steel suffered, and on that gray September afternoon in 1977, Youngstown Sheet and Tube announced the shuttering of its Campbell Works mill. Within five years, the city lost 50,000 jobs and $1.3 billion in manufacturing wages. The effect was so severe that a term was coined to describe the fallout: regional depression.

Youngstown was transformed not only by an economic disruption but also by a psychological and cultural breakdown. Depression, spousal abuse, and suicide all became much more prevalent; the caseload of the area’s mental-health center tripled within a decade. The city built four prisons in the mid-1990s—a rare growth industry. One of the few downtown construction projects of that period was a museum dedicated to the defunct steel industry.

“Youngstown’s story is America’s story, because it shows that when jobs go away, the cultural cohesion of a place is destroyed”…

“The cultural breakdown matters even more than the economic breakdown.”

But even leaving aside questions of how to distribute that wealth, the widespread disappearance of work would usher in a social transformation unlike any we’ve seen.

What may be looming is something different: an era of technological unemployment, in which computer scientists and software engineers essentially invent us out of work, and the total number of jobs declines steadily and permanently.

After 300 years of people crying wolf, there are now three broad reasons to take seriously the argument that the beast is at the door: the ongoing triumph of capital over labor, the quiet demise of the working man, and the impressive dexterity of information technology.

The paradox of work is that many people hate their jobs, but they are considerably more miserable doing nothing.

Most people want to work, and are miserable when they cannot. The ills of unemployment go well beyond the loss of income; people who lose their job are more likely to suffer from mental and physical ailments. “There is a loss of status, a general malaise and demoralization, which appears somatically or psychologically or both”…

Research has shown that it is harder to recover from a long bout of joblessness than from losing a loved one or suffering a life-altering injury.

Most people do need to achieve things through, yes, work to feel a lasting sense of purpose.

When an entire area, like Youngstown, suffers from high and prolonged unemployment, problems caused by unemployment move beyond the personal sphere; widespread joblessness shatters neighborhoods and leaches away their civic spirit.

What’s more, although a universal income might replace lost wages, it would do little to preserve the social benefits of work.

“I can’t stress this enough: this isn’t just about economics; it’s psychological”…

 

 

The paradox of work is that many people hate their jobs, but they are considerably more miserable doing nothing.

 

 

From DSC:
Though I’m not saying Thompson is necessarily asserting this in his article, I don’t see a world without work as a dream. In fact, as the quote immediately before this paragraph alludes to, I think that most people would not like a life that is devoid of all work. I think work is where we can serve others, find purpose and meaning for our lives, seek to be instruments of making the world a better place, and attempt to design/create something that’s excellent.  We may miss the mark often (I know I do), but we keep trying.

 

 

 

“The world’s first smart #AugmentedReality for the Connected Home has arrived.” — from thunderclap.it

From DSC:
Note this new type of Human Computer Interaction (HCI). I think that we’ll likely be seeing much more of this sort of thing.

 

Excerpt (emphasis DSC):

How is Hayo different?
AR that connects the magical and the functional:

Unlike most AR integrations, Hayo removes the screens from smarthome use and transforms the objects and spaces around you into a set of virtual remote controls. Hayo empowers you to create experiences that have previously been limited by the technology, but now are only limited by your imagination.

Screenless IoT:
The best interface is no interface at all. Aside from the one-time setup, Hayo does not use any screens. Your real-life surfaces become the interface and you, the user, become the controls. Virtual remote controls can be placed wherever you want for whatever you need by simply using your Hayo device to take a 3D scan of your space.

Smarter AR experience:
Hayo anticipates your unique context, passive motion and gestures to create useful and more unique controls for the connected home. The Hayo system learns your behaviors and uses its AI to help meet your needs.
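The "virtual remote" idea described above can be pictured as a lookup from regions of a scanned room to device actions: a gesture lands at a point in 3D space, and the system checks which (if any) registered region contains it. This toy sketch is purely illustrative (the region coordinates and action names are invented, and it bears no relation to Hayo's actual implementation):

```python
# Toy sketch: map regions of a scanned room to smart-home actions.
# A real system would get gesture coordinates from a depth camera.

REMOTES = [
    # (region as axis-aligned box: (xmin, ymin, zmin, xmax, ymax, zmax), action)
    ((0.0, 0.8, 0.0, 0.5, 1.2, 0.3), "toggle_lamp"),
    ((1.0, 0.0, 0.0, 1.5, 0.4, 0.5), "play_music"),
]

def action_for_gesture(point):
    """Return the action whose region contains the gesture point, if any."""
    x, y, z = point
    for (xmin, ymin, zmin, xmax, ymax, zmax), action in REMOTES:
        if xmin <= x <= xmax and ymin <= y <= ymax and zmin <= z <= zmax:
            return action
    return None

print(action_for_gesture((0.2, 1.0, 0.1)))  # a gesture near the lamp region
```

The interesting design problem is everything the sketch leaves out: building the 3D scan, tracking hands reliably, and deciding what counts as an intentional gesture rather than ordinary movement.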

 

 

 

 

Also see:

 

 

A massive AI partnership is tapping civil rights and economic experts to keep AI safe — from qz.com by Dave Gershgorn

Excerpt:

When the Partnership on Artificial Intelligence to Benefit People and Society was announced in September, it was with the stated goal of educating the public on artificial intelligence, studying AI’s potential impact on the world, and establishing industry best practices. Now, how those goals will actually be achieved is becoming clearer.

This week, the Partnership brought on new members that include representatives from the American Civil Liberties Union, the MacArthur Foundation, OpenAI, the Association for the Advancement of Artificial Intelligence, Arizona State University, and the University of California, Berkeley.

The organizations themselves are not officially affiliated yet—that process is still underway—but the Partnership’s board selected these candidates based on their expertise in civil rights, economics, and open research, according to interim co-chair Eric Horvitz, who is also director of Microsoft Research. The Partnership also added Apple as a “founding member,” putting the tech giant in good company: Amazon, Microsoft, IBM, Google, and Facebook are already on board.

 

 


Also relevant/see:

Building Public Policy To Address Artificial Intelligence’s Impact — from blogs.wsj.com by Irving Wladawsky-Berger

Excerpt:

Artificial intelligence may be at a tipping point, but it’s not immune to backlash from users in the event of system mistakes or a failure to meet heightened expectations. As AI becomes increasingly used for more critical tasks, care needs to be taken by proponents to avoid unfulfilled promises as well as efforts that appear to discriminate against certain segments of society.

Two years ago, Stanford University launched the One Hundred Year Study of AI to address “how the effects of artificial intelligence will ripple through every aspect of how people work, live and play.” One of its key missions is to convene a Study Panel of experts every five years to assess the then current state of the field, as well as to explore both the technical advances and societal challenges over the next 10 to 15 years.

The first such Study Panel recently published Artificial Intelligence and Life in 2030, a report that examined the likely impact of AI on a typical North American city by the year 2030.

 

 

Apple iPhone 8 To Get 3D-Sensing Tech For Augmented-Reality Apps — from investors.com by Patrick Seitz

Excerpt:

Apple’s (AAPL) upcoming iPhone 8 smartphone will include a 3D-sensing module to enable augmented-reality applications, Rosenblatt Securities analyst Jun Zhang said Wednesday. Apple has included the 3D-sensing module in all three current prototypes of the iPhone 8, which have screen sizes of 4.7, 5.1 and 5.5 inches, he said. “We believe Apple’s 3D sensing might provide a better user experience with more applications,” Zhang said in a research report. “So far, we think 3D sensing aims to provide an improved smartphone experience with a VR/AR environment.”

Apple’s iPhone 8 is expected to have 3D-sensing tech like Lenovo’s Phab 2 Pro smartphone. (Lenovo)

 

 

AltspaceVR Education Overview

 

 

 

 

10 Prominent Developers Detail Their 2017 Predictions for The VR/AR Industry — from uploadvr.com by David Jagneaux

Excerpt:

As we look forward to 2017 then, we’ve reached out to a bunch of industry experts and insiders to get their views on where we’re headed over the next 12 months.

2016 provided hints of where Facebook, HTC, Sony, Google, and more will take their headsets in the near future, but where do the industry’s best and brightest think we’ll end up this time next year? With CES, the year’s first major event, now in the books, let’s hear from some of those who work with VR itself about what happens next.

We asked all of these developers the same four questions:

1) What do you think will happen to the VR/AR market in 2017?
2) What NEEDS to happen to the VR/AR market in 2017?
3) What will be the big breakthroughs and innovations of 2017?
4) Will 2017 finally be the “year of VR?”

 

 

MEL Lab’s Virtual Reality Chemistry Class — from thereisonlyr.com by Grant Greene
An immersive learning startup brings novel experiences to science education.

 

 

The MEL app turned my iPhone 6 into a virtual microscope, letting me walk through 360-degree, 3D representations of the molecules featured in the experiment kits.

 

 

 

 

Labster releases ‘World of Science’ Simulation on Google Daydream — from labster.com by Marian Reed

Excerpt:

Labster is exploring new platforms by which students can access its laboratory simulations and is pleased to announce the release of its first Google Daydream-compatible virtual reality (VR) simulation, ‘Labster: World of Science’. This new simulation, modeled on Labster’s original ‘Lab Safety’ virtual lab, continues to incorporate scientific learning alongside a specific context, enriched by story-telling elements. The use of the Google VR platform has enabled Labster to fully immerse the student, or science enthusiast, in a wet lab that can easily be navigated with intuitive usage of Daydream’s handheld controller.

 

 

The Inside Story of Google’s Daydream, Where VR Feels Like Home — from wired.com by David Pierce

Excerpt:

Jessica Brillhart, Google’s principal VR filmmaker, has taken to calling people “visitors” rather than “viewers,” as a way of reminding herself that in VR, people aren’t watching what you’ve created. They’re living it. Which changes things.

 

 

Welcoming more devices to the Daydream-ready family — from blog.google.com by Amit Singh

Excerpt:

In November, we launched Daydream with the goal of bringing high quality, mobile VR to everyone. With the Daydream View headset and controller, and a Daydream-ready phone like the Pixel or Moto Z, you can explore new worlds, kick back in your personal VR cinema and play games that put you in the center of the action.

Daydream-ready phones are built for VR with high-resolution displays, ultra smooth graphics, and high-fidelity sensors for precise head tracking. To give you even more choices to enjoy Daydream, today we’re welcoming new devices that will soon join the Daydream-ready family.

 

 

Kessler Foundation awards virtual reality job interview program — from haptic.al by Deniz Ergürel

Excerpt:

Kessler Foundation, one of the largest public charities in the United States, is awarding a virtual reality training project to support high school students with disabilities. The foundation is providing a two-year, $485,000 Signature Employment Grant to the University of Michigan in Ann Arbor, to launch the Virtual Reality Job Interview Training program. Kessler Foundation says, the VR program will allow for highly personalized role-play, with precise feedback and coaching that may be repeated as often as desired without fear or embarrassment.

 

 

Deep-water safety training goes virtual — from shell.com by Soh Chin Ong
How a visit to a shopping centre led to the use of virtual reality safety training for a new oil production project, Malikai, in the deep waters off Sabah in Malaysia.

 

 

 

ISNS students embrace learning in a world of virtual reality

Excerpt (emphasis DSC):

To give students the skills needed to thrive in an ever more tech-centred world, the International School of Nanshan Shenzhen (ISNS) is one of the world’s first educational facilities now making instruction in virtual reality (VR) and related tools a key part of the curriculum.

Building on a successful pilot programme last summer in Virtual Reality, 3D art and animation, the intention is to let students in various age groups experiment with the latest emerging technologies, while at the same time unleashing their creativity, curiosity and passion for learning.

To this end, the school has set up a special VR innovation lab, conceived as a space for exploration, design and interdisciplinary collaboration involving a number of different subject teachers.

Using relevant software and materials, students learn to create high-quality digital content and to design “experiences” for VR platforms. In this “VR Lab makerspace” – a place offering the necessary tools, resources and support – they get to apply concepts and theories learned in the classroom, develop practical skills, document their progress, and share what they have learned with classmates and other members of the tech education community. 

 

 

As a next logical step, she is also looking to develop contacts with a number of the commercial makerspaces which have sprung up in Shenzhen. The hope is that students will then be able to meet engineers working on cutting-edge innovations and understand the latest developments in software, manufacturing, and areas such as laser cutting, 3D printing, and rapid prototyping.

 

 

 

Per X Media Lab:

The authoritative CB Insights lists imminent Future Tech Trends: customized babies; personalized foods; robotic companions; 3D printed housing; solar roads; ephemeral retail; enhanced workers; lab-engineered luxury; botroots movements; microbe-made chemicals; neuro-prosthetics; instant expertise; AI ghosts. You can download the whole outstanding report here (125 pgs).

 

From DSC:
Though I’m generally pro-technology, there are several items in here which support the need for all members of society to be informed and have some input into whether and how these technologies should be used. Prime example: customized babies. The report discusses the genetic modification of babies: “In the future, we will choose the traits for our babies.” Veeeeery slippery ground here.

 

Below are some example screenshots:

Also see:

CBInsights — Innovation Summit

  • The New User Interface: The Challenge and Opportunities that Chatbots, Voice Interfaces and Smart Devices Present
  • Fusing the physical, digital and biological: AI’s transformation of healthcare
  • How predictive algorithms and AI will rule financial services
  • Autonomous Everything: How Connected Vehicles Will Change Mobility and Which Companies Will Own this Future
  • The Next Industrial Age: The New Revenue Sources that the Industrial Internet of Things Unlocks
  • The AI-100: 100 Artificial Intelligence Startups That You Better Know

 

 

 

The Periodic Table of AI — from ai.xprize.org by Kris Hammond

Excerpts:

This is an invitation to collaborate.  In particular, it is an invitation to collaborate in framing how we look at and develop machine intelligence. Even more specifically, it is an invitation to collaborate in the construction of a Periodic Table of AI.

Let’s be honest. Thinking about Artificial Intelligence has proven to be difficult for us. We argue constantly about what is and is not AI. We certainly cannot agree on how to test for it. We have difficulty deciding what technologies should be included within it. And we struggle with how to evaluate it.

Even so, we are looking at a future in which intelligent technologies are becoming commonplace.

With that in mind, we propose an approach to viewing machine intelligence from the perspective of its functional components. Rather than argue about the technologies behind them, the focus should be on the functional elements that make up intelligence.  By stepping away from how these elements are implemented, we can talk about what they are and their roles within larger systems.

 

 

Also see this article, which contains the graphic below:

 

 

 

From DSC:
These graphics are helpful to me, as they increase my understanding of some of the complexities involved within the realm of artificial intelligence.

 

 

 


Also relevant/see:

 

 

 

GE’s Sam Murley scopes out the state of AR and what’s next — from thearea.org

Excerpt (emphasis DSC):

AREA: How would you describe the opportunity for Augmented Reality in 2017?
SAM MURLEY: I think it’s huge — almost unprecedented — and I believe the tipping point will happen sometime this year. This tipping point has been primed over the past 12 to 18 months with large investments in new startups, successful pilots in the enterprise, and increasing business opportunities for providers and integrators of Augmented Reality. During this time, we have witnessed examples of proven implementations – small scale pilots, larger scale pilots, and companies rolling out AR in production — and we should expect this to continue to increase in 2017. You can also expect to see continued growth of assisted reality devices, scalable for industrial use cases such as manufacturing, industrial, and services industries as well as new adoption of mixed reality and augmented reality devices, spatially-aware and consumer focused for automotive, consumer, retail, gaming, and education use cases. We’ll see new software providers emerge, existing companies taking the lead, key improvements in smart eyewear optics and usability, and a few strategic partnerships will probably form.

AREA: Do you have visibility into all the different AR pilots or programs that are going on at GE?
SAM MURLEY:

At the 2016 GE Minds + Machines conference, our Vice President of GE Software Research, Colin Parris, showed off how the Microsoft HoloLens could help the company “talk” to machines and service malfunctioning equipment. It was a perfect example of how Augmented Reality will change the future of work, giving our customers the ability to talk directly to a Digital Twin — a virtual model of that physical asset — and ask it questions about recent performance, anomalies, potential issues and receive answers back using natural language. We will see Digital Twins of many assets, from jet engines to compressors. Digital Twins are powerful – they allow tweaking and changing aspects of your asset in order to see how it will perform, prior to deploying in the field. GE’s Predix, the operating system for the industrial Internet, makes this cutting-edge methodology possible. “What you saw was an example of the human mind working with the mind of a machine,” said Parris. With Augmented Reality, we are able to empower the workforce with tools that increase productivity, reduce downtime, and tap into the Digital Thread and Predix. With Artificial Intelligence and Machine Learning, Augmented Reality quickly allows language to be the next interface between the Connected Workforce and the Internet of Things (IoT). No keyboard or screen needed.
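The "ask a Digital Twin questions" idea can be pictured as a software object that mirrors a physical asset's sensor stream and answers simple queries about it. This toy sketch is illustrative only — the class, fields, thresholds, and keyword matching are all invented, and it is not GE's Predix API:

```python
# Toy "digital twin": a virtual model that mirrors a physical asset's
# sensor readings and answers simple questions about recent performance.

class DigitalTwin:
    def __init__(self, asset_id, temp_limit_c=90.0):
        self.asset_id = asset_id
        self.temp_limit_c = temp_limit_c
        self.readings = []  # (hour, temperature_c) pairs streamed from sensors

    def ingest(self, hour, temperature_c):
        self.readings.append((hour, temperature_c))

    def anomalies(self):
        """Hours where temperature exceeded the safe limit."""
        return [h for h, t in self.readings if t > self.temp_limit_c]

    def answer(self, question):
        # Stand-in for a natural-language interface: keyword matching only.
        if "anomal" in question.lower():
            hours = self.anomalies()
            if hours:
                return f"{self.asset_id}: limit exceeded at hours {hours}."
            return f"{self.asset_id}: no anomalies in recent readings."
        return "Question not understood."

twin = DigitalTwin("compressor-7")
for hour, temp in [(1, 72.5), (2, 93.1), (3, 88.0)]:
    twin.ingest(hour, temp)
print(twin.answer("Any anomalies recently?"))
```

The point of the pattern is that tweaks and what-if questions run against the model, not the physical machine, so they are cheap and safe to try before anything is deployed in the field.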

 

 

From DSC:
I also believe that the tipping point will happen sometime this year.  I hadn’t heard of the concept of a Digital Twin — but I sense that we’ll be hearing that more often in the future.

 

 

 

With Artificial Intelligence and Machine Learning, Augmented Reality quickly allows language to be the next interface between the Connected Workforce and the Internet of Things (IoT). No keyboard or screen needed.

 

 

 


From DSC:
I then saw the concept of the “Digital Twin” again out at:

  • Breaking through the screen — from medium.com by Evan Helda
    Excerpt (emphasis DSC):
    Within the world of the enterprise, this concept of a simultaneous existence of “things” virtually and physically has been around for a while. It is known as the “digital twin”, or sometimes referred to as the “digital tapestry” (will cover this topic in a later post). Well, thanks to the internet and ubiquity of sensors today, almost every “thing” now has a “digital twin”, if you will. These “things” will embody this co-existence, existing in a sense virtually and physically, and all connected in a myriad of ways. The outcome at maturity is something we’ve yet to fully comprehend.

 

 

 

Chatbots: The next big thing — from dw.com
Excerpt:

More and more European developers are discovering the potential of chatbots. These mini-programs interact automatically with users and could be particularly useful in areas like online shopping and news delivery. The potential of chatbots is diverse. These tiny programs can do everything from recognizing customers’ tastes to relaying the latest weather forecast. Berlin start-up Spectrm is currently devising bots that deliver customized news. Users can contact the bot via Facebook Messenger, and receive updates on topics that interest them within just a few seconds.

 

 

MyPrivateTutor releases chatbot for finding tutors — from digitaljournal.com
MyPrivateTutor, based in Kolkata, matches tutors to students using proprietary machine learning algorithms

Excerpt:

MyPrivateTutor (www.myprivatetutor.com), an online marketplace for tutors, has released a chatbot for helping students and parents find tutors, trainers, coaching classes and training institutes near them. “Using artificial intelligence, the chatbot helps us reach a wider segment of users who are still not comfortable navigating websites and apps but are quite savvy with messaging apps,” said Sandip Kar, co-founder & CEO of MyPrivateTutor.

 

 

Story idea: Covering the world of chatbots — from businessjournalism.org by Susan Johnston Taylor

Excerpt:

Chatbots, computer programs designed to converse with humans, can perform all sorts of activities. They can help users book a vacation, order a pizza, negotiate with Comcast or even communicate with POTUS. Instead of calling or emailing a representative at the company, consumers chat with a robot that uses artificial intelligence to simulate natural conversation. A growing number of startups and more established companies now use them to interact with users via Facebook Messenger, SMS, chat-specific apps such as Kik or the company’s own site.

To cover this emerging business story, reporters can seek out companies in their area that use chatbots, or find local tech firms that are building them. Local universities may have professors or other experts available who can provide big-picture context, too. (Expertise Finder can help you identify professors and their specific areas of study.)

 

 

How chatbots are addressing summer melt for colleges — from ecampusnews.com

Excerpt:

AdmitHub, an edtech startup which builds conversational artificial intelligence (AI) chatbots to guide students on the path to and through college, has raised $2.95 million in seed funding.

 

 

Why higher education chatbots will take over university mobile apps — from blog.admithub.com by Kirk Daulerio

Excerpt (emphasis DSC):

Chatbots are the new apps and websites combined
Chatbots are simple, easy to use, and present zero friction. They exist on the channels that people are most familiar with like Messenger, Twitter, SMS text message, Kik, and expanding onto other messaging applications. Unlike apps, bots don’t take up space, users don’t have to take time to get familiar with a new user interface, and bots will give you an instant reply. The biggest difference with chatbots compared to apps and websites is that they use language as the main interface. Websites and apps have to be searched and clicked, while bots and people use language, the most natural interface, to communicate and inform.
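The "language as the main interface" point can be made concrete in a few lines: instead of navigating menus, the user types a sentence and the program maps it to an intent and a reply. The sketch below uses naive keyword matching and invented intents; real chatbots use trained NLP models, but the interaction shape is the same:

```python
# Minimal keyword-based chatbot: maps free-form text to canned intents.
# Real systems use trained NLP models; this only shows the interaction shape.

INTENTS = {
    "tutor":   "I can help you find a tutor. What subject are you studying?",
    "weather": "Today's forecast: partly cloudy, high of 18 degrees.",
    "news":    "Here are today's top headlines on your chosen topics.",
}

def reply(message):
    """Return the canned response for the first matching keyword."""
    text = message.lower()
    for keyword, response in INTENTS.items():
        if keyword in text:
            return response
    return "Sorry, I didn't catch that. Could you rephrase?"

print(reply("Can you find me a math tutor?"))
```

Notice there is nothing to learn before using it: no screens to search, no buttons to click — which is exactly the friction argument the excerpt is making.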

 

 


From DSC:
I think messaging-based chatbots will definitely continue to grow in usage — in numerous industries, including higher education. But I also think that the human voice — working in conjunction with technologies that provide natural language processing (NLP) capabilities — will play an increasingly larger role in how we interface with our devices. Whether it’s via a typed/textual message or whether it’s via a command or a query relayed by the human voice, working with bots needs to be on our radars. These conversational messaging agents are likely to be around for a while.

 


 

Addendum:

 

 

 

Robots will take jobs, but not as fast as some fear, new report says — from nytimes.com by Steve Lohr

 

Excerpt:

The robots are coming, but the march of automation will displace jobs more gradually than some alarming forecasts suggest.

A measured pace is likely because what is technically possible is only one factor in determining how quickly new technology is adopted, according to a new study by the McKinsey Global Institute. Other crucial ingredients include economics, labor markets, regulations and social attitudes.

The report, which was released Thursday, breaks jobs down by work tasks — more than 2,000 activities across 800 occupations, from stock clerk to company boss. The institute, the research arm of the consulting firm McKinsey & Company, concludes that many tasks can be automated and that most jobs have activities ripe for automation. But the near-term impact, the report says, will be to transform work more than to eliminate jobs.
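The task-level framing described above can be illustrated with a toy calculation: score each occupation's constituent activities for automatability, then compute the automatable share per occupation. The occupations, activities, and judgments below are invented for illustration and are not McKinsey's data:

```python
# Toy illustration of task-level automation analysis: for each occupation,
# compute the share of its work activities judged automatable.
# (Occupations, activities, and True/False judgments are invented.)

occupations = {
    "stock clerk": {"scan inventory": True, "restock shelves": True,
                    "assist customers": False},
    "company boss": {"approve budgets": False, "set strategy": False,
                     "review reports": True},
}

for job, tasks in occupations.items():
    share = sum(tasks.values()) / len(tasks)  # True counts as 1
    print(f"{job}: {share:.0%} of activities automatable")
```

Even this toy version shows why the report's conclusion is "transform work" rather than "eliminate jobs": most occupations end up with a partial share, meaning some activities automate while the job itself persists in altered form.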

 

So while further automation is inevitable, McKinsey’s research suggests that it will be a relentless advance rather than an economic tidal wave.

 

 

Harnessing automation for a future that works — from mckinsey.com by James Manyika, Michael Chui, Mehdi Miremadi, Jacques Bughin, Katy George, Paul Willmott, and Martin Dewhurst
Automation is happening, and it will bring substantial benefits to businesses and economies worldwide, but it won’t arrive overnight. A new McKinsey Global Institute report finds realizing automation’s full potential requires people and technology to work hand in hand.

Excerpt:

Recent developments in robotics, artificial intelligence, and machine learning have put us on the cusp of a new automation age. Robots and computers can not only perform a range of routine physical work activities better and more cheaply than humans, but they are also increasingly capable of accomplishing activities that include cognitive capabilities once considered too difficult to automate successfully, such as making tacit judgments, sensing emotion, or even driving. Automation will change the daily work activities of everyone, from miners and landscapers to commercial bankers, fashion designers, welders, and CEOs. But how quickly will these automation technologies become a reality in the workplace? And what will their impact be on employment and productivity in the global economy?

The McKinsey Global Institute has been conducting an ongoing research program on automation technologies and their potential effects. A new MGI report, A future that works: Automation, employment, and productivity, highlights several key findings.

 

 



Also related/see:

This Japanese Company Is Replacing Its Staff With Artificial Intelligence — from fortune.com by Kevin Lui

Excerpt:

The year of AI has well and truly begun, it seems. An insurance company in Japan announced that it will lay off more than 30 employees and replace them with an artificial intelligence system.  The technology will be based on IBM’s Watson Explorer, which is described as having “cognitive technology that can think like a human,” reports the Guardian. Japan’s Fukoku Mutual Life Insurance said the new system will take over from its human counterparts by calculating policy payouts. The company said it hopes the AI will be 30% more productive and aims to see investment costs recouped within two years. Fukoku Mutual Life said it expects the $1.73 million smart system—which costs around $129,000 each year to maintain—to save the company about $1.21 million each year. The 34 staff members will officially be replaced in March.

 


Also from “The Internet of Everything” report in 2016 by BI Intelligence:

 

 


 

A Darker Theme in Obama’s Farewell: Automation Can Divide Us — from nytimes.com by Claire Cain Miller

Excerpt:

Underneath the nostalgia and hope in President Obama’s farewell address Tuesday night was a darker theme: the struggle to help the people on the losing end of technological change.

“The next wave of economic dislocations won’t come from overseas,” Mr. Obama said. “It will come from the relentless pace of automation that makes a lot of good, middle-class jobs obsolete.”


Artificial Intelligence, Automation, and the Economy — from whitehouse.gov by Kristin Lee

Summary:
[On 12/20/16], the White House released a new report on the ways that artificial intelligence will transform our economy over the coming years and decades.

Although it is difficult to predict these economic effects precisely, the report suggests that policymakers should prepare for five primary economic effects:

  • Positive contributions to aggregate productivity growth;
  • Changes in the skills demanded by the job market, including greater demand for higher-level technical skills;
  • Uneven distribution of impact, across sectors, wage levels, education levels, job types, and locations;
  • Churning of the job market as some jobs disappear while others are created; and
  • The loss of jobs for some workers in the short-run, and possibly longer depending on policy responses.


 

Sydney – The Opera House has joined forces with Samsung to open a new digital lounge that encourages engagement with the space. — from lsnglobal.com by Rhiannon McGregor

 

The Lounge, enabled by Samsung on November 8, 2016 in Sydney, Australia. (Photo by Anna Kucera)

 

 


 

 

Also see:

The Lounge enabled by Samsung
Open day and night, The Lounge enabled by Samsung is a new place in the heart of the Opera House where people can sit and enjoy art and culture through the latest technology. The most recent in a series of future-facing projects enabled by Sydney Opera House’s Principal Partner, Samsung, the new visitor lounge features stylish, comfortable seating, as well as interactive displays and exclusive digital content, including:

  • The Sails – a virtual-reality experience of what it’s like to stand atop the sails of Australia’s most famous building, brought to you via Samsung Gear VR;
  • Digital artwork – a specially commissioned video exploration of the Opera House and its stories, produced by creative director Sam Doust. The artwork has been themed to match the time of day and is the first deployment of Samsung’s latest Smart LED Display panel technology in Australia; and
  • Google Cultural Institute – available to view on Samsung Galaxy View and Galaxy Tab S2 tablets, the digital collection features 50 online exhibits that tell the story of the Opera House’s past, present and future through rare archival photography, celebrated performances, early architectural drawings and other historical documents, little-known interviews and Street View imagery.

 

 

 

Six trends that will make business more intelligent in 2017 — from itproportal.com by Daniel Fallmann
The business world is in the midst of a digital transformation that is quickly separating the wheat from the chaff.

 

Also see:

 

 

 

 

 

 
© 2025 | Daniel Christian