KPMG & Microsoft Announce New “Blockchain Nodes” — from finance.yahoo.com

Excerpt:

NEW YORK, Feb. 15, 2017 /PRNewswire/ — KPMG International and Microsoft Corp. have announced the launch of joint Blockchain Nodes, which are designed to create and demonstrate use cases that apply blockchain technology to business propositions and processes.  The first joint Blockchain Nodes are in Frankfurt and Singapore, with future plans for a location in New York.

The KPMG and Microsoft Blockchain Nodes – innovation workspaces – will expand on a global alliance that combines Microsoft’s technical expertise with KPMG’s deep industry and blockchain application knowledge, together with strong connections to the start-up and developer communities.

“The Blockchain Nodes will play a critical role in identifying new applications and use cases that blockchain can address,” said Eamonn Maguire, global and US leader for KPMG’s Digital Ledger Services. “They will enable us to work directly with clients to discover and test ideas based on market insights, creating and implementing prototype solutions that use this innovative technology.”

 

 

IBM Brings Machine Learning to the Private Cloud — from finance.yahoo.com
First to automate creation and training of learning analytic models at the source of high value corporate data, starting with IBM z System Mainframe

Excerpt:

ARMONK, N.Y., Feb. 15, 2017 /PRNewswire/ — IBM (NYSE: IBM) today announced IBM Machine Learning, the first cognitive platform for continuously creating, training and deploying a high volume of analytic models in the private cloud at the source of vast corporate data stores. Even using the most advanced techniques, data scientists – in shortest supply among today’s IT skills – might spend days or weeks developing, testing and retooling even a single analytic model one step at a time.

IBM has extracted the core machine learning technology from IBM Watson and will initially make it available where much of the world’s enterprise data resides: the z System mainframe, the operational core of global organizations where billions of daily transactions are processed by banks, retailers, insurers, transportation firms and governments.

IBM Machine Learning allows data scientists to automate the creation, training and deployment of operational analytic models that will support…
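
To make the announcement's central idea concrete (automating the create/train/deploy loop rather than hand-tuning one model at a time), here is a minimal sketch of that kind of workflow using scikit-learn. The dataset and candidate models are illustrative assumptions on my part; this is not IBM's product or API.

```python
# A rough sketch (not IBM's actual platform) of the workflow the announcement
# describes: automatically creating, training, and selecting among many
# candidate analytic models instead of hand-tuning one at a time.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)  # stand-in for "corporate data"

candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

# The "automated" step: train and score every candidate, keep the best.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best_name = max(scores, key=scores.get)
best_model = candidates[best_name].fit(X, y)   # retrain the winner on all data

print(f"deploying {best_name} (cv accuracy {scores[best_name]:.3f})")
```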

 

 

Amazon Echo and Google Home may soon be able to make voice calls — from finance.yahoo.com and Business Insider by Jeff Dunn

Excerpt:

The Amazon Echo and Google Home could be used to make and receive phone calls later this year, according to a new report from The Wall Street Journal’s Ryan Knutson and Laura Stevens. Citing “people familiar with the matter,” the report says that both Amazon and Google are looking to activate the feature, but that their attempts have been slowed by privacy and regulatory concerns. Amazon has reportedly been working on Echo-specific voice calls since 2015, but has been held up by “employee turnover” as well.

 

 

Amazon unveils Chime, looks to reinvent the conference call with new Skype and GoToMeeting competitor — from geekwire.com by John Cook

Excerpt:

Amazon is looking to transform just about every industry.

Now, the Seattle tech juggernaut wants to reinvent how you conduct meetings and conference calls.

Amazon Web Services today unveiled Chime, a new service that it says takes the “frustration out of meetings” by delivering video, voice, chat, and screen sharing. Instead of forcing participants to call one another on a dedicated line, Amazon Chime automatically calls all participants at the start of a meeting, so “joining a meeting is as easy as clicking a button in the app, no PIN required,” the company said in a press release. Chime also shows a visual roster of participants, and allows participants to pinpoint who exactly on the call is creating annoying background noise.

 

 

The Most Innovative Companies of 2017 — from fastcompany.com

Excerpt:

This year marks the 10th edition of the Fast Company World’s Most Innovative Companies ranking. Our reporting team sifts through thousands of enterprises each year, searching for those that tap both heartstrings and purse strings and use the engine of commerce to make a difference in the world. Impact is among our key criteria.

 

 

 

Speaking of innovation, this next article looks at innovation within the world of higher education:

Crafting an Innovation Landscape — from er.educause.edu by Shirley Dugdale and Brian Strawn

Key Takeaways

  • As efforts to stimulate innovation spring up across campuses, institutions need a comprehensive planning framework for integrated planning of initiatives to support innovation.
  • When the campus is viewed as an Innovation Landscape, settings for collaborative creative activity — both physical and virtual — infuse the campus fabric and become part of the daily experience of their users.
  • The Innovation Landscape Framework proposed here serves as a tool that can help coordinate physical planning with organizational initiatives, engage a wide range of stakeholders, and enable a culture of innovation across campus.

 

 

 

No hype, just fact: What artificial intelligence is – in simple business terms — from zdnet.com by Michael Krigsman
AI has become one of the great, meaningless buzzwords of our time. In this video, the Chief Data Scientist of Dun and Bradstreet explains AI in clear business terms.

Excerpt:

How do terms like machine learning, AI, and cognitive computing relate to one another?
They’re not synonymous. So, cognitive computing is very different than machine learning, and I will call both of them a type of AI. Just to try and describe those three. So, I would say artificial intelligence is all of that stuff I just described. It’s a collection of things designed to either mimic behavior, mimic thinking, behave intelligently, behave rationally, behave empathetically. Those are the systems and processes that are in the collection of soup that we call artificial intelligence.

Cognitive computing is primarily an IBM term. It’s a phenomenal approach to curating massive amounts of information that can be ingested into what’s called the cognitive stack. And then to be able to create connections among all of the ingested material, so that the user can discover a particular problem, or a particular question can be explored that hasn’t been anticipated.

Machine learning is almost the opposite of that. Where you have a goal function, you have something very specific that you try and define in the data. And, the machine learning will look at lots of disparate data, and try to create proximity to this goal function: basically, try to find what you told it to look for. Typically, you do that by either training the system, or by watching it behave, and turning knobs and buttons, so there’s unsupervised, supervised learning. And that’s very, very different than cognitive computing.
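
To ground the "goal function" language above, here is a tiny, self-contained sketch of supervised learning: we state a goal (squared error against labeled examples) and let training "turn the knobs" until the model finds what we told it to look for. The data is made up.

```python
# A minimal illustration of the "goal function" idea from the quote above.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (x, y) examples

def goal(w):
    # the goal function: how far the predictions w*x are from the labels y
    return sum((w * x - y) ** 2 for x, y in data)

# crude "training": turn the knob w toward lower goal values (gradient descent)
w = 0.0
for _ in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in data)
    w -= 0.01 * grad

print(f"learned slope w = {w:.2f}, goal value = {goal(w):.3f}")
```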

 

 

IBM to Train 25 Million Africans for Free to Build Workforce — by Loni Prinsloo
  • Tech giant seeking to bring, keep digital jobs in Africa
  • Africa to have world’s largest workforce by 2040, IBM projects

Excerpt:

International Business Machines Corp. is ramping up its digital-skills training program to accommodate as many as 25 million Africans in the next five years, looking toward building a future workforce on the continent. The U.S. tech giant plans to make an initial investment of 945 million rand ($70 million) to roll out the training initiative in South Africa…

 

Also see:

IBM Unveils IT Learning Platform for African Youth — from investopedia.com by Tim Brugger

Excerpt (emphasis DSC):

Responding to concerns that artificial intelligence (A.I.) in the workplace will lead to companies laying off employees and shrinking their work forces, IBM (NYSE: IBM) CEO Ginni Rometty said in an interview with CNBC last month that A.I. wouldn’t replace humans, but rather open the door to “new collar” employment opportunities.

IBM describes new collar jobs as “careers that do not always require a four-year college degree but rather sought-after skills in cybersecurity, data science, artificial intelligence, cloud, and much more.”

In keeping with IBM’s promise to devote time and resources to preparing tomorrow’s new collar workers for those careers, it has announced a new “Digital-Nation Africa” initiative. IBM has committed $70 million to its cloud-based learning platform that will provide free skills development to as many as 25 million young people in Africa over the next five years.

The platform will include online learning opportunities for everything from basic IT skills to advanced training in social engagement, digital privacy, and cyber protection. IBM added that its A.I. computing wonder Watson will be used to analyze data from the online platform, adapt it, and help direct students to appropriate courses, as well as refine the curriculum to better suit specific needs.
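
Purely as a thought experiment (this is not Watson's API), the "direct students to appropriate courses" idea might reduce to something like the following content-based matcher. The course names, tags, and student profile below are invented placeholders.

```python
# Hypothetical sketch of using learner data to suggest appropriate courses.
courses = {
    "Intro to IT": {"it-basics", "hardware", "networking"},
    "Cybersecurity Fundamentals": {"it-basics", "security", "networking"},
    "Data Science 101": {"statistics", "python", "data"},
}

def recommend(interests, completed):
    # rank courses the student hasn't taken by overlap with stated interests
    return sorted(
        (c for c in courses if c not in completed),
        key=lambda c: len(courses[c] & interests),
        reverse=True,
    )

print(recommend(interests={"security", "networking"},
                completed={"Intro to IT"}))
```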

 

 

From DSC:
That last part, about Watson being used to personalize learning and direct students to appropriate courses, is one of the elements that I see in the Learning from the Living [Class]Room vision that I’ve been pulse-checking for the last several years. AI/cognitive computing will most assuredly be a part of our learning ecosystems in the future.  Amazon is currently building their own platform that adds 100 skills each day — and has 1,000 people working on creating skills for Alexa.  This type of thing isn’t going away any time soon. Rather, I’d say that we haven’t seen anything yet!

 

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

 

And Amazon has doubled down to develop Alexa’s “skills,” which are discrete voice-based applications that allow the system to carry out specific tasks (like ordering pizza for example). At launch, Alexa had just 20 skills, which has reportedly jumped to 5,200 today with the company adding about 100 skills per day.

In fact, Bezos has said, “We’ve been working behind the scenes for the last four years, we have more than 1,000 people working on Alexa and the Echo ecosystem … It’s just the tip of the iceberg.” Just last week, Amazon launched a new website to help brands and developers create more skills for Alexa.

Source

 

 

Also see:

 

“We are trying to make education more personalised and cognitive through this partnership by creating a technology-driven personalised learning and tutoring,” Lula Mohanty, Vice President, Services at IBM, told ET. IBM will also use its cognitive technology platform, IBM Watson, as part of the partnership.

“We will use the IBM Watson data cloud as part of the deal, and access Watson education insight services, Watson library, student information insights — these are big data sets that have been created through collaboration and inputs with many universities. On top of this, we apply big data analytics,” Mohanty added.

Source

 

 


 

Also see:

  • Most People in Education are Just Looking for Faster Horses, But the Automobile is Coming — from etale.org by Bernard Bull
    Excerpt:
    Most people in education are looking for faster horses. It is too challenging, troubling, or beyond people’s sense of what is possible to really imagine a completely different way in which education happens in the world. That doesn’t mean, however, that the educational equivalent of the automobile is not on its way. I am confident that it is very much on its way. It might even arrive earlier than even the futurists expect. Consider the following prediction.

 

 

Excerpt from Amazon fumbles earnings amidst high expectations (emphasis DSC):

Aside from AWS, Amazon Alexa-enabled devices were the top-selling products across all categories on Amazon.com throughout the holiday season and the company is reporting that Echo family sales are up over 9x compared to last season. Amazon aims to brand Alexa as a platform, something that has helped the product to gain capabilities faster than its competition. Developers and corporates released 4,000 new skills for the voice assistant in just the last quarter.

 

 

Alexa got 4,000 new skills in just the last quarter!

From DSC:
What are the teaching & learning ramifications of this?

By the way, I’m not saying that professors, teachers, and trainers should run for the hills (i.e., that they’ll be replaced by AI-based tools). Rather, I’d suggest that we not only put this type of thing on our radar, but also begin to actively experiment with such technologies to see whether they can help us do some of the heavy lifting for students who are learning about new topics.
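
For anyone who wants to start experimenting along these lines, a custom Alexa skill is a reasonable first project. Below is a minimal sketch of an AWS Lambda handler for a quiz-style skill; the request/response envelope follows the Alexa Skills Kit JSON format, while the intent name, slot name, and quiz content are invented for illustration.

```python
# Bare-bones Lambda handler for a hypothetical quiz skill for learners.
QUESTION = "What does the acronym AI stand for?"
ANSWER = "artificial intelligence"

def speak(text, end=False):
    # wrap plain text in the Alexa Skills Kit response envelope
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end,
        },
    }

def handler(event, context):
    req = event["request"]
    if req["type"] == "LaunchRequest":
        return speak(f"Welcome to the study helper. {QUESTION}")
    if req["type"] == "IntentRequest" and req["intent"]["name"] == "QuizIntent":
        # "Answer" is a made-up slot; a real skill defines it in its
        # interaction model and should guard against a missing value
        said = req["intent"]["slots"]["Answer"]["value"].lower()
        correct = ANSWER in said
        return speak("Correct!" if correct else f"Not quite. {QUESTION}",
                     end=correct)
    return speak("Goodbye.", end=True)
```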

 
 

Here’s how Google made VR history and got its first Oscar nom — from inverse.com by Victor Fuste
Google’s short film ‘Pearl’ marks a major moment in VR history. 

Excerpt:

The team at Google Spotlight Stories made history on Wednesday, as its short film Pearl became the first virtual reality project to be nominated for an Academy Award. But instead of serving as a capstone, the Oscar nod is just a nice moment at the beginning of the Spotlight team’s plan for the future of storytelling in the digital age.

Google Spotlight Stories are not exactly short films. Rather, they are interactive experiences created by the technical pioneers at Google’s Advanced Technologies and Projects (ATAP) division, and they defy expectations and conventions. Film production has in many ways been perfected, but for each Spotlight Story, the technical staff at Google uncovers new challenges to telling stories in a medium that blends together film, mobile phones, games, and virtual reality. Needless to say, it’s been an interesting road.

 

 

A world without work — from theatlantic.com by Derek Thompson (July 2015)

Excerpts:

Youngstown, U.S.A.
The end of work is still just a futuristic concept for most of the United States, but it is something like a moment in history for Youngstown, Ohio, one its residents can cite with precision: September 19, 1977.

For much of the 20th century, Youngstown’s steel mills delivered such great prosperity that the city was a model of the American dream, boasting a median income and a homeownership rate that were among the nation’s highest. But as manufacturing shifted abroad after World War  II, Youngstown steel suffered, and on that gray September afternoon in 1977, Youngstown Sheet and Tube announced the shuttering of its Campbell Works mill. Within five years, the city lost 50,000 jobs and $1.3 billion in manufacturing wages. The effect was so severe that a term was coined to describe the fallout: regional depression.

Youngstown was transformed not only by an economic disruption but also by a psychological and cultural breakdown. Depression, spousal abuse, and suicide all became much more prevalent; the caseload of the area’s mental-health center tripled within a decade. The city built four prisons in the mid-1990s—a rare growth industry. One of the few downtown construction projects of that period was a museum dedicated to the defunct steel industry.

“Youngstown’s story is America’s story, because it shows that when jobs go away, the cultural cohesion of a place is destroyed”…

“The cultural breakdown matters even more than the economic breakdown.”

But even leaving aside questions of how to distribute that wealth, the widespread disappearance of work would usher in a social transformation unlike any we’ve seen.

What may be looming is something different: an era of technological unemployment, in which computer scientists and software engineers essentially invent us out of work, and the total number of jobs declines steadily and permanently.

After 300 years of people crying wolf, there are now three broad reasons to take seriously the argument that the beast is at the door: the ongoing triumph of capital over labor, the quiet demise of the working man, and the impressive dexterity of information technology.

The paradox of work is that many people hate their jobs, but they are considerably more miserable doing nothing.

Most people want to work, and are miserable when they cannot. The ills of unemployment go well beyond the loss of income; people who lose their job are more likely to suffer from mental and physical ailments. “There is a loss of status, a general malaise and demoralization, which appears somatically or psychologically or both”…

Research has shown that it is harder to recover from a long bout of joblessness than from losing a loved one or suffering a life-altering injury.

Most people do need to achieve things through, yes, work to feel a lasting sense of purpose.

When an entire area, like Youngstown, suffers from high and prolonged unemployment, problems caused by unemployment move beyond the personal sphere; widespread joblessness shatters neighborhoods and leaches away their civic spirit.

What’s more, although a universal income might replace lost wages, it would do little to preserve the social benefits of work.

“I can’t stress this enough: this isn’t just about economics; it’s psychological”…

 

 

The paradox of work is that many people hate their jobs, but they are considerably more miserable doing nothing.

 

 

From DSC:
Though I’m not saying Thompson is necessarily asserting this in his article, I don’t see a world without work as a dream. In fact, as the quote immediately before this paragraph alludes to, I think that most people would not like a life that is devoid of all work. I think work is where we can serve others, find purpose and meaning for our lives, seek to be instruments of making the world a better place, and attempt to design/create something that’s excellent.  We may miss the mark often (I know I do), but we keep trying.

 

 

 

“The world’s first smart #AugmentedReality for the Connected Home has arrived.” — from thunderclap.it

From DSC:
Note this new type of Human Computer Interaction (HCI). I think that we’ll likely be seeing much more of this sort of thing.

 

Excerpt (emphasis DSC):

How is Hayo different?
AR that connects the magical and the functional:

Unlike most AR integrations, Hayo removes the screens from smarthome use and transforms the objects and spaces around you into a set of virtual remote controls. Hayo empowers you to create experiences that have previously been limited by the technology, but now are only limited by your imagination.

Screenless IoT:
The best interface is no interface at all. Aside from the one-time setup, Hayo does not use any screens. Your real-life surfaces become the interface and you, the user, become the controls. Virtual remote controls can be placed wherever you want for whatever you need by simply using your Hayo device to take a 3D scan of your space.

Smarter AR experience:
Hayo anticipates your unique context, passive motion and gestures to create useful and more unique controls for the connected home. The Hayo system learns your behaviors and uses its AI to help meet your needs.
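
Hayo has not published a developer API, so purely as a sketch of the screenless concept described above: regions of a scanned room act as virtual remote controls, and a gesture near a region triggers the mapped IoT action. Everything below (the class, the gesture callback, the coordinates) is hypothetical.

```python
# Hypothetical model of "virtual remote controls" placed in a room scan.
from dataclasses import dataclass

@dataclass
class VirtualControl:
    name: str
    center: tuple          # (x, y, z) position in the room scan, metres
    radius: float          # activation zone around the center
    action: callable       # IoT command to fire

def dist(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

controls = [
    VirtualControl("lamp", (1.0, 0.8, 2.0), 0.3, lambda: print("lamp toggled")),
    VirtualControl("music", (3.2, 1.0, 0.5), 0.3, lambda: print("music paused")),
]

def on_gesture(position):
    # would be fired by the motion tracker with the gesture's 3D location
    for c in controls:
        if dist(position, c.center) <= c.radius:
            c.action()

on_gesture((1.1, 0.7, 2.1))  # near the lamp zone -> "lamp toggled"
```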

 

 

Also see:

 

 

A massive AI partnership is tapping civil rights and economic experts to keep AI safe — from qz.com by Dave Gershgorn

Excerpt:

When the Partnership on Artificial Intelligence to Benefit People and Society was announced in September, it was with the stated goal of educating the public on artificial intelligence, studying AI’s potential impact on the world, and establishing industry best practices. Now, how those goals will actually be achieved is becoming clearer.

This week, the Partnership brought on new members that include representatives from the American Civil Liberties Union, the MacArthur Foundation, OpenAI, the Association for the Advancement of Artificial Intelligence, Arizona State University, and the University of California, Berkeley.

The organizations themselves are not officially affiliated yet—that process is still underway—but the Partnership’s board selected these candidates based on their expertise in civil rights, economics, and open research, according to interim co-chair Eric Horvitz, who is also director of Microsoft Research. The Partnership also added Apple as a “founding member,” putting the tech giant in good company: Amazon, Microsoft, IBM, Google, and Facebook are already on board.

 

 


Also relevant/see:

Building Public Policy To Address Artificial Intelligence’s Impact — from blogs.wsj.com by Irving Wladawsky-Berger

Excerpt:

Artificial intelligence may be at a tipping point, but it’s not immune to backlash from users in the event of system mistakes or a failure to meet heightened expectations. As AI becomes increasingly used for more critical tasks, care needs to be taken by proponents to avoid unfulfilled promises as well as efforts that appear to discriminate against certain segments of society.

Two years ago, Stanford University launched the One Hundred Year Study of AI to address “how the effects of artificial intelligence will ripple through every aspect of how people work, live and play.” One of its key missions is to convene a Study Panel of experts every five years to assess the then current state of the field, as well as to explore both the technical advances and societal challenges over the next 10 to 15 years.

The first such Study Panel recently published Artificial Intelligence and Life in 2030, a report that examined the likely impact of AI on a typical North American city by the year 2030.

 

 

Apple iPhone 8 To Get 3D-Sensing Tech For Augmented-Reality Apps — from investors.com by Patrick Seitz

Excerpt:

Apple’s (AAPL) upcoming iPhone 8 smartphone will include a 3D-sensing module to enable augmented-reality applications, Rosenblatt Securities analyst Jun Zhang said Wednesday. Apple has included the 3D-sensing module in all three current prototypes of the iPhone 8, which have screen sizes of 4.7, 5.1 and 5.5 inches, he said. “We believe Apple’s 3D sensing might provide a better user experience with more applications,” Zhang said in a research report. “So far, we think 3D sensing aims to provide an improved smartphone experience with a VR/AR environment.”

Apple’s iPhone 8 is expected to have 3D-sensing tech like Lenovo’s Phab 2 Pro smartphone. (Lenovo)

 

 

AltspaceVR Education Overview

 

 

10 Prominent Developers Detail Their 2017 Predictions for The VR/AR Industry — from uploadvr.com by David Jagneaux

Excerpt:

As we look forward to 2017 then, we’ve reached out to a bunch of industry experts and insiders to get their views on where we’re headed over the next 12 months.

2016 provided hints of where Facebook, HTC, Sony, Google, and more will take their headsets in the near future, but where do the industry’s best and brightest think we’ll end up this time next year? With CES, the year’s first major event, now in the books, let’s hear from some of those who work with VR about what happens next.

We asked all of these developers the same four questions:

1) What do you think will happen to the VR/AR market in 2017?
2) What NEEDS to happen to the VR/AR market in 2017?
3) What will be the big breakthroughs and innovations of 2017?
4) Will 2017 finally be the “year of VR?”

 

 

MEL Lab’s Virtual Reality Chemistry Class — from thereisonlyr.com by Grant Greene
An immersive learning startup brings novel experiences to science education.

 

 

The MEL app turned my iPhone 6 into a virtual microscope, letting me walk through 360-degree, 3-D representations of the molecules featured in the experiment kits.

 

 

Labster releases ‘World of Science’ Simulation on Google Daydream — from labster.com by Marian Reed

Excerpt:

Labster is exploring new platforms by which students can access its laboratory simulations and is pleased to announce the release of its first Google Daydream-compatible virtual reality (VR) simulation, ‘Labster: World of Science’. This new simulation, modeled on Labster’s original ‘Lab Safety’ virtual lab, continues to incorporate scientific learning alongside a specific context, enriched by story-telling elements. The use of the Google VR platform has enabled Labster to fully immerse the student, or science enthusiast, in a wet lab that can easily be navigated with intuitive usage of Daydream’s handheld controller.

 

 

The Inside Story of Google’s Daydream, Where VR Feels Like Home — from wired.com by David Pierce

Excerpt:

Jessica Brillhart, Google’s principal VR filmmaker, has taken to calling people “visitors” rather than “viewers,” as a way of reminding herself that in VR, people aren’t watching what you’ve created. They’re living it. Which changes things.

 

 

Welcoming more devices to the Daydream-ready family — from blog.google.com by Amit Singh

Excerpt:

In November, we launched Daydream with the goal of bringing high quality, mobile VR to everyone. With the Daydream View headset and controller, and a Daydream-ready phone like the Pixel or Moto Z, you can explore new worlds, kick back in your personal VR cinema and play games that put you in the center of the action.

Daydream-ready phones are built for VR with high-resolution displays, ultra smooth graphics, and high-fidelity sensors for precise head tracking. To give you even more choices to enjoy Daydream, today we’re welcoming new devices that will soon join the Daydream-ready family.

 

 

Kessler Foundation awards virtual reality job interview program — from haptic.al by Deniz Ergürel

Excerpt:

Kessler Foundation, one of the largest public charities in the United States, is awarding a virtual reality training project to support high school students with disabilities. The foundation is providing a two-year, $485,000 Signature Employment Grant to the University of Michigan in Ann Arbor, to launch the Virtual Reality Job Interview Training program. Kessler Foundation says the VR program will allow for highly personalized role-play, with precise feedback and coaching that may be repeated as often as desired without fear or embarrassment.

 

 

Deep-water safety training goes virtual — from shell.com by Soh Chin Ong
How a visit to a shopping centre led to the use of virtual reality safety training for a new oil production project, Malikai, in the deep waters off Sabah in Malaysia.

 

 

 

ISNS students embrace learning in a world of virtual reality

Excerpt (emphasis DSC):

To give students the skills needed to thrive in an ever more tech-centred world, the International School of Nanshan Shenzhen (ISNS) is one of the world’s first educational facilities now making instruction in virtual reality (VR) and related tools a key part of the curriculum.

Building on a successful pilot programme last summer in Virtual Reality, 3D art and animation, the intention is to let students in various age groups experiment with the latest emerging technologies, while at the same time unleashing their creativity, curiosity and passion for learning.

To this end, the school has set up a special VR innovation lab, conceived as a space for exploration, design and interdisciplinary collaboration involving a number of different subject teachers.

Using relevant software and materials, students learn to create high-quality digital content and to design “experiences” for VR platforms. In this “VR Lab makerspace” – a place offering the necessary tools, resources and support – they get to apply concepts and theories learned in the classroom, develop practical skills, document their progress, and share what they have learned with classmates and other members of the tech education community. 

 

 

As a next logical step, she is also looking to develop contacts with a number of the commercial makerspaces which have sprung up in Shenzhen. The hope is that students will then be able to meet engineers working on cutting-edge innovations and understand the latest developments in software and manufacturing, including areas such as laser cutting, 3D printing, and rapid prototyping.

 

 

 

Per X Media Lab:

The authoritative CB Insights lists imminent Future Tech Trends: customized babies; personalized foods; robotic companions; 3D printed housing; solar roads; ephemeral retail; enhanced workers; lab-engineered luxury; botroots movements; microbe-made chemicals; neuro-prosthetics; instant expertise; AI ghosts. You can download the whole outstanding report here (125 pgs).

 

From DSC:
Though I’m generally pro-technology, there are several items in here that support the need for all members of society to be informed and to have some input into whether and how these technologies should be used. Prime example: customized babies.  The report discusses the genetic modification of babies: “In the future, we will choose the traits for our babies.” Veeeeery slippery ground here.

 

Below are some example screenshots:

 

 

Also see:

CBInsights — Innovation Summit

  • The New User Interface: The Challenge and Opportunities that Chatbots, Voice Interfaces and Smart Devices Present
  • Fusing the physical, digital and biological: AI’s transformation of healthcare
  • How predictive algorithms and AI will rule financial services
  • Autonomous Everything: How Connected Vehicles Will Change Mobility and Which Companies Will Own this Future
  • The Next Industrial Age: The New Revenue Sources that the Industrial Internet of Things Unlocks
  • The AI-100: 100 Artificial Intelligence Startups That You Better Know

 

 

 

The Periodic Table of AI — from ai.xprize.org by Kris Hammond

Excerpts:

This is an invitation to collaborate.  In particular, it is an invitation to collaborate in framing how we look at and develop machine intelligence. Even more specifically, it is an invitation to collaborate in the construction of a Periodic Table of AI.

Let’s be honest. Thinking about Artificial Intelligence has proven to be difficult for us.  We argue constantly about what is and is not AI.  We certainly cannot agree on how to test for it.  We have difficulty deciding what technologies should be included within it.  And we struggle with how to evaluate it.

Even so, we are looking at a future in which intelligent technologies are becoming commonplace.

With that in mind, we propose an approach to viewing machine intelligence from the perspective of its functional components. Rather than argue about the technologies behind them, the focus should be on the functional elements that make up intelligence.  By stepping away from how these elements are implemented, we can talk about what they are and their roles within larger systems.
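
A rough way to picture Hammond's framing: name the functional elements of intelligence and compose them, without committing to how each element is implemented. The element names and stubs below are illustrative, not the actual Periodic Table of AI.

```python
# Sketch: a system built by composing named functional elements of
# intelligence, independent of each element's underlying technology.
def speech_to_text(audio):     # a perception element (stub)
    return audio               # pretend the audio is already text

def classify_intent(text):     # a reasoning element (stub)
    return "weather_query" if "weather" in text else "unknown"

def generate_reply(intent):    # a generation element (stub)
    return {"weather_query": "It looks sunny today.",
            "unknown": "Sorry, I didn't catch that."}[intent]

def pipeline(audio):
    # the system is the composition of elements, not any single algorithm
    return generate_reply(classify_intent(speech_to_text(audio)))

print(pipeline("what's the weather like"))
```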

 

 

Also see this article, which contains the graphic below:

 

 

 

From DSC:
These graphics are helpful to me, as they increase my understanding of some of the complexities involved within the realm of artificial intelligence.

 

 

 


Also relevant/see:

 

 

 

GE’s Sam Murley scopes out the state of AR and what’s next — from thearea.org

Excerpt (emphasis DSC):

AREA: How would you describe the opportunity for Augmented Reality in 2017?
SAM MURLEY: I think it’s huge — almost unprecedented — and I believe the tipping point will happen sometime this year. This tipping point has been primed over the past 12 to 18 months with large investments in new startups, successful pilots in the enterprise, and increasing business opportunities for providers and integrators of Augmented Reality. During this time, we have witnessed examples of proven implementations – small scale pilots, larger scale pilots, and companies rolling out AR in production — and we should expect this to continue to increase in 2017. You can also expect to see continued growth of assisted reality devices, scalable for industrial use cases such as manufacturing, industrial, and services industries as well as new adoption of mixed reality and augmented reality devices, spatially-aware and consumer focused for automotive, consumer, retail, gaming, and education use cases. We’ll see new software providers emerge, existing companies taking the lead, key improvements in smart eyewear optics and usability, and a few strategic partnerships will probably form.

AREA: Do you have visibility into all the different AR pilots or programs that are going on at GE?
SAM MURLEY:

At the 2016 GE Minds + Machines conference, our Vice President of GE Software Research, Colin Parris, showed off how the Microsoft HoloLens could help the company “talk” to machines and service malfunctioning equipment. It was a perfect example of how Augmented Reality will change the future of work, giving our customers the ability to talk directly to a Digital Twin — a virtual model of that physical asset — and ask it questions about recent performance, anomalies, potential issues and receive answers back using natural language. We will see Digital Twins of many assets, from jet engines to compressors. Digital Twins are powerful – they allow tweaking and changing aspects of your asset in order to see how it will perform, prior to deploying in the field. GE’s Predix, the operating system for the industrial Internet, makes this cutting-edge methodology possible. “What you saw was an example of the human mind working with the mind of a machine,” said Parris. With Augmented Reality, we are able to empower the workforce with tools that increase productivity, reduce downtime, and tap into the Digital Thread and Predix. With Artificial Intelligence and Machine Learning, Augmented Reality quickly allows language to be the next interface between the Connected Workforce and the Internet of Things (IoT). No keyboard or screen needed.
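
The "talk to a Digital Twin" idea can be sketched in a few lines: a virtual object mirrors an asset's telemetry and answers questions about it. The class below is a toy illustration of the concept, not GE Predix code; the keyword matcher merely stands in for the natural-language layer, and the asset and readings are invented.

```python
# Toy sketch of a digital twin: a virtual mirror of a physical asset
# that can be queried about its recent telemetry.
class DigitalTwin:
    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.readings = []          # telemetry mirrored from the real asset

    def ingest(self, temperature_c):
        self.readings.append(temperature_c)

    def ask(self, question):
        # crude keyword matching stands in for natural-language understanding
        q = question.lower()
        if "temperature" in q and self.readings:
            return f"Latest temperature: {self.readings[-1]} C"
        if "anomal" in q and self.readings:
            avg = sum(self.readings) / len(self.readings)
            hot = [t for t in self.readings if t > avg * 1.1]
            return f"{len(hot)} reading(s) look anomalous"
        return "I don't have an answer for that yet."

twin = DigitalTwin("jet-engine-42")
for t in (620, 615, 790, 618):
    twin.ingest(t)
print(twin.ask("Any anomalies in recent performance?"))
```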

 

 

From DSC:
I also believe that the tipping point will happen sometime this year.  I hadn’t heard of the concept of a Digital Twin — but I sense that we’ll be hearing that more often in the future.

 

 

 

With Artificial Intelligence and Machine Learning, Augmented Reality quickly allows language to be the next interface between the Connected Workforce and the Internet of Things (IoT). No keyboard or screen needed.

 

 

 


From DSC:
I then saw the concept of the “Digital Twin” again out at:

  • Breaking through the screen — from medium.com by Evan Helda
    Excerpt (emphasis DSC):
    Within the world of the enterprise, this concept of a simultaneous existence of “things” virtually and physically has been around for a while. It is known as the “digital twin”, or sometimes referred to as the “digital tapestry” (will cover this topic in a later post). Well, thanks to the internet and ubiquity of sensors today, almost every “thing” now has a “digital twin”, if you will. These “things” will embody this co-existence, existing in a sense virtually and physically, and all connected in a myriad of ways. The outcome at maturity is something we’ve yet to fully comprehend.

 

 

 