The Dark Secret at the Heart of AI — from technologyreview.com by Will Knight
No one really knows how the most advanced algorithms do what they do. That could be a problem.

Excerpt:

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.

 

“Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

 


This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time.

 

 

 

From DSC:
The recent pieces below made me once again reflect on the massive changes that are quickly approaching — and in some cases are already here — for a variety of nations throughout the world.

They caused me to reflect on:

  • What the potential ramifications for higher education might be, given the changes just starting to take place in the workplace due to artificial intelligence (i.e., the increasing use of algorithms, machine learning, and deep learning, etc.), automation, & robotics
  • The need for people to reinvent themselves quickly throughout their careers (if we can still call them careers)
  • How we, as a nation, should prepare for these massive changes so that soaring inequality and unemployment don’t lead to civil unrest

As found in the April 9th, 2017 edition of our local newspaper here:

When even our local newspaper is picking up on this trend, you know it is real and has some significance to it.

 

Then, as I was listening to the radio a day or two after seeing the above article, I heard another related piece on NPR: a journalist is traveling across the country, trying to identify “robot-safe” jobs.  Here’s the feature on this from Marketplace.org

 

 

What changes do institutions of traditional higher education
immediately need to begin planning for? Initiating?

What changes should we plan for, and begin to initiate,
in the way(s) that we accredit new programs?

 

 

Keywords/ideas that come to my mind:

  • Change — to society, to people, to higher ed, to the workplace
  • Pace of technological change — no longer linear, but exponential
  • Career development
  • Staying relevant — as institutions, as individuals in the workplace
  • Reinventing ourselves over time — and having to do so quickly
  • Adapting, being nimble, willing to innovate — as institutions, as individuals
  • Game-changing environment
  • Lifelong learning — higher ed needs to put more emphasis on microlearning, heutagogy, and delivering constant, up-to-date streams of content and learning experiences. This could happen via smaller learning hubs, some of them makeshift hubs operating in locations that these institutions don’t even own…like your local Starbucks.
  • If we don’t get this right, there could be major civil unrest as inequality and unemployment soar
  • Traditional institutions of higher education have not been nearly as responsive to change as they have needed to be; this opens the door to alternatives. There’s a limited (and closing) window of time left to become more nimble and responsive before these alternatives majorly disrupt the current world of higher education.

 

 

 



Addendum from the corporate world (emphasis DSC):



 

From The Impact 2017 Conference:

The Role of HR in the Future of Work – A Town Hall

  • Josh Bersin, Principal and Founder, Bersin by Deloitte, Deloitte Consulting LLP
  • Nicola Vogel, Global Senior HR Director, Danfoss
  • Frank Møllerop, Chief Executive Officer, Questback
  • David Mallon, Head of Research, Bersin by Deloitte, Deloitte Consulting LLP

Massive changes spurred by new technologies such as artificial intelligence, mobile platforms, sensors and social collaboration have revolutionized the way we live, work and communicate – and the pace is only accelerating. Robots and cognitive technologies are making steady advances, particularly in jobs and tasks that follow set, standardized rules and logic. This reinforces a critical challenge for business and HR leaders—namely, the need to design, source, and manage the future of work.

In this Town Hall, we will discuss the role HR can play in leading the digital transformation that is shaping the future of work in organizations worldwide. We will explore the changes we see taking place in three areas:

  • Digital workforce: How can organizations drive new management practices, a culture of innovation and sharing, and a set of talent practices that facilitate a new network-based organization?
  • Digital workplace: How can organizations design a working environment that enables productivity; uses modern communication tools (such as Slack, Workplace by Facebook, Microsoft Teams, and many others); and promotes engagement, wellness, and a sense of purpose?
  • Digital HR: How can organizations change the HR function itself to operate in a digital way, use digital tools and apps to deliver solutions, and continuously experiment and innovate?
 

 

The Stanford Virtual Heart – Lucile Packard Children’s Hospital Stanford

Published on Mar 28, 2017
Pediatric cardiologists are using immersive virtual reality (VR) technology to explain complex congenital heart defects, which are some of the most difficult medical conditions to teach and understand.

 

 

 

 

From DSC:
Surgeons could also use this type of application to better explain — and help visualize — problems for their patients.

 

 

 

 

The 82 Hottest EdTech Tools of 2017 According to Education Experts — from tutora.co.uk by Giorgio Cassella

Excerpt:

If you work in education, you’ll know there’s a HUGE array of applications, services, products and tools created to serve a multitude of functions in education.

Tools for teaching and learning, parent-teacher communication apps, lesson planning software, home-tutoring websites, revision blogs, SEN education information, professional development qualifications and more.

There are so many companies creating new products for education, though, that it can be difficult to keep up – especially with the massive volumes of planning and marking teachers have to do, never mind finding the time to actually teach!

So how do you know which ones are the best?

Well, as a team of people passionate about education and learning, we decided to do a bit of research to help you out.

We’ve asked some of the best and brightest in education for their opinions on the hottest EdTech of 2017. These guys are the real deal – experts in education, teaching and new tech from all over the world, from England to India to New York and San Francisco.

They’ve given us a list of 82 amazing, tried and tested tools…


From DSC:
The ones I suggested that Giorgio included in his excellent article were:

  • AdmitHub – Free, Expert College Admissions Advice
  • Labster – Empowering the Next Generation of Scientists to Change the World
  • Unimersiv – Virtual Reality Educational Experiences
  • Lifeliqe – Interactive 3D Models to Augment Classroom Learning

 


 

 

 

 

[On 4/3/17] the World’s First Live Hologram Phone Call was made between Seoul and New Jersey on a 5G Network — from patentlyapple.com

Excerpt:

[On 4/3/17] a little history was made. Verizon and Korean Telecom (KT) unveiled the world’s first live hologram international call service via the companies’ trial 5G networks established in Seoul and in New Jersey, respectively. Our cover graphic shows Verizon CEO Lowell McAdam (left) and KT CEO Hwang Chang-gyu demonstrating a hologram video call on a tablet PC at the KT headquarters in central Seoul on Monday.

In the demonstration, a KT employee held a meeting with a Verizon employee in New Jersey who appeared as a hologram image on a monitor in the KT headquarters building.

 

With today’s revelations from South Korea, it’s easy to imagine that we’ll see Apple’s FaceTime offer a holographic experience in the not-too-distant future, with added AR experiences, as Apple’s CEO has conveyed.

 

 

 

 

The Hidden Costs of Active Learning — by Thomas Mennella
Flipped and active learning truly are a better way for students to learn, but they also may be a fast track to instructor burnout.

Excerpt:

The time has come for us to have a discussion about the hidden cost of active learning in higher education. Soon, gone will be the days of instructors arriving to a lecture hall, delivering a 75-minute speech and leaving. Gone will be the days of midterms and finals being the sole forms of assessing student learning. For me, these days have already passed, and good riddance. These are largely ineffective teaching and learning strategies. Today’s college classroom is becoming dynamic, active and student-centered. Additionally, the learning never stops because the dialogue between student and instructor persists endlessly over the internet. Trust me when I say that this can be exhausting. With constant ‘touch-points,’ ‘personalized learning opportunities’ and the like, the notion of a college instructor having 12 contact hours per week that even remotely total 12 hours is beyond unreasonable.

We need to reevaluate how we measure, assign and compensate faculty teaching loads within an active learning framework. We need to recognize that instructors teaching in these innovative ways are doing more, and spending more hours, than their more traditional colleagues. And we must accept that a failure to recognize and remedy these ‘new normals’ risks burning out a generation of dedicated and passionate instructors. Flipped learning works and active learning works, but they’re very challenging ways to teach. I still say I will never teach another way again … I’m just not sure for how much longer that can be.

 

From DSC:
The above article prompted me to revisit the question of how we might move toward using more team-based approaches. Thomas Mennella seems to be doing an incredible job — but grading 344 assignments each week, or 3,784 assignments this semester, is most definitely a recipe for burnout.

Then, as I pondered this situation, an article came to mind that discusses Thomas Frey’s prediction that the largest internet-based company of 2030 will be focused on education.

I wondered…who will be the Amazon.com of the future of education? 

Such an organization will likely utilize a team-based approach to create and deliver excellent learning experiences — and will also likely leverage the power of artificial intelligence/machine learning/deep learning as a piece of their strategy.

 

 

 

 

 

 

The Best Amazon Alexa Skills — from in.pcmag.com by Eric Griffith

Example skills:

 

WebMD

 

 

5 Alexa skills to try this week — from venturebeat.com by Khari Johnson

Excerpt:

Below are five noteworthy Amazon Alexa skills worth trying, chosen from New, Most Enabled Skills, Food and Drink, and Customer Favorites categories in the Alexa Skills Marketplace.

 

From DSC:
I’d like to see how the Verse of the Day skill performs.

 

 

 


Also see:


 

 


From DSC:
This topic reminds me of a slide from
my NGLS 2017 Conference presentation:

 

 


 

 

Samsung’s personal assistant Bixby will take on Amazon Alexa, Apple Siri — from theaustralian.com.au by Chris Griffith

Excerpt:

Samsung has published details of its Bixby personal assistant, which will debut on its Galaxy S8 smartphone in New York next week.

Bixby will go head-to-head with Google Assistant, Microsoft Cortana, Amazon Echo and Apple Siri, in a battle to lure you into their artificial intelligence world.

In future, the personal assistant that you like may not only influence which phone you buy, but also the home automation system that you adopt.

This is because these personal assistants cross over into home use, which is why Samsung would bother with one of its own.

Given that the S8 will run Android Nougat, which includes Google Assistant, users will have two personal assistants on their phone, unless somehow one is disabled.

 

 

There are a lot of red flags with Samsung’s AI assistant in the new Galaxy S8 — from businessinsider.com by Steve Kovach

Excerpt:

There’s Siri. And Alexa. And Google Assistant. And Cortana. Now add another one of those digital assistants to the mix: Bixby, the new helper that lives inside Samsung’s latest phone, the Galaxy S8. But out of all the assistants that have launched so far, Bixby is the most curious and the most limited.

Samsung’s goal with Bixby was to create an assistant that can mimic all the functions you’re used to performing by tapping on your screen through voice commands. The theory is that phones are too hard to manage, so simply letting users tell their phone what they want to happen will make things a lot easier.

 

 

Samsung Galaxy S8: Hands on with the world’s most ambitious phone — from telegraph.co.uk by James Titcomb

Excerpt:

The S8 will also feature Bixby, Samsung’s new intelligent assistant. The company says Bixby is a bigger deal than Siri or Google Assistant – as well as simply asking for the weather, it will be deeply integrated with the phone’s everyday functions such as taking photos and sending them to people. Samsung has put a dedicated Bixby button on the S8 on the left hand side, but I wasn’t able to try it out because it won’t launch in the UK until later this year.

 

 

Samsung Galaxy S8 launch: Samsung reveals its long-awaited iPhone killer — from telegraph.co.uk by James Titcomb

 

 

 


Also see:


 

Recent years have brought some rapid development in the area of artificially intelligent personal assistants. Future iterations of the technology could fully revamp the way we interact with our devices.

 

 

 
 

21 bot experts make their predictions for 2017 — from venturebeat.com by Adelyn Zhou

Excerpt:

2016 was a huge year for bots, with major platforms like Facebook launching bots for Messenger, and Amazon and Google heavily pushing their digital assistants. Looking forward to 2017, we asked 21 bot experts, entrepreneurs, and executives to share their predictions for how bots will continue to evolve in the coming year.

From Jordi Torras, founder and CEO, Inbenta:
“Chatbots will get increasingly smarter, thanks to the adoption of sophisticated AI algorithms and machine learning. But also they will specialize more in specific tasks, like online purchases, customer support, or online advice. First attempts of chatbot interoperability will start to appear, with generalist chatbots, like Siri or Alexa, connecting to specialized enterprise chatbots to accomplish specific tasks. Functions traditionally performed by search engines will be increasingly performed by chatbots.”
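
To make the interoperability idea above concrete, here is a toy sketch of a generalist bot handing requests off to specialized bots based on a detected intent. Every bot, intent, and handler in it is a hypothetical placeholder, not a real Siri or Alexa integration.

```python
# A toy sketch of chatbot interoperability: a generalist assistant routes a request
# to a specialized enterprise bot based on a detected intent. All bots, intents,
# and handlers here are hypothetical placeholders.

def detect_intent(utterance: str) -> str:
    """Very crude keyword-based intent detection, standing in for a real NLU model."""
    text = utterance.lower()
    if "order" in text or "buy" in text:
        return "purchase"
    if "refund" in text or "broken" in text:
        return "support"
    return "small_talk"

SPECIALIST_BOTS = {
    "purchase": lambda u: f"[PurchaseBot] Adding '{u}' to your cart.",
    "support":  lambda u: f"[SupportBot] Opening a ticket for: {u}",
}

def generalist_bot(utterance: str) -> str:
    """Generalist front-end: answer simple things, delegate the rest."""
    intent = detect_intent(utterance)
    specialist = SPECIALIST_BOTS.get(intent)
    if specialist:
        return specialist(utterance)  # hand off to the specialized chatbot
    return "[GeneralistBot] Happy to chat! What can I help you with?"

if __name__ == "__main__":
    print(generalist_bot("I want to order a new laptop"))
    print(generalist_bot("My headset arrived broken"))
```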

 

 

 

 

 


From DSC:
For those of us working within higher education, chatbots need to be on our radars. Here are 2 slides from my NGLS 2017 presentation.

 

 

 

 

59 impressive things artificial intelligence can do today — from businessinsider.com by Ed Newton-Rex

Excerpt:

But what can AI do today? How close are we to that all-powerful machine intelligence? I wanted to know, but couldn’t find a list of AI’s achievements to date. So I decided to write one. What follows is an attempt at that list. It’s not comprehensive, but it contains links to some of the most impressive feats of machine intelligence around. Here’s what AI can do…

 

 

 


Recorded Saturday, February 25th, 2017 and published on Mar 16, 2017


Description:

Will progress in Artificial Intelligence provide humanity with a boost of unprecedented strength to realize a better future, or could it present a threat to the very basis of human civilization? The future of artificial intelligence is up for debate, and the Origins Project is bringing together a distinguished panel of experts, intellectuals and public figures to discuss who’s in control. Eric Horvitz, Jaan Tallinn, Kathleen Fisher and Subbarao Kambhampati join Origins Project director Lawrence Krauss.

 

 

 

 

Description:
Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, and Jaan Tallinn discuss with Max Tegmark (moderator) what likely outcomes might be if we succeed in building human-level AGI, and also what we would like to happen. The Beneficial AI 2017 Conference: In our sequel to the 2015 Puerto Rico AI conference, we brought together an amazing group of AI researchers from academia and industry, and thought leaders in economics, law, ethics, and philosophy for five days dedicated to beneficial AI. We hosted a two-day workshop for our grant recipients and followed that with a 2.5-day conference, in which people from various AI-related fields hashed out opportunities and challenges related to the future of AI and steps we can take to ensure that the technology is beneficial.

 

 


(Below emphasis via DSC)

IBM and Ricoh have partnered for a cognitive-enabled interactive whiteboard which uses IBM’s Watson intelligence and voice technologies to support voice commands, taking notes and actions and even translating into other languages.

 

The Intelligent Workplace Solution leverages IBM Watson and Ricoh’s interactive whiteboards to let users access features via voice. It makes sure that Watson doesn’t just listen, but is an active meeting participant, using real-time analytics to help guide discussions.

Features of the new cognitive-enabled whiteboard solution include:

  • Global voice control of meetings: Once a meeting begins, any employee, whether in-person or located remotely in another country, can easily control what’s on the screen, including advancing slides, all through simple voice commands using Watson’s Natural Language API.
  • Translation of the meeting into another language: The Intelligent Workplace Solution can translate speakers’ words into several other languages and display them on screen or in transcript.
  • Easy-to-join meetings: With the swipe of a badge the Intelligent Workplace Solution can log attendance and track key agenda items to ensure all key topics are discussed.
  • Ability to capture side discussions: During a meeting, team members can also hold side conversations that are displayed on the same whiteboard.
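
Taken together, these features imply a pipeline of speech-to-text, command detection, and translation sitting behind the whiteboard display. A rough, purely illustrative sketch of that flow is below; every function in it is a hypothetical stub, not an actual IBM Watson or Ricoh API.

```python
# Purely illustrative pipeline behind a cognitive whiteboard: speech comes in,
# commands and translated captions come out. Every function here is a hypothetical
# stub, not an actual IBM Watson or Ricoh API call.
from typing import Optional

def transcribe(audio_chunk: bytes) -> str:
    """Speech-to-text step (a cloud STT service in practice); stubbed here."""
    return "next slide please"

def detect_command(transcript: str) -> Optional[str]:
    """Natural-language step: map free-form speech to a whiteboard command."""
    if "next slide" in transcript:
        return "ADVANCE_SLIDE"
    return None

def translate(transcript: str, target_lang: str) -> str:
    """Translation step for multilingual captions (stubbed here)."""
    return f"[{target_lang}] {transcript}"

def handle_meeting_audio(audio_chunk: bytes, target_lang: str = "es") -> dict:
    """One pass of the meeting loop: transcribe, act on commands, caption."""
    transcript = transcribe(audio_chunk)
    return {
        "command": detect_command(transcript),  # e.g. advance the slide deck
        "caption": translate(transcript, target_lang),
    }

print(handle_meeting_audio(b"...fake audio bytes..."))
```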

 


From DSC:

Holy smokes!

If you combine the technologies that Ricoh and IBM are using with their new cognitive-enabled interactive whiteboard with what Bluescape is doing — providing 160 acres of digital workspace to foster collaboration (whether you are working remotely or with others in the same physical space) — you have one incredibly powerful platform! 

#NLP  |  #AI  |  #VoiceRecognition  |  #CognitiveComputing  |  #SmartClassrooms
#LearningSpaces  |  #Collaboration  |  #Meetings 

 

 


 

 

 


 

AI Market to Grow 47.5% Over Next Four Years — from campustechnology.com by Richard Chang

Excerpt:

The artificial intelligence (AI) market in the United States education sector is expected to grow at a compound annual growth rate of 47.5 percent during the period 2017-2021, according to a new report by market research firm Research and Markets.
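
For a rough sense of what that rate implies, assuming a 2017 baseline and four full years of compounding through 2021:

```python
# Back-of-the-envelope check of a 47.5% compound annual growth rate over 2017-2021,
# assuming a 2017 baseline normalized to 1.0 and four full years of compounding.
rate = 0.475
years = 4
multiple = (1 + rate) ** years
print(f"{multiple:.2f}x the 2017 baseline")  # roughly 4.73x
```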

 

 

Amazon deepens university ties in artificial intelligence race — by Jeffrey Dastin

Excerpt:

Amazon.com Inc has launched a new program to help students build capabilities into its voice-controlled assistant Alexa, the company told Reuters, the latest move by a technology firm to nurture ideas and talent in artificial intelligence research.

Amazon, Alphabet Inc’s Google and others are locked in a race to develop and monetize artificial intelligence. Unlike some rivals, Amazon has made it easy for third-party developers to create skills for Alexa so it can get better faster – a tactic it now is extending to the classroom.
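
For anyone curious what “creating a skill” involves, a custom Alexa skill is essentially a small web service (commonly an AWS Lambda function) that receives JSON requests from Alexa and returns the speech it should say. A minimal sketch follows, with the skill name and intent entirely hypothetical:

```python
# Minimal sketch of a custom Alexa skill handler as an AWS Lambda function.
# The skill ("Campus Facts") and its intent are hypothetical; the request/response
# shapes follow the Alexa Skills Kit JSON format.

def build_response(speech_text, end_session=True):
    """Wrap plain text in the response envelope Alexa expects."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    """Entry point AWS Lambda invokes for each Alexa request."""
    request_type = event["request"]["type"]

    if request_type == "LaunchRequest":
        return build_response("Welcome to Campus Facts. Ask me about enrollment.", False)

    if request_type == "IntentRequest":
        intent = event["request"]["intent"]["name"]
        if intent == "EnrollmentIntent":  # defined in the skill's interaction model
            return build_response("About four thousand students are enrolled this year.")
        return build_response("Sorry, I don't know that one yet.")

    return build_response("Goodbye.")
```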

 

 

The WebMD skill for Amazon’s Alexa can answer all your medical questions — from digitaltrends.com by Kyle Wiggers
WebMD is bringing its wealth of medical knowledge to a new form factor: Amazon’s Alexa voice assistant.

Excerpt:

Alexa, Amazon’s brilliant voice-activated smart assistant, is a capable little companion. It can order a pizza, summon a car, dictate a text message, and flick on your downstairs living room’s smart bulb. But what it couldn’t do until today was tell you whether that throbbing lump on your forearm was something that required medical attention. Fortunately, that changed on Tuesday with the introduction of a WebMD skill that puts the service’s medical knowledge at your fingertips.

 

 


Addendum:

  • How artificial intelligence is taking Asia by storm — from techwireasia.com by Samantha Cheh
    Excerpt:
    Lately it seems as if everyone is jumping onto the artificial intelligence bandwagon. Everyone, from ride-sharing service Uber to Amazon’s logistics branch, is banking on AI being the next frontier in technological innovation and is investing heavily in the industry.

    That’s likely truest in Asia, where the manufacturing engine which drove China’s growth is now turning its focus to plumbing the AI mine for gold.

    Despite Asia’s relatively low overall investment in AI, the industry is set to grow. Fifty percent of respondents in KPMG’s AI report said their companies had plans to invest in AI or robotic technology.

    Investment in AI is set to drive venture capital investment in China in 2017. Tak Lo, of Hong Kong’s Zeroth, notes there are more mentions of AI in Chinese research papers than there are in the US.

    China, Korea and Japan collectively account for nearly half the world’s shipments of articulated robots.

     

 

Artificial Intelligence – Research Areas

 

 

 

 

 

 

 

 

 

 

 

Adobe unveils new Microsoft HoloLens and Amazon Alexa integrations — from geekwire.com by Nat Levy

 

 

 

 

Introducing the AR Landscape — from medium.com by Super Ventures
Mapping out the augmented reality ecosystem

 

 

 

 

Alibaba leads $18M investment in car navigation augmented reality outfit WayRay — from siliconangle.com by Kyt Dotson

Excerpt:

WayRay boasts the 2015 launch of Navion, what it calls the “first ever holographic navigator” for cars that uses AR technology to project a Global Positioning System, or GPS, info overlay onto the car’s windshield.

Just like a video game, users of the GPS need only follow green arrows projected as if onto the road in front of the car providing visual directions. More importantly, because the system displays on the windscreen, it does not require a cumbersome headset or eyewear worn by the driver. It integrates directly into the dashboard of the car.

The system also recognizes simple voice and gesture commands from the driver — eschewing turning of knobs or pressing buttons. The objective of the system is to allow the driver to spend more time paying attention to the road, with hands on the wheel. Many modern-day onboard GPS systems also recognize voice commands but require the driver to glance over at a screen.

 

 

Viro Media Is A Tool For Creating Simple Mobile VR Apps For Businesses — from uploadvr.com by Charles Singletary

Excerpt:

Viro Media is supplying a platform of its own, and its hope is to offer the simplest experience: companies can code once and have their content available on multiple mobile platforms. We chatted with Viro Media CEO Danny Moon about the tool and what creators can expect to accomplish with it.

 

 

Listen to these podcasts to dive into virtual reality — from haptic.al by Deniz Ergürel
We curated some great episodes with our friends at RadioPublic

Excerpt:

Virtual reality can transport us to new places, where we can experience new worlds and people, like no other. It is a whole new medium poised to change the future of gaming, education, health care and enterprise. Today we are starting a new series to help you discover what this new technology promises. With the help of our friends at RadioPublic, we are curating a quick library of podcasts related to virtual reality technology.

 

Psychologists using virtual reality to help treat PTSD in veterans — from kxan.com by Amanda Brandeis

Excerpt:

AUSTIN (KXAN) — Virtual reality is no longer reserved for entertainment and gamers; it’s helping solve real-world problems. Some of the latest advancements are being demonstrated at South by Southwest.

Dr. Skip Rizzo directs the Medical Virtual Reality Lab at the University of Southern California’s Institute for Creative Technologies. He’s helping veterans who suffer from post-traumatic stress disorder (PTSD). He’s teamed up with Dell to develop and spread the technology to more people.

 

 

 

NVIDIA Jetson Enables Artec 3D, Live Planet to Create VR Content in Real Time — from blogs.nvidia.com
While VR revolutionizes fields across everyday life — entertainment, medicine, architecture, education and product design — creating VR content remains among its biggest challenges.

Excerpt:

At the NVIDIA Jetson TX2 launch [on March 7, 2017] in San Francisco, [NVIDIA] showed how the platform not only accelerates AI computing, graphics and computer vision, but also powers the workflows used to create VR content. Artec 3D debuted at the event the first handheld scanner offering real-time 3D capture, fusion, modeling and visualization on its own display or streamed to phones and tablets.

 

 

Project Empathy
A collection of virtual reality experiences that help us see the world through the eyes of another

Excerpt:

Benefit Studio’s virtual reality series, Project Empathy is a collection of thoughtful, evocative and surprising experiences by some of the finest creators in entertainment, technology and journalism.

Each film is designed to create empathy through a first-person experience–from being a child inside the U.S. prison system to being a widow cast away from society in India.  Individually, each of the films in this series presents its filmmaker’s unique vision, portraying an intimate experience through the eyes of someone whose story has been lost or overlooked and yet is integral to the larger story of our global society. Collectively, these creatively distinct films weave together a colorful tapestry of what it means to be human today.

 

 

 

 

Work in a high-risk industry? Virtual reality may soon become part of routine training — from ibtimes.co.uk by Owen Hughes
Immersive training videos could be used to train workers in construction, mining and nuclear power.

 

 

 

At Syracuse University, more students are getting ahold of virtual reality — from dailyorange.com by Haley Kim

 

 

 

As Instructors Experiment With VR, a Shift From ‘Looking’ to ‘Interacting’ — from edsurge.com by Marguerite McNeal

Excerpt:

Most introductory geology professors teach students about earthquakes by assigning readings and showing diagrams of tectonic plates and fault lines to the class. But Paul Low is not most instructors.

“You guys can go wherever you like,” he tells a group of learners. “I’m going to go over to the epicenter and fly through and just kind of get a feel.”

Low is leading a virtual tour of the Earth’s bowels, directly beneath New Zealand’s south island, where a 7.8 magnitude earthquake struck last November. Outfitted with headsets and hand controllers, the students are “flying” around the seismic hotbed and navigating through layers of the Earth’s surface.

Low, who taught undergraduate geology and environmental sciences and is now a research associate at Washington and Lee University, is among a small group of profs-turned-technologists who are experimenting with virtual reality’s applications in higher education.

 

 

 

These University Courses Are Teaching Students the Skills to Work in VR — from uploadvr.com

Excerpt:

“As virtual reality moves more towards the mainstream through the development of new, more affordable consumer technologies, a way needs to be found for students to translate what they learn in academic situations into careers within the industry,” says Frankie Cavanagh, a lecturer at Northumbria University. He founded a company called Somniator last year with the aim not only of developing VR games, but to provide a bridge between higher education and the technology sector. Over 70 students from Newcastle University, Northumbria University and Gateshead College in the UK have been placed so far through the program, working on real games as part of their degrees and getting paid for additional work commissioned.

 

Working with VR already translates into an extraordinarily diverse range of possible career paths, and those options are only going to become even broader as the industry matures in the next few years.

 

 

Scope AR Brings Live, Interactive AR Video Support to Caterpillar Customers — from augmented.reality.news by Tommy Palladino

Excerpt:

Customer service just got a lot more interesting. Construction equipment manufacturer Caterpillar just announced official availability of what it’s calling the CAT LIVESHARE solution for customer support, which builds augmented reality capabilities into the platform. It has partnered with Scope AR, a company that develops technical support and training documentation tools using augmented reality. The CAT LIVESHARE support system uses Scope AR’s Remote AR software as the backbone.

 

 

 

New virtual reality tool helps architects create dementia-friendly environments — from dezeen.com by Jessica Mairs

 

Visual showing appearance of a room without and with the Virtual Reality Empathy Platform headset

 

 

 

 

 

 

The Enterprise Gets Smart
Companies are starting to leverage artificial intelligence and machine learning technologies to bolster customer experience, improve security and optimize operations.

Excerpt:

Assembling the right talent is another critical component of an AI initiative. While existing enterprise software platforms that add AI capabilities will make the technology accessible to mainstream business users, there will be a need to ramp up expertise in areas like data science, analytics and even nontraditional IT competencies, says Guarini.

“As we start to see the land grab for talent, there are some real gaps in emerging roles, and those that haven’t been as critical in the past,” Guarini says, citing the need for people with expertise in disciplines like philosophy and linguistics, for example. “CIOs need to get in front of what they need in terms of capabilities and, in some cases, identify potential partners.”

 

 

 

Asilomar AI Principles

These principles were developed in conjunction with the 2017 Asilomar conference (videos here), through the process described here.

 

Artificial intelligence has already provided beneficial tools that are used every day by people around the world. Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead.

Research Issues

 

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Ethics and Values

 

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer-term Issues

 

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

 

 

 

Excerpts:
Creating human-level AI: Will it happen, and if so, when and how? What key remaining obstacles can be identified? How can we make future AI systems more robust than today’s, so that they do what we want without crashing, malfunctioning or getting hacked?

  • Talks:
    • Demis Hassabis (DeepMind)
    • Ray Kurzweil (Google) (video)
    • Yann LeCun (Facebook/NYU) (pdf) (video)
  • Panel with Anca Dragan (Berkeley), Demis Hassabis (DeepMind), Guru Banavar (IBM), Oren Etzioni (Allen Institute), Tom Gruber (Apple), Jürgen Schmidhuber (Swiss AI Lab), Yann LeCun (Facebook/NYU), Yoshua Bengio (Montreal) (video)
  • Superintelligence: Science or fiction? If human level general AI is developed, then what are likely outcomes? What can we do now to maximize the probability of a positive outcome? (video)
    • Talks:
      • Shane Legg (DeepMind)
      • Nick Bostrom (Oxford) (pdf) (video)
      • Jaan Tallinn (CSER/FLI) (pdf) (video)
    • Panel with Bart Selman (Cornell), David Chalmers (NYU), Elon Musk (Tesla, SpaceX), Jaan Tallinn (CSER/FLI), Nick Bostrom (FHI), Ray Kurzweil (Google), Stuart Russell (Berkeley), Sam Harris, Demis Hassabis (DeepMind): If we succeed in building human-level AGI, then what are likely outcomes? What would we like to happen?
    • Panel with Dario Amodei (OpenAI), Nate Soares (MIRI), Shane Legg (DeepMind), Richard Mallah (FLI), Stefano Ermon (Stanford), Viktoriya Krakovna (DeepMind/FLI): Technical research agenda: What can we do now to maximize the chances of a good outcome? (video)
  • Law, policy & ethics: How can we update legal systems, international treaties and algorithms to be more fair, ethical and efficient and to keep pace with AI?
    • Talks:
      • Matt Scherer (pdf) (video)
      • Heather Roff-Perkins (Oxford)
    • Panel with Martin Rees (CSER/Cambridge), Heather Roff-Perkins, Jason Matheny (IARPA), Steve Goose (HRW), Irakli Beridze (UNICRI), Rao Kambhampati (AAAI, ASU), Anthony Romero (ACLU): Policy & Governance (video)
    • Panel with Kate Crawford (Microsoft/MIT), Matt Scherer, Ryan Calo (U. Washington), Kent Walker (Google), Sam Altman (OpenAI): AI & Law (video)
    • Panel with Kay Firth-Butterfield (IEEE, Austin-AI), Wendell Wallach (Yale), Francesca Rossi (IBM/Padova), Huw Price (Cambridge, CFI), Margaret Boden (Sussex): AI & Ethics (video)

 

 

 

Expect voice, VR and AR to dominate UX design — from forbes.com by the Forbes Technology Council

Excerpt:

User interfaces have come a long way since typewriters were designed to prevent people from typing too quickly and jamming the device. Current technology has users viewing monitors and wiggling a mouse, or tapping on small touchscreens to activate commands or to interact with a virtual keyboard. But is this the best method of interaction?

Designers are asking themselves whether it is better to talk to a mobile device to get information, or to have a wearable vibrate and then feed information into an augmented reality display. Is having an artificial intelligence modify an interface on the fly, depending on how a user interacts, the best course of action for applications or websites? And how human should the AIs’ interaction be with users?

Eleven experts on the Forbes Technology Council offer their predictions on how UX design will be changing in the next few years. Here’s what they have to say…

 

 

 

 