Why Natural Language Processing is the Future of Business Intelligence — from dzone.com by Gur Tirosh
Until now, we have been interacting with computers in a way that they understand, rather than us. We have learned their language. But now, they’re learning ours.

Excerpt:

Every time you ask Siri for directions, a complex chain of cutting-edge code is activated. It allows “her” to understand your question, find the information you’re looking for, and respond to you in a language that you understand. This has only become possible in the last few years. Until now, we have been interacting with computers in a way that they understand, rather than us. We have learned their language.

But now, they’re learning ours.

The technology underpinning this revolution in human-computer relations is Natural Language Processing (NLP). And it’s already transforming BI in ways that go far beyond simply making the interface easier. Before long, business-transforming, life-changing information will be discovered merely by talking with a chatbot.

This future is not far away. In some ways, it’s already here.

What Is Natural Language Processing?
NLP, otherwise known as computational linguistics, is the combination of Machine Learning, AI, and linguistics that allows us to talk to machines as if they were human.
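To make that definition concrete, here is a tiny, illustrative sketch of two core NLP tasks (tokenization with part-of-speech tagging, and named-entity recognition), assuming the open-source spaCy library and its small English model:

```python
# A minimal, illustrative NLP sketch using spaCy (an open-source NLP library).
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Siri, find the fastest route to the Seattle office.")

# Tokenization + part-of-speech tagging: the machine's first pass at structure
for token in doc:
    print(token.text, token.pos_)

# Named-entity recognition: real-world names such as "Seattle" get labeled
for ent in doc.ents:
    print(ent.text, ent.label_)
```

Layers like these, stacked into a pipeline, are what allow a system to map a free-form question onto a structured query.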

But NLP aims to eventually render GUIs — even UIs — obsolete, so that interacting with a machine is as easy as talking to a human.

(Below emphasis via DSC)

IBM and Ricoh have partnered on a cognitive-enabled interactive whiteboard that uses IBM’s Watson intelligence and voice technologies to support voice commands, take notes and actions, and even translate into other languages.

The Intelligent Workplace Solution leverages IBM Watson and Ricoh’s interactive whiteboards to let people access features by voice. It ensures that Watson doesn’t just listen, but is an active meeting participant, using real-time analytics to help guide discussions.

Features of the new cognitive-enabled whiteboard solution include:

  • Global voice control of meetings: Once a meeting begins, any employee, whether in-person or located remotely in another country, can easily control what’s on the screen, including advancing slides, all through simple voice commands using Watson’s Natural Language API.
  • Translation of the meeting into another language: The Intelligent Workplace Solution can translate speakers’ words into several other languages and display them on screen or in a transcript.
  • Easy-to-join meetings: With the swipe of a badge, the Intelligent Workplace Solution can log attendance and track key agenda items to ensure all key topics are discussed.
  • Ability to capture side discussions: During a meeting, team members can also hold side conversations that are displayed on the same whiteboard.

From DSC:

Holy smokes!

If you combine the technologies that Ricoh and IBM are using in their new cognitive-enabled interactive whiteboard with what Bluescape is doing — providing 160 acres of digital workspace to foster collaboration, whether you are working remotely or with others in the same physical space — you have one incredibly powerful platform!

#NLP  |  #AI  |  #VoiceRecognition  |  #CognitiveComputing
#SmartClassrooms  |  #LearningSpaces  |  #Collaboration  |  #Meetings

Adobe unveils new Microsoft HoloLens and Amazon Alexa integrations — from geekwire.com by Nat Levy

Introducing the AR Landscape — from medium.com by Super Ventures
Mapping out the augmented reality ecosystem

Alibaba leads $18M investment in car navigation augmented reality outfit WayRay — from siliconangle.com by Kyt Dotson

Excerpt:

WayRay boasts the 2015 launch of Navion, what it calls the “first ever holographic navigator” for cars, which uses AR technology to project a Global Positioning System (GPS) info overlay onto the car’s windshield.

Just like in a video game, users need only follow green arrows projected as if onto the road in front of the car, providing visual directions. More importantly, because the system displays on the windscreen, it does not require the driver to wear a cumbersome headset or eyewear. It integrates directly into the dashboard of the car.

The system also recognizes simple voice and gesture commands from the driver — eschewing turning of knobs or pressing buttons. The objective of the system is to allow the driver to spend more time paying attention to the road, with hands on the wheel. Many modern-day onboard GPS systems also recognize voice commands but require the driver to glance over at a screen.

Viro Media Is A Tool For Creating Simple Mobile VR Apps For Businesses — from uploadvr.com by Charles Singletary

Excerpt:

Viro Media is supplying a platform of its own, and its hope is to offer the simplest experience, one where companies can code once and have their content available on multiple mobile platforms. We chatted with Viro Media CEO Danny Moon about the tool and what creators can expect to accomplish with it.

Listen to these podcasts to dive into virtual reality — from haptic.al by Deniz Ergürel
We curated some great episodes with our friends at RadioPublic

Excerpt:

Virtual reality can transport us to new places, where we can experience new worlds and people, like no other. It is a whole new medium poised to change the future of gaming, education, health care and enterprise. Today we are starting a new series to help you discover what this new technology promises. With the help of our friends at RadioPublic, we are curating a quick library of podcasts related to virtual reality technology.

Psychologists using virtual reality to help treat PTSD in veterans — from kxan.com by Amanda Brandeis

Excerpt:

AUSTIN (KXAN) — Virtual reality is no longer reserved for entertainment and gamers; it’s helping solve real-world problems. Some of the latest advancements are being demonstrated at South by Southwest.

Dr. Skip Rizzo directs the Medical Virtual Reality Lab at the University of Southern California’s Institute for Creative Technologies. He’s helping veterans who suffer from post-traumatic stress disorder (PTSD). He’s teamed up with Dell to develop and spread the technology to more people.

NVIDIA Jetson Enables Artec 3D, Live Planet to Create VR Content in Real Time — from blogs.nvidia.com
While VR revolutionizes fields across everyday life — entertainment, medicine, architecture, education and product design — creating VR content remains among its biggest challenges.

Excerpt:

At the NVIDIA Jetson TX2 launch [on March 7, 2017] in San Francisco, [NVIDIA] showed how the platform not only accelerates AI computing, graphics and computer vision, but also powers the workflows used to create VR content. At the event, Artec 3D debuted the first handheld scanner offering real-time 3D capture, fusion, modeling and visualization on its own display or streamed to phones and tablets.

Project Empathy
A collection of virtual reality experiences that help us see the world through the eyes of another

Excerpt:

Benefit Studio’s virtual reality series, Project Empathy, is a collection of thoughtful, evocative and surprising experiences by some of the finest creators in entertainment, technology and journalism.

Each film is designed to create empathy through a first-person experience, from being a child inside the U.S. prison system to being a widow cast away from society in India. Individually, each of the films in this series presents its filmmaker’s unique vision, portraying an intimate experience through the eyes of someone whose story has been lost or overlooked and yet is integral to the larger story of our global society. Collectively, these creatively distinct films weave together a colorful tapestry of what it means to be human today.

Work in a high-risk industry? Virtual reality may soon become part of routine training — from ibtimes.co.uk by Owen Hughes
Immersive training videos could be used to train workers in construction, mining and nuclear power.

At Syracuse University, more students are getting ahold of virtual reality — from dailyorange.com by Haley Kim

As Instructors Experiment With VR, a Shift From ‘Looking’ to ‘Interacting’ — from edsurge.com by Marguerite McNeal

Excerpt:

Most introductory geology professors teach students about earthquakes by assigning readings and showing diagrams of tectonic plates and fault lines to the class. But Paul Low is not most instructors.

“You guys can go wherever you like,” he tells a group of learners. “I’m going to go over to the epicenter and fly through and just kind of get a feel.”

Low is leading a virtual tour of the Earth’s bowels, directly beneath New Zealand’s South Island, where a 7.8 magnitude earthquake struck last November. Outfitted with headsets and hand controllers, the students are “flying” around the seismic hotbed and navigating through layers of the Earth’s surface.

Low, who taught undergraduate geology and environmental sciences and is now a research associate at Washington and Lee University, is among a small group of profs-turned-technologists who are experimenting with virtual reality’s applications in higher education.

These University Courses Are Teaching Students the Skills to Work in VR — from uploadvr.com

Excerpt:

“As virtual reality moves more towards the mainstream through the development of new, more affordable consumer technologies, a way needs to be found for students to translate what they learn in academic situations into careers within the industry,” says Frankie Cavanagh, a lecturer at Northumbria University. He founded a company called Somniator last year with the aim not only of developing VR games but also of providing a bridge between higher education and the technology sector. Over 70 students from Newcastle University, Northumbria University and Gateshead College in the UK have been placed so far through the program, working on real games as part of their degrees and getting paid for additional work commissioned.

Working with VR already translates into an extraordinarily diverse range of possible career paths, and those options are only going to become even broader as the industry matures in the next few years.

Scope AR Brings Live, Interactive AR Video Support to Caterpillar Customers — from augmented.reality.news by Tommy Palladino

Excerpt:

Customer service just got a lot more interesting. Construction equipment manufacturer Caterpillar just announced official availability of what it’s calling the CAT LIVESHARE solution for customer support, which builds augmented reality capabilities into the platform. It has partnered with Scope AR, a company that develops technical support and training documentation tools using augmented reality. The CAT LIVESHARE support system uses Scope AR’s Remote AR software as the backbone.

New virtual reality tool helps architects create dementia-friendly environments — from dezeen.com by Jessica Mairs

Visual showing appearance of a room without and with the Virtual Reality Empathy Platform headset

From DSC:
Can you imagine this as a virtual reality- or mixed reality-based app!?! Very cool.

This resource is incredible on multiple levels:

  • For their interface/interaction design
  • For their insights and ideas
  • For their creativity
  • For their graphics
  • …and more!

It’s Here! Get the 2017 NMC Horizon Report

Earlier this week, the New Media Consortium (NMC) and the EDUCAUSE Learning Initiative (ELI) jointly released the NMC Horizon Report > 2017 Higher Education Edition at the 2017 ELI Annual Meeting. This 14th edition describes annual findings from the NMC Horizon Project, an ongoing research project designed to identify and describe emerging technologies likely to have an impact on learning, teaching, and creative inquiry in higher education. Six key trends, six significant challenges, and six important developments in educational technology are placed directly in the context of their likely impact on the core missions of universities and colleges.

The topics are summarized in the infographic below:

“The world’s first smart #AugmentedReality for the Connected Home has arrived.” — from thunderclap.it

From DSC:
Note this new type of Human Computer Interaction (HCI). I think that we’ll likely be seeing much more of this sort of thing.

Excerpt (emphasis DSC):

How is Hayo different?
AR that connects the magical and the functional:

Unlike most AR integrations, Hayo removes the screens from smarthome use and transforms the objects and spaces around you into a set of virtual remote controls. Hayo empowers you to create experiences that have previously been limited by the technology, but now are only limited by your imagination.

Screenless IoT:
The best interface is no interface at all. Aside from the one-time setup, Hayo does not use any screens. Your real-life surfaces become the interface and you, the user, become the controls. Virtual remote controls can be placed wherever you want, for whatever you need, by simply using your Hayo device to take a 3D scan of your space.

Smarter AR experience:
Hayo anticipates your unique context, passive motion and gestures to create useful and more unique controls for the connected home. The Hayo system learns your behaviors and uses its AI to help meet your needs.

From DSC:
When I saw the article below, I couldn’t help but wonder…what are the teaching & learning-related ramifications when new “skills” are constantly being added to devices like Amazon’s Alexa?

What does it mean for:

  • Students / learners
  • Faculty members
  • Teachers
  • Trainers
  • Instructional Designers
  • Interaction Designers
  • User Experience Designers
  • Curriculum Developers
  • …and others?

Will the capabilities found in Alexa simply come bundled as a part of the “connected/smart TVs” of the future? Hmm….

NASA unveils a skill for Amazon’s Alexa that lets you ask questions about Mars — from geekwire.com by Kevin Lisota

Excerpt:

Amazon’s Alexa has gained many skills over the past year, such as being able to read tweets or deliver election results and fantasy football scores. Starting on Wednesday, you’ll be able to ask Alexa about Mars.

The new skill for the voice-controlled speaker comes courtesy of NASA’s Jet Propulsion Laboratory. It’s the first Alexa app from the space agency.

Tom Soderstrom, the chief technology officer at NASA’s Jet Propulsion Laboratory, was on hand at the AWS re:Invent conference in Las Vegas tonight to make the announcement.
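For those wondering what sits behind a “skill” like this: a custom Alexa skill is typically a small web service or AWS Lambda function that receives a JSON request and returns JSON containing the speech to say. Below is a minimal, illustrative sketch; the intent name, slot, and Mars facts are hypothetical, not NASA’s actual skill.

```python
# Minimal sketch of a custom Alexa skill handler (AWS Lambda, Python).
# Hypothetical example -- the intent name, slot, and facts are invented.

FACTS = {
    "weather": "Mars is cold: the average surface temperature is about minus 60 degrees Celsius.",
    "distance": "On average, Mars is about 225 million kilometers from Earth.",
}

def build_response(text):
    """Wrap plain text in the JSON envelope the Alexa service expects."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }

def lambda_handler(event, context):
    request = event["request"]
    if request["type"] == "LaunchRequest":
        return build_response("Welcome. Ask me something about Mars.")
    if request["type"] == "IntentRequest" and request["intent"]["name"] == "MarsFactIntent":
        topic = request["intent"]["slots"]["Topic"]["value"].lower()
        return build_response(FACTS.get(topic, "I do not know that one yet."))
    return build_response("Sorry, I did not catch that.")
```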

Also see:

What Is Alexa? What Is the Amazon Echo, and Should You Get One? — from thewirecutter.com by Grant Clauser

Amazon launches new artificial intelligence services for developers: Image recognition, text-to-speech, Alexa NLP — from geekwire.com by Taylor Soper

Excerpt (emphasis DSC):

Amazon today announced three new artificial intelligence-related toolkits for developers building apps on Amazon Web Services.

At the company’s AWS re:Invent conference in Las Vegas, Amazon showed how developers can use three new services — Amazon Lex, Amazon Polly, Amazon Rekognition — to build artificial intelligence features into apps for platforms like Slack, Facebook Messenger, Zendesk, and others.

The idea is to let developers utilize the machine learning algorithms and technology that Amazon has already created for its own processes and services like Alexa. Instead of developing their own AI software, AWS customers can simply use an API call or the AWS Management Console to incorporate AI features into their own apps.
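As a rough sketch of what “simply use an API call” can look like, here is how the three services might be invoked with boto3, the AWS SDK for Python. It assumes configured AWS credentials; the region, file names, voice, and bot name are placeholders, and the Lex bot would need to be defined first.

```python
# Illustrative calls to Amazon Polly, Rekognition, and Lex via boto3.
# Assumes AWS credentials are configured; names and files are placeholders.
import boto3

# Amazon Polly: turn text into lifelike speech
polly = boto3.client("polly", region_name="us-east-1")
speech = polly.synthesize_speech(
    Text="Hello from Amazon Polly.",
    OutputFormat="mp3",
    VoiceId="Joanna",  # one of Polly's built-in voices
)
with open("hello.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())

# Amazon Rekognition: label the contents of an image
rekognition = boto3.client("rekognition", region_name="us-east-1")
with open("photo.jpg", "rb") as image:
    result = rekognition.detect_labels(Image={"Bytes": image.read()}, MaxLabels=5)
for label in result["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))

# Amazon Lex: send a user utterance to a previously defined conversational bot
lex = boto3.client("lex-runtime", region_name="us-east-1")
reply = lex.post_text(
    botName="OrderFlowers",  # hypothetical bot name
    botAlias="$LATEST",
    userId="demo-user",
    inputText="I would like to order some roses.",
)
print(reply["message"])
```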

Amazon announces three new AI services, including a text-to-voice service, Amazon Polly — by D.B. Hebbard

AWS Announces Three New Amazon AI Services
Amazon Lex, the technology that powers Amazon Alexa, enables any developer to build rich, conversational user experiences for web, mobile, and connected device apps; preview starts today

Amazon Polly transforms text into lifelike speech, enabling apps to talk with 47 lifelike voices in 24 languages

Amazon Rekognition makes it easy to add image analysis to applications, using powerful deep learning-based image and face recognition

Capital One, Motorola Solutions, SmugMug, American Heart Association, NASA, HubSpot, Redfin, Ohio Health, DuoLingo, Royal National Institute of Blind People, LingApps, GoAnimate, and Coursera are among the many customers using these Amazon AI Services

Excerpt:

SEATTLE–(BUSINESS WIRE)–Nov. 30, 2016– Today at AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), announced three Artificial Intelligence (AI) services that make it easy for any developer to build apps that can understand natural language, turn text into lifelike speech, have conversations using voice or text, analyze images, and recognize faces, objects, and scenes. Amazon Lex, Amazon Polly, and Amazon Rekognition are based on the same proven, highly scalable Amazon technology built by the thousands of deep learning and machine learning experts across the company. Amazon AI services all provide high-quality, high-accuracy AI capabilities that are scalable and cost-effective. Amazon AI services are fully managed services so there are no deep learning algorithms to build, no machine learning models to train, and no up-front commitments or infrastructure investments required. This frees developers to focus on defining and building an entirely new generation of apps that can see, hear, speak, understand, and interact with the world around them.

To learn more about Amazon Lex, Amazon Polly, or Amazon Rekognition, visit:
https://aws.amazon.com/amazon-ai

From DSC:
Interactive video — a potentially very powerful medium to use, especially for blended and online-based courses or training-related materials! This interactive piece from Heineken is very well done: it even remembers how you answered and builds an evaluation of you from its 12-question “interview.”
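Under the hood, a piece like this is essentially a branching state machine: each question is a video scene, each answer is recorded, and the final “evaluation” is computed from the accumulated answers. Here is a minimal sketch of that logic, with invented questions and scoring:

```python
# Sketch of the branching logic behind an interactive "interview" video.
# The questions, choices, and scoring rubric are invented for illustration.

QUESTIONS = [
    {"id": "q1", "prompt": "A teammate disagrees with you. Do you...",
     "choices": {"a": "push back", "b": "hear them out"}},
    {"id": "q2", "prompt": "The deadline slips. Do you...",
     "choices": {"a": "work the weekend", "b": "renegotiate scope"}},
]

def run_interview(answer_for):
    """Play each scene, record the viewer's answer, and branch on it."""
    answers = {}
    for q in QUESTIONS:
        choice = answer_for(q)  # in the real piece, a click on a video overlay
        answers[q["id"]] = choice
        # Branching happens here: the next clip shown can depend on `choice`.
    return evaluate(answers)

def evaluate(answers):
    """Map the recorded answer set to a final verdict."""
    collaborative = sum(1 for c in answers.values() if c == "b")
    return "collaborator" if collaborative >= len(answers) / 2 else "go-getter"

# Example: a viewer who always picks the second option
print(run_interview(lambda q: "b"))  # -> "collaborator"
```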

But notice again, a TEAM of specialists is needed to create such a piece. Neither a faculty member, a trainer, nor an instructional designer can do something like this all on their own. Some of the positions I could imagine here are:

  • Script writer(s)
  • Editor(s)
  • Actors and actresses
  • Those skilled in stage lighting and sound / audio recording
  • Digital video editors
  • Programmers
  • Graphic designers
  • Web designers
  • Producers
  • Product marketers
  • …and perhaps others

This is the kind of work that I wish we saw more of in the world of online and blended courses!  Also, I appreciated their use of humor. Overall, a very engaging, fun, and informative piece!

[Screenshots from Heineken’s interactive video]

Specialists central to high-quality, engaging online programming [Christian]

Specialists central to high-quality, engaging online programming — from EvoLLLution.com (where the LLL stands for lifelong learning) by Daniel Christian

Excerpts:

Creating high-quality online courses is getting increasingly complex—requiring an ever-growing set of skills. Faculty members can’t do it all, nor can instructional designers, nor can anyone else.  As time goes by, new entrants and alternatives to traditional institutions of higher education will likely continue to appear on the higher education landscape—the ability to compete will be key.

For example, will there be a need for the following team members in your not-too-distant future?

  • Human Computer Interaction (HCI) Specialists: those with knowledge of how to leverage Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) in order to create fun and engaging learning experiences (while still meeting the learning objectives)
  • Data Scientists
  • Artificial Intelligence Integrators
  • Cognitive Computing Specialists
  • Intelligent Tutoring Developers
  • Learning Agent Developers
  • Algorithm Developers
  • Personalized Learning Specialists
  • Cloud-based Learner Profile Administrators
  • Transmedia Designers
  • Social Learning Experts

 

From DSC:
If the future of TV is apps > and if bots are the new apps > does that mean that the future of TV is bots…?

[Photo: Apple CEO Tim Cook introduces the new Apple TV at a special event in San Francisco, September 9, 2015. Stephen Lam/Getty Images]

Questions from DSC:

  • Which jobs/positions are being impacted by new forms of Human Computer Interaction (HCI)?
  • What new jobs/positions will be created by these new forms of HCI?
  • Will it be necessary for instructional technologists, instructional designers, teachers, professors, trainers, coaches, learning space designers, and others to pulse check this landscape?  Will that be enough? 
  • Or will such individuals need to dive much deeper than that in order to build the necessary skillsets, understandings, and knowledgebases to meet the new/changing expectations for their job positions?
  • How many will say, “No thanks, that’s not for me” — causing organizations to create new positions that do dive deeply in this area?
  • Will colleges and universities build and offer more courses involving HCI?
  • Will Career Services Departments get up to speed in order to help students carve out careers involving new forms of HCI?
  • How will languages and language translation be impacted by voice recognition software?
  • Will new devices be introduced to our classrooms in the future?
  • In the corporate space, how will training departments handle these new needs and opportunities?  How will learning & development groups be impacted? How will they respond in order to help the workforce get/be prepared to take advantage of these sorts of technologies? What does it mean for these staffs personally? Do they need to invest in learning more about these advancements?

As an example of what I’m trying to get at here, who all might be involved with an effort like Echo Dot?  What types of positions created it? Who all could benefit from it?  What other platforms could these technologies be integrated into?  Besides the home, where else might we find these types of devices?

Echo Dot is a hands-free, voice-controlled device that uses the same far-field voice recognition as Amazon Echo. Dot has a small built-in speaker—it can also connect to your speakers over Bluetooth or with the included audio cable. Dot connects to the Alexa Voice Service to play music, provide information, news, sports scores, weather, and more—instantly.

Echo Dot can hear you from across the room, even while music is playing. When you want to use Echo Dot, just say the wake word “Alexa” and Dot responds instantly. If you have more than one Echo or Echo Dot, you can set a different wake word for each—you can pick “Amazon”, “Alexa” or “Echo” as the wake word.

Or how might students learn about the myriad technologies involved with IBM’s Watson? What courses are out there today that address this type of thing? Are more courses in the works that will address it? In which areas (Computer Science, User Experience Design, Interaction Design, other)?

Lots of questions…but few answers at this point. Still, given the increasing pace of technological change, it’s important that we think about this type of thing and become more responsive, nimble, and adaptive in our organizations and in our careers.

How the ‘Internet of Things’ will affect the world — from businessinsider.com by John Greenough and Jonathan Camhi

Excerpt:

Here are some key points from the report:

  • In total, we project there will be 34 billion devices connected to the internet by 2020, up from 10 billion in 2015. IoT devices will account for 24 billion, while traditional computing devices (e.g. smartphones, tablets, smartwatches, etc.) will comprise 10 billion.
  • Nearly $6 trillion will be spent on IoT solutions over the next five years.
  • Businesses will be the top adopter of IoT solutions. They see three ways the IoT can improve their bottom line: 1) lowering operating costs; 2) increasing productivity; and 3) expanding to new markets or developing new product offerings.
  • Governments are focused on increasing productivity, decreasing costs, and improving their citizens’ quality of life. We believe they will be the second-largest adopters of IoT ecosystems.
  • Consumers will lag behind businesses and governments in IoT adoption. Still, they will purchase a massive number of devices and invest a significant amount of money in IoT ecosystems.

As IoT emerges, UX takes on greater urgency within enterprises — from zdnet.com by Joe McKendrick
User experience isn’t just a luxury — emerging Internet of Things, wearable, mobile, and virtual reality-based computing make it essential.

Excerpt:

Perhaps the renowned techno-futurist Mark Pesce said it best in a recent tweet: “I’d like to point out that [virtual reality] is the only display layer with profound UX implications. Fundamental ones. Ponder that.”

Pesce is spot on, of course, but we need to take his argument for heightened, immersive and interactive user experience a step further — not only is VR bringing it to the fore, but there is also the intense emphasis on connecting things — including wearables and mobile devices — into the emerging Internet of Things, as well as the ongoing challenges of the consumerization of IT, to make enterprise computing as intuitive and satisfying as consumer-based computing.

A recent survey of 7,725 executives and professionals shows that interest in UX — and UX testing — is on the rise. As summarized by Dennis McCafferty in CIO Insight, “respondents say that multi-device interaction–a.k.a., machine-to-machine tech–will greatly influence UX over the next five years. Which means CIOs and their tech teams should expect to devote more attention to UX for the indefinite future.”

The Internet of Things: It’s not quite here yet, but it’s definitely coming — from edtechmagazine.com by Nicci Fagan
Colleges and universities should take steps now to prepare for the impending barrage of connected devices and for the rise in IoT data.

Gartner warns of coming IoT data management overload — from readwrite.com by Donal Power

Excerpt:

The growth of the Internet of Things (IoT) is rapidly increasing the amount of data generated, and industry experts warn that the current river of unstructured data will soon turn into a flood. Alarmingly, a recent study highlighted concerns that most proposed approaches could lead to data management overload and prove ineffective for the coming torrent of data.

Enterprise Tech cited a recent Gartner report that examined the impact IoT will have on enterprise infrastructure. The report warned that “due to a lack of information capabilities adapted for the IoT” an estimated 25% of attempts to utilize IoT data will be abandoned before deployment ever occurs.

[Image: Adobe’s 2016 State of Content report]

Excerpt:

RULE #1: DESIGN FOR THE MULTISCREEN REALITY
Consumers are multiscreening more than ever before — optimize your content for it

  • On average, 83 percent of global consumers report they multiscreen, using 2.23 devices at the same time.

As attention spans shrink, 59 percent of consumers globally would rather engage with content that’s beautifully designed than with something simple — even when short on time.

More than half of consumers (57%) would prefer to watch videos on breaking news vs. read an article, and 63 percent would rather skim several short stories than read deeper articles.

Globally, consumers use an average of 5 devices and 10 services

Introducing first ever experiences for the Microsoft HoloLens Development Edition — from blogs.windows.com by Kudo Tsunoda

I am super excited about today’s announcement that the Microsoft HoloLens Development Edition is available for pre-order. We set out on a mission to deliver the world’s first untethered holographic computer and it is amazing to finally be at this point in time where developers will be receiving the very first versions so they can start building their own holographic experiences.

With HoloLens, we are committed to providing the development community with the best experience possible. In order to help get developers started creating experiences for HoloLens, we’ve provided a number of great resources. First of all, there is a complete set of documentation provided to developers both by the people who have created the platform and by the people who have been building holographic experiences. We want to share all of our holographic knowledge with developers so they can start bringing their holographic dreams to reality as easily as possible. We have also provided a host of tutorial videos to help people along. All of the documentation and videos can be found at dev.windows.com/holographic.
