Augmented Reality Technology: A student creates the closest thing yet to a magic ring — from forbes.com by Kevin Murnane

Excerpt:

Nat Martin set himself the problem of designing a control mechanism that can be used unobtrusively to meld AR displays with the user’s real-world environment. His solution was a controller in the shape of a ring that can be worn on the user’s finger. He calls it Scroll. It uses the ARKit software platform and contains an Arduino circuit board, a capacitive sensor, a gyroscope, an accelerometer, and a SoftPot potentiometer. Scroll works with any AR device that supports the Unity game engine, such as Google Cardboard or Microsoft’s HoloLens.
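The article doesn’t publish Scroll’s firmware, so the following is only a rough sketch of the kind of input mapping it describes: a SoftPot position reading plus a touch state, turned into a signed scroll delta an AR app could consume. The function name, sensitivity parameter, and sample values are all assumptions for illustration, not Martin’s implementation.

```python
# Illustrative sketch only (not Nat Martin's actual Scroll code): mapping
# raw ring readings -- a SoftPot position (0.0-1.0 along the strip) plus a
# capacitive touch state -- into a scroll delta for an AR application.

def softpot_to_scroll(prev_pos, curr_pos, touched, sensitivity=1.0):
    """Return a scroll delta from two successive SoftPot readings.

    No touch, or no previous reading, means no scroll.
    """
    if not touched or prev_pos is None:
        return 0.0
    return (curr_pos - prev_pos) * sensitivity

# A simulated stream of (position, touched) samples from the ring:
samples = [(0.10, True), (0.25, True), (0.40, True), (0.40, False)]
prev = None
deltas = []
for pos, touched in samples:
    deltas.append(softpot_to_scroll(prev, pos, touched))
    prev = pos
print(deltas)  # first and last samples yield 0.0 (no prior reading / no touch)
```

Only relative finger movement while touching the strip produces scrolling, which is one plausible way a ring like this could stay unobtrusive.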

Also see:

Scroll from Nat on Vimeo.

Addendum on 8/15/17:

New iOS 11 ARKit Demo Shows Off Drawing With Fingers In Augmented Reality [Video] — from redmondpie.com by Oliver Haslam

Excerpt:

When Apple releases iOS 11 to the public next month, it will also release ARKit for the first time. The framework, designed to bring augmented reality to iOS, first debuted during the opening keynote of WWDC 2017, when Apple announced iOS 11, and ever since then we have been seeing new concepts and demos released by developers.

Those developers have given us a glimpse of what we can expect when apps taking advantage of ARKit start to ship alongside iOS 11, and the latest of those is a demonstration in which someone’s finger is used to draw on a notepad.

Why Natural Language Processing is the Future of Business Intelligence — from dzone.com by Gur Tirosh
Until now, we have been interacting with computers in a way that they understand, rather than us. We have learned their language. But now, they’re learning ours.

Excerpt:

Every time you ask Siri for directions, a complex chain of cutting-edge code is activated. It allows “her” to understand your question, find the information you’re looking for, and respond to you in a language that you understand. This has only become possible in the last few years. Until now, we have been interacting with computers in a way that they understand, rather than us. We have learned their language.

But now, they’re learning ours.

The technology underpinning this revolution in human-computer relations is Natural Language Processing (NLP). And it’s already transforming BI, in ways that go far beyond simply making the interface easier. Before long, business-transforming, life-changing information will be discovered merely by talking with a chatbot.

This future is not far away. In some ways, it’s already here.

What Is Natural Language Processing?
NLP, otherwise known as computational linguistics, is the combination of Machine Learning, AI, and linguistics that allows us to talk to machines as if they were human.
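To make that definition concrete, here is a deliberately tiny, keyword-based sketch of the mapping NLP performs: free-form text in, structured intent out. Real NLP systems (including the one behind Siri) use statistical and neural models rather than keyword sets; the intents and keywords below are invented purely for illustration.

```python
# Toy intent detection: map an utterance to the intent whose keyword set
# overlaps it most. This only shows NLP's input/output shape, not how
# production systems actually work.

INTENT_KEYWORDS = {
    "get_directions": {"directions", "route", "navigate"},
    "weather_report": {"weather", "forecast", "rain", "temperature"},
}

def detect_intent(utterance):
    """Return the best-matching intent name, or 'unknown'."""
    words = set(utterance.lower().split())
    best, best_score = "unknown", 0
    for intent, keywords in INTENT_KEYWORDS.items():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = intent, score
    return best

print(detect_intent("what is the weather forecast today"))  # -> weather_report
```

The point of the sketch is the interface: a BI tool with an NLP front end accepts a plain-language question and resolves it to a structured query it already knows how to run.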

But NLP aims to eventually render GUIs — even UIs — obsolete, so that interacting with a machine is as easy as talking to a human.

Voice technology may be poised for a breakthrough with Chinese consumers. — from jwtintelligence.com by Shepherd Laughlin

Excerpt:

Worldwide, more consumers are interacting with technology using their voices. As the Innovation Group London and Mindshare Futures found in our Speak Easy report, consumers who use voice technology think that it frees them from having to look at screens, helps them organize their lives, and is less mentally draining than traditional touch or typing devices.

Among the many markets where voice technology is catching on, China faces unique challenges and opportunities. The complex Chinese writing system means that current methods of selecting characters using keyboards can be slow and laborious, which suggests that fully functional voice technology would find an instant market. Spoken Chinese, however, has proven difficult for computers to decipher.

But voice technology is moving ahead anyway. 2015 saw the release of the LingLong DingDong, a product created through a partnership between iFlytek and JD.com, which has become known as China’s answer to the Amazon Echo. The device can understand both Mandarin and Cantonese. It plays music, gives directions, answers questions about the weather and the news, and more. The Tmall Genie, a similar product, functions using Alibaba’s voice assistant, AliGenie.

2017 Ed Tech Trends: The Halfway Point — from campustechnology.com by Rhea Kelly
Four higher ed IT leaders weigh in on the current state of education technology and what’s ahead.

This article includes perspectives shared by the following 4 IT leaders:

  • Susan Aldridge, Senior Vice President for Online Learning, Drexel University (PA); President, Drexel University Online
  • Daniel Christian, Adjunct Faculty Member, Calvin College
  • Marci Powell, CEO/President, Marci Powell & Associates; Chair Emerita and Past President, United States Distance Learning Association
  • Phil Ventimiglia, Chief Innovation Officer, Georgia State University

From DSC:
Reviewing the article below made me think of 2 potential additions to the Learning & Development Groups/Departments out there:

  1. Help people build their own learning ecosystems
  2. Design, develop, and implement workbots for self-service

Chatbots Poised to Revolutionize HR — by Pratibha Nanduri

Excerpt:

Self-service is becoming an increasingly popular trend where people want to perform their tasks without needing help or input from anyone else. The increasing popularity of this trend is mainly attributed to the increasing use of computers and mobile devices to electronically manage all kinds of tasks.

As employee tolerance for downtime reduces and preferences for mobility increases, the bureaucracy which exists in managing everyday HR related tasks in the workplace will also have to be replaced. A large number of companies have still not automated even their basic HR services such as handling inquiries about holidays and leaves. Employees in such organizations still have to send their query and then wait for HR to respond.

As the number of employees goes up in an organization, the time taken by HR managers to respond to mundane admin tasks also increases. This leaves very little time for the HR manager to focus on strategic HR initiatives.

Chatbots that are powered by AI and machine learning are increasingly being used to automate mundane and repetitive tasks. They can also be leveraged in HR to simulate intelligent SMS-based conversations between employees and HR team members to automate basic HR tasks.
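The self-service pattern described above can be sketched in a few lines: a bot answers routine leave queries instantly and hands everything else to a human. This is an illustrative rule-based sketch, not any vendor’s product; the employee data, phrasing rules, and replies are hypothetical, and a production HR chatbot would use NLP and machine learning rather than substring checks.

```python
# Hypothetical rule-based HR bot: instant answers for routine leave
# queries, escalation to a human for anything it does not recognize.

LEAVE_BALANCES = {"alice": 12, "bob": 3}  # sample data: days remaining

def hr_bot(employee, message):
    """Answer a leave/holiday balance question, or escalate."""
    text = message.lower()
    if "leave" in text or "holiday" in text:
        days = LEAVE_BALANCES.get(employee)
        if days is not None:
            return f"You have {days} leave day(s) remaining."
    return "I have forwarded your question to the HR team."

print(hr_bot("alice", "How many leave days do I have left?"))
# -> You have 12 leave day(s) remaining.
```

Even this crude version shows the payoff the article describes: the common case never waits on an HR manager, so their time is freed for strategic work.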

Campus Technology 2017: Virtual Reality Is More Than a New Medium — from edtechmagazine.com by Amy Burroughs
Experts weigh in on the future of VR in higher education.

Excerpts:

“It’s actually getting pretty exciting,” Georgieva said, noting that legacy companies and startups alike have projects in the works that will soon be on the market. Look for standalone, wireless VR headsets later this year from Facebook and Google.

“I think it’s going to be a universal device,” he said. “Eventually, we’ll end up with some kind of glasses where we can just dial in the level of immersion that we want.”

— Per Emery Craig, at Campus Technology 2017 Conference


“Doing VR for the sake of VR makes no sense whatsoever,” Craig said. “Ask when does it make sense to do this in VR? Does a sense of presence help this, or is it better suited to traditional media?”

Virtual Reality: The User Experience of Story — from blogs.adobe.com

Excerpt:

Solving the content problems in VR requires new skills that are only just starting to be developed and understood, skills that are quite different from traditional storytelling. VR is a nascent medium. One part story, one part experience. And while many of the concepts from film and theater can be used, storytelling through VR is not like making a movie or a play.

In VR, the user has to be guided through an experience of a story, which means many of the challenges in telling a VR story are closer to UX design than anything from film or theater.

Take the issue of frameless scenes. In a VR experience, there are no borders, and no guarantees where a user will look. Scenes must be designed to attract user attention, in order to guide them through the experience of a story.

Sound design, staging cues, lighting effects, and movement can all be used to draw a user’s attention.

However, there’s a fine line between attraction and distraction.

“In VR, it’s easy to overwhelm the user. If you see a flashing light and in the background, you hear a sharp siren, and then something moves, you’ve given the user too many things to understand,” says Di Dang, User Experience Lead at POP, Seattle. “Be intentional and deliberate about how you grab audience attention.”

VR is a storytelling superpower. No other medium has quite the same potential to create empathy and drive human connection. Because viewers are for all intents and purposes living the experience, they walk away with that history coded into their memory banks—easily accessible for future responses.

Google’s latest VR experiment is teaching people how to make coffee — from techradar.com by Parker Wilhelm
All in a quest to see how effective learning in virtual reality is

Excerpt:

Teaching with a simulation is no new concept, but Google’s Daydream Labs wants to see exactly how useful virtual reality can be for teaching people practical skills.

In a recent experiment, Google ran a simulation of an interactive espresso machine in VR. From there, it had a group of people try their virtual hand at brewing a cup of java before being tasked to make the real thing.

Addendum on 7/26/17:

4 ways augmented reality could change corporate training forever — by Jay Samit

Excerpt:

In the coming years, machine learning and augmented reality will likely take both educational approaches to the next level by empowering workers to have the latest, most accurate information available in context, when and where they need it most.

Here are four ways that digital reality can revolutionize corporate training…

…augmented reality (AR) is poised not only to address issues faced by our aging workforce, but to fundamentally increase productivity by changing how all employees are trained in the future.

Google’s AI Guru Says That Great Artificial Intelligence Must Build on Neuroscience — from technologyreview.com by Jamie Condliffe
Inquisitiveness and imagination will be hard to create any other way.

Excerpt:

Demis Hassabis knows a thing or two about artificial intelligence: he founded the London-based AI startup DeepMind, which was purchased by Google for $650 million back in 2014. Since then, his company has wiped the floor with humans at the complex game of Go and begun taking steps toward crafting more general AIs.

But now he’s come out and said that he believes the only way for artificial intelligence to realize its true potential is with a dose of inspiration from human intellect.

Currently, most AI systems are based on layers of mathematics that are only loosely inspired by the way the human brain works. But different types of machine learning, such as speech recognition or identifying objects in an image, require different mathematical structures, and the resulting algorithms are only able to perform very specific tasks.

Building AI that can perform general tasks, rather than niche ones, is a long-held desire in the world of machine learning. But the truth is that expanding those specialized algorithms to something more versatile remains an incredibly difficult problem, in part because human traits like inquisitiveness, imagination, and memory don’t exist or are only in their infancy in the world of AI.

First, they say, a better understanding of how the brain works will allow us to create new structures and algorithms for electronic intelligence.

From DSC:
Glory to God! I find it very interesting to see how people and organizations — via very significant costs/investments — keep trying to mimic the most amazing thing — the human mind. Turns out, that’s not so easy:

But the truth is that expanding those specialized algorithms to something more versatile remains an incredibly difficult problem…

Therefore, some scripture comes to my own mind here:

Psalm 139:14 New International Version (NIV)

14 I praise you because I am fearfully and wonderfully made;
    your works are wonderful,
    I know that full well.

Job 12:13 (NIV)

13 “To God belong wisdom and power;
    counsel and understanding are his.

Psalm 104:24 (NIV)

24 How many are your works, Lord!
    In wisdom you made them all;
    the earth is full of your creatures.

Revelation 4:11 (NIV)

11 “You are worthy, our Lord and God,
    to receive glory and honor and power,
for you created all things,
    and by your will they were created
    and have their being.”

Yes, the LORD designed the human mind by His unfathomable and deep wisdom and understanding.

Glory to God!

Thanks be to God!

Amazon’s Alexa passes 15,000 skills, up from 10,000 in February — from techcrunch.com by Sarah Perez

Excerpt:

Amazon’s Alexa voice platform has now passed 15,000 skills — the voice-powered apps that run on devices like the Echo speaker, Echo Dot, newer Echo Show and others. The figure is up from the 10,000 skills Amazon officially announced back in February, which had then represented a 3x increase from September.

The new 15,000 figure was first reported via third-party analysis from Voicebot, and Amazon has now confirmed to TechCrunch that the number is accurate.

According to Voicebot, which only analyzed skills in the U.S., the milestone was reached for the first time on June 30, 2017. During the month of June, new skill introductions increased by 23 percent, up from the less than 10 percent growth that was seen in each of the prior three months.

The milestone also represents a more than doubling of the number of skills that were available at the beginning of the year, when Voicebot reported there were then 7,000 skills. That number was officially confirmed by Amazon at CES.
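A quick arithmetic pass over the figures quoted above (pure arithmetic, no Alexa API involved) confirms the growth characterizations:

```python
# Sanity-checking the skill-count figures reported above.
start_of_year = 7_000   # Voicebot's January figure, confirmed by Amazon at CES
february = 10_000       # announced by Amazon in February
june = 15_000           # milestone reached June 30, 2017

growth_since_feb = (june - february) / february   # 50% in roughly four months
vs_start_of_year = june / start_of_year           # ~2.14x: "more than doubling"
print(f"{growth_since_feb:.0%} growth since February; "
      f"{vs_start_of_year:.2f}x the start-of-year count")
```

So both claims check out: 50% growth since the February announcement, and more than double the catalog size from the start of the year.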

From DSC:
Again, I wonder…what are the implications for learning from this new, developing platform?

Robots and AI are going to make social inequality even worse, says new report — from theverge.com
Rich people are going to find it easier to adapt to automation

Excerpt:

Most economists agree that advances in robotics and AI over the next few decades are likely to lead to significant job losses. But what’s less often considered is how these changes could also impact social mobility. A new report from UK charity Sutton Trust explains the danger, noting that unless governments take action, the next wave of automation will dramatically increase inequality within societies, further entrenching the divide between rich and poor.

There are a number of reasons for this, say the report’s authors, including the ability of richer individuals to re-train for new jobs; the rising importance of “soft skills” like communication and confidence; and the reduction in the number of jobs used as “stepping stones” into professional industries.

For example, the demand for paralegals and similar professions is likely to be reduced over the coming years as artificial intelligence is trained to handle more administrative tasks. In the UK more than 350,000 paralegals, payroll managers, and bookkeepers could lose their jobs if automated systems can do the same work.

Re-training for new jobs will also become a crucial skill, and it’s individuals from wealthier backgrounds that are more able to do so, says the report. This can already be seen in the disparity in terms of post-graduate education, with individuals in the UK with working class or poorer backgrounds far less likely to re-train after university.

From DSC:
I can’t emphasize this enough. There are dangerous, tumultuous times ahead if we can’t figure out ways to help ALL people within the workforce reinvent themselves quickly, cost-effectively, and conveniently. Re-skilling/up-skilling ourselves is becoming increasingly important. And I’m not just talking about highly educated people. I’m talking about people whose jobs are going to disappear in the near future — people whose stepping stones into brighter futures are vanishing, and who are going to wake up to a very different world. A very harsh world.

That’s why I’m so passionate about helping to develop a next-generation learning platform. Higher education, as an industry, has some time left to figure out its part/contribution in this new world. But the window of time could be closing, as another window of opportunity / era could be opening up for “the next Amazon.com of higher education.”

It’s up to current, traditional institutions of higher education as to how much they want to be a part of the solution. Some of the questions each institution ought to be asking are:

  1. Given our institution’s mission/vision, what landscapes should we be pulse-checking?
  2. Do we have faculty/staff/members of administration looking at those landscapes that are highly applicable to our students and to their futures? How, specifically, are the insights from those employees fed into the strategic plans of our institution?
  3. What are some possible scenarios as a result of these changing landscapes? What would our response(s) be for each scenario?
  4. Are there obstacles keeping us from innovating and responding to the shifting landscapes, especially within the workforce?
  5. How do we remove those obstacles?
  6. On a scale of 0 (we don’t innovate at all) to 10 (highly innovative), where is our culture today? Where do we hope to be 5 years from now? How do we get there?

…and there are many other questions no doubt. But I don’t think we’re looking into the future nearly enough to see the massive needs — and real issues — ahead of us.

The report, which was carried out by the Boston Consulting Group and published this Wednesday [7/12/17], looks specifically at the UK, where it says some 15 million jobs are at risk of automation. But the Sutton Trust says its findings are also relevant to other developed nations, particularly the US, where social mobility is a major problem.

© 2017 | Daniel Christian