Augmented Reality Technology: A student creates the closest thing yet to a magic ring — from forbes.com by Kevin Murnane

Excerpt:

Nat Martin set himself the problem of designing a control mechanism that can be used unobtrusively to meld AR displays with the user’s real-world environment. His solution was a controller in the shape of a ring that can be worn on the user’s finger. He calls it Scroll. It uses the ARKit software platform and contains an Arduino circuit board, a capacitive sensor, a gyroscope, an accelerometer, and a SoftPot potentiometer. Scroll works with any AR device that supports the Unity game engine, such as Google Cardboard or Microsoft’s HoloLens.
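The article doesn’t include Scroll’s firmware, but the core interaction — turning a finger position on the ring’s SoftPot strip into a discrete selection — is easy to sketch. Everything below (the function name, a 10-bit Arduino ADC range, evenly sized zones) is an assumption for illustration, written in Python rather than Arduino C++ for brevity:

```python
def softpot_to_selection(adc_value, num_items, adc_max=1023):
    """Map a raw SoftPot reading to a discrete menu index.

    A SoftPot membrane potentiometer reports finger position along its
    strip as a voltage; an Arduino's 10-bit ADC digitizes that into
    0..1023. Dividing the strip into equal zones turns a swipe into a
    selection among `num_items` AR menu entries.
    """
    if not 0 <= adc_value <= adc_max:
        raise ValueError("ADC reading out of range")
    # Scale into [0, num_items) and clamp the top edge of the strip.
    index = adc_value * num_items // (adc_max + 1)
    return min(index, num_items - 1)
```

On the actual hardware this mapping would presumably run in the Arduino loop, with the result streamed to the Unity app over serial or a wireless link.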

Also see:

Scroll from Nat on Vimeo.

Addendum on 8/15/17:

New iOS 11 ARKit Demo Shows Off Drawing With Fingers In Augmented Reality [Video] — from redmondpie.com by Oliver Haslam

Excerpt:

When Apple releases iOS 11 to the public next month, it will also release ARKit for the first time. The framework, designed to bring augmented reality to iOS, debuted during the opening keynote of WWDC 2017, when Apple announced iOS 11, and ever since then developers have been releasing new concepts and demos.

Those developers have given us a glimpse of what we can expect when apps taking advantage of ARKit start to ship alongside iOS 11, and the latest of those is a demonstration in which someone’s finger is used to draw on a notepad.

How SLAM technology is redrawing augmented reality’s battle lines — from venturebeat.com by Mojtaba Tabatabaie

Excerpt (emphasis DSC):

In early June, Apple introduced its first attempt to enter the AR/VR space with ARKit. What makes ARKit stand out for Apple is a technology called SLAM (Simultaneous Localization And Mapping). Every tech giant — especially Apple, Google, and Facebook — is investing heavily in SLAM technology, and whichever takes best advantage of SLAM tech will likely end up on top.

SLAM is a computer vision technique that captures visual data from the physical world as a set of points, giving the machine something it can interpret. SLAM makes it possible for machines to “have an eye and understand” what’s around them through visual input. What the machine sees with SLAM technology in a simple scene looks like the photo above, for example.

Using these points, machines can form an understanding of their surroundings. This data also helps AR developers like myself create much more interactive and realistic experiences. This understanding can be used in different scenarios such as robotics, self-driving cars, AI, and of course augmented reality.

The simplest form of understanding from this technology is recognizing walls, barriers, and floors. Right now, most AR SLAM technologies like ARKit use only floor recognition and position tracking to place AR objects around you, so they don’t actually know what’s going on in your environment well enough to react to it correctly. More advanced SLAM technologies, like Google Tango, can create a mesh of the environment, so the machine can not only tell you where the floor is but also identify walls and objects, allowing everything around you to become an element to interact with.
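The floor-recognition step described above can be illustrated with a toy version: given a sparse point cloud, find the lowest horizontal cluster of points. This is a deliberately crude stand-in for what ARKit actually does — the function name, bin size, and threshold are all invented for illustration:

```python
from collections import Counter

def detect_floor_height(points, bin_size=0.05):
    """Estimate the floor height from a sparse SLAM point cloud.

    Quantize each point's vertical (z) coordinate into bins of
    `bin_size` meters, then pick the lowest bin that still holds a
    large share of points. Real systems fit planes in all three
    dimensions and track them over time; this keeps only the idea.
    """
    bins = Counter(round(z / bin_size) for _x, _y, z in points)
    # Treat any bin with at least half as many points as the densest
    # bin as a candidate plane, and take the lowest one as the floor.
    threshold = max(bins.values()) * 0.5
    floor_bin = min(b for b, n in bins.items() if n >= threshold)
    return floor_bin * bin_size
```

A mesh-building system like Tango would repeat this kind of clustering along every axis, which is what lets it report walls and objects as well as the floor.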

The company with the most complete SLAM database will likely be the winner. This database will give these giants, metaphorically, an eye on the world: Facebook, for example, could tag and locate your photo just by analyzing the image, or Google could place ads and virtual billboards around you by analyzing the camera feed from your smart glasses. Your self-driving car could navigate with nothing more than visual data.

(Below emphasis via DSC)

IBM and Ricoh have partnered on a cognitive-enabled interactive whiteboard that uses IBM’s Watson intelligence and voice technologies to support voice commands, take notes and actions, and even translate into other languages.

The Intelligent Workplace Solution leverages IBM Watson and Ricoh’s interactive whiteboards to let participants access features by voice. It ensures that Watson doesn’t just listen but is an active meeting participant, using real-time analytics to help guide discussions.

Features of the new cognitive-enabled whiteboard solution include:

  • Global voice control of meetings: Once a meeting begins, any employee, whether in-person or located remotely in another country, can easily control what’s on the screen, including advancing slides, all through simple voice commands using Watson’s Natural Language API.
  • Translation of the meeting into another language: The Intelligent Workplace Solution can translate speakers’ words into several other languages and display them on screen or in transcript.
  • Easy-to-join meetings: With the swipe of a badge, the Intelligent Workplace Solution can log attendance and track key agenda items to ensure all key topics are discussed.
  • Ability to capture side discussions: During a meeting, team members can also hold side conversations that are displayed on the same whiteboard.
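The voice-control feature in the list above amounts to mapping a spoken utterance to a whiteboard action. The sketch below is a toy keyword matcher, not Watson’s Natural Language API — a real NLU service classifies meaning rather than matching substrings, and the phrases and action names here are invented:

```python
def parse_meeting_command(utterance):
    """Map a spoken utterance to a whiteboard action.

    A toy stand-in for the intent classification a service like
    Watson performs; the command vocabulary is made up for this demo.
    """
    commands = {
        "next slide": "ADVANCE_SLIDE",
        "previous slide": "PREVIOUS_SLIDE",
        "take a note": "CAPTURE_NOTE",
        "translate": "TRANSLATE_SPEECH",
    }
    text = utterance.lower().strip()
    # First matching phrase wins; anything else is rejected.
    for phrase, action in commands.items():
        if phrase in text:
            return action
    return "UNKNOWN"
```

The interesting part of the real product is everything this sketch omits: speech-to-text, speaker attribution, and routing the action to whichever remote participant’s screen should change.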

From DSC:

Holy smokes!

If you combine the technologies that Ricoh and IBM are using in their new cognitive-enabled interactive whiteboard with what Bluescape is doing — providing 160 acres of digital workspace to foster collaboration, whether you are working remotely or with others in the same physical space — you have one incredibly powerful platform!

#NLP  |  #AI  |  #VoiceRecognition  |  #CognitiveComputing
#SmartClassrooms  |  #LearningSpaces  |  #Collaboration  |  #Meetings

From DSC:
Can you imagine this as a virtual reality or a mixed reality-based app!?! Very cool.

This resource is incredible on multiple levels:

  • For their interface/interaction design
  • For their insights and ideas
  • For their creativity
  • For their graphics
  • …and more!

Microsoft Accelerates HoloLens V3 Development, Sidesteps V2 — from thurrott.com by Brad Sams

Excerpt:

Back when the first version of HoloLens came out, Microsoft created a roadmap that highlighted several release points for the product. This isn’t unusual: you start with the first device; second-generation devices are typically smaller and more affordable; and with version three you introduce new technology that upgrades the experience. This is a standard path in the technology sector. Microsoft, based on my sources, is sidelining what was going to be version two of HoloLens and going straight to version three.

While some may see it as bad news that a cheaper version of HoloLens will not arrive this year, or likely next year, accelerating the technology that brings an expanded field of view in a smaller footprint means a device that is usable in everyday life can arrive sooner.

Microsoft is playing for the long term with this technology to make sure it is well positioned for the next revolution in computing. By adjusting its path for HoloLens today, Microsoft is making sure it remains the segment leader for years to come.

From DSC:
When I saw the article below, I couldn’t help but wonder…what are the teaching & learning-related ramifications when new “skills” are constantly being added to devices like Amazon’s Alexa?

What does it mean for:

  • Students / learners
  • Faculty members
  • Teachers
  • Trainers
  • Instructional Designers
  • Interaction Designers
  • User Experience Designers
  • Curriculum Developers
  • …and others?

Will the capabilities found in Alexa simply come bundled as part of the “connected/smart TVs” of the future? Hmm….

NASA unveils a skill for Amazon’s Alexa that lets you ask questions about Mars — from geekwire.com by Kevin Lisota

Excerpt:

Amazon’s Alexa has gained many skills over the past year, such as being able to read tweets or deliver election results and fantasy football scores. Starting on Wednesday, you’ll be able to ask Alexa about Mars.

The new skill for the voice-controlled speaker comes courtesy of NASA’s Jet Propulsion Laboratory. It’s the first Alexa app from the space agency.

Tom Soderstrom, the chief technology officer at NASA’s Jet Propulsion Laboratory, was on hand at the AWS re:Invent conference in Las Vegas tonight to make the announcement.
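At bottom, a custom Alexa skill like NASA’s is a handler that receives an intent and returns a speech response. The sketch below follows the general shape of the Alexa Skills Kit request/response JSON envelope, but the intent names and Mars facts are invented for illustration — this is not JPL’s actual skill:

```python
def handle_alexa_request(event):
    """Return an Alexa-style response for a hypothetical Mars-facts skill.

    `event` is the JSON request Alexa sends to the skill's backend;
    the reply wraps plain-text speech in the standard response envelope.
    """
    facts = {
        "MarsDistanceIntent": "Mars is about 225 million kilometers "
                              "from Earth on average.",
        "MarsWeatherIntent": "Mars is cold: its average surface "
                             "temperature is about minus 63 Celsius.",
    }
    intent = event.get("request", {}).get("intent", {}).get("name")
    text = facts.get(intent, "Sorry, I don't know that about Mars yet.")
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```

From a teaching-and-learning angle, note how little code stands between a content database and a voice interface — which is exactly why new “skills” appear so quickly.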

Also see:

What Is Alexa? What Is the Amazon Echo, and Should You Get One? — from thewirecutter.com by Grant Clauser

Amazon launches new artificial intelligence services for developers: Image recognition, text-to-speech, Alexa NLP — from geekwire.com by Taylor Soper

Excerpt (emphasis DSC):

Amazon today announced three new artificial intelligence-related toolkits for developers building apps on Amazon Web Services.

At the company’s AWS re:Invent conference in Las Vegas, Amazon showed how developers can use three new services — Amazon Lex, Amazon Polly, and Amazon Rekognition — to build artificial intelligence features into apps for platforms like Slack, Facebook Messenger, Zendesk, and others.

The idea is to let developers utilize the machine learning algorithms and technology that Amazon has already created for its own processes and services like Alexa. Instead of developing their own AI software, AWS customers can simply use an API call or the AWS Management Console to incorporate AI features into their own apps.
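As a sketch of what “simply use an API call” looks like in practice, here is a thin wrapper around Rekognition’s real DetectLabels operation. The client is injected so a stub can stand in for `boto3.client("rekognition")` in environments without AWS credentials; the stub’s labels are obviously canned:

```python
def describe_image_labels(rekognition_client, image_bytes, max_labels=5):
    """List label names Rekognition detects in an image.

    `detect_labels` is a genuine Rekognition API operation; passing
    the client in lets tests substitute a stub for the live service.
    """
    response = rekognition_client.detect_labels(
        Image={"Bytes": image_bytes}, MaxLabels=max_labels
    )
    return [label["Name"] for label in response["Labels"]]


class StubRekognition:
    """Canned stand-in for boto3.client("rekognition") — test use only."""

    def detect_labels(self, Image, MaxLabels):
        return {"Labels": [{"Name": "Dog"}, {"Name": "Park"}][:MaxLabels]}
```

Against the live service you would pass `boto3.client("rekognition")` instead of the stub; the wrapper itself doesn’t change, which is the point of the managed-service model the excerpt describes.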

Amazon announces three new AI services, including a text-to-voice service, Amazon Polly — by D.B. Hebbard

AWS Announces Three New Amazon AI Services
Amazon Lex, the technology that powers Amazon Alexa, enables any developer to build rich, conversational user experiences for web, mobile, and connected device apps; preview starts today

Amazon Polly transforms text into lifelike speech, enabling apps to talk with 47 lifelike voices in 24 languages

Amazon Rekognition makes it easy to add image analysis to applications, using powerful deep learning-based image and face recognition

Capital One, Motorola Solutions, SmugMug, American Heart Association, NASA, HubSpot, Redfin, Ohio Health, DuoLingo, Royal National Institute of Blind People, LingApps, GoAnimate, and Coursera are among the many customers using these Amazon AI Services

Excerpt:

SEATTLE–(BUSINESS WIRE)–Nov. 30, 2016– Today at AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), announced three Artificial Intelligence (AI) services that make it easy for any developer to build apps that can understand natural language, turn text into lifelike speech, have conversations using voice or text, analyze images, and recognize faces, objects, and scenes. Amazon Lex, Amazon Polly, and Amazon Rekognition are based on the same proven, highly scalable Amazon technology built by the thousands of deep learning and machine learning experts across the company. Amazon AI services all provide high-quality, high-accuracy AI capabilities that are scalable and cost-effective. Amazon AI services are fully managed services so there are no deep learning algorithms to build, no machine learning models to train, and no up-front commitments or infrastructure investments required. This frees developers to focus on defining and building an entirely new generation of apps that can see, hear, speak, understand, and interact with the world around them.

To learn more about Amazon Lex, Amazon Polly, or Amazon Rekognition, visit:
https://aws.amazon.com/amazon-ai

Top 200 Tools for Learning 2016: Overview — from c4lpt.co.uk by Jane Hart

Also see Jane’s:

  1. TOP 100 TOOLS FOR PERSONAL & PROFESSIONAL LEARNING (for formal/informal learning and personal productivity)
  2. TOP 100 TOOLS FOR WORKPLACE LEARNING (for training, e-learning, performance support, and social collaboration)
  3. TOP 100 TOOLS FOR EDUCATION (for use in primary and secondary (K-12) schools, colleges, universities, and adult education)

Also see Jane’s “Best of Breed 2016” where she breaks things down into:

  1. Instructional tools
  2. Content development tools
  3. Social tools
  4. Personal tools

If you doubt that we are on an exponential pace of change, you need to check these articles out! [Christian]

From DSC:
The articles listed in this PDF document demonstrate the exponential pace of technological change that many nations across the globe are currently experiencing, and will likely be experiencing for the foreseeable future. As we are no longer on a linear trajectory, we need to consider what this new trajectory means for how we:

  • Educate and prepare our youth in K-12
  • Educate and prepare our young men and women studying within higher education
  • Restructure/re-envision our corporate training/L&D departments
  • Equip our freelancers and others to find work
  • Help people in the workforce remain relevant/marketable/properly skilled
  • Encourage and better enable lifelong learning
  • Attempt to keep up w/ this pace of change — legally, ethically, morally, and psychologically

PDF file here

One thought that comes to mind…when we’re moving this fast, we need to be looking upwards and outwards into the horizons — constantly pulse-checking the landscapes. We can’t be looking down or be so buried in our current positions/tasks that we aren’t noticing the changes that are happening around us.

From DSC:
Interactive video — a potentially very powerful medium to use, especially for blended and online-based courses or training-related materials! This interactive piece from Heineken is very well done, even remembering how you answered and coming up with an evaluation of you from its 12-question “interview.”
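The “remembers how you answered” mechanic is essentially a branching state machine keyed on the current clip and the viewer’s answer, with the answers kept around for a final evaluation. A minimal sketch, with made-up clip names (this is my reconstruction of the idea, not Heineken’s implementation):

```python
class InteractiveVideo:
    """Minimal branching-video engine.

    `branches` maps (current_clip, viewer_answer) -> next_clip;
    every answer is also recorded so the piece can build a final
    evaluation of the viewer, as the Heineken interview does.
    """

    def __init__(self, branches):
        self.branches = branches
        self.answers = {}  # clip -> the answer the viewer gave

    def answer(self, clip, choice):
        """Record the viewer's choice and return the next clip to play."""
        self.answers[clip] = choice
        return self.branches.get((clip, choice), "end")

    def evaluation(self):
        """A real piece would score twelve answers; this just replays them."""
        return dict(self.answers)
```

The engine is the easy part — as the team list below suggests, the expense is in writing, shooting, and editing a video segment for every branch.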

But notice again: a TEAM of specialists is needed to create such a piece. Neither a faculty member, a trainer, nor an instructional designer can do something like this on their own. Some of the positions I could imagine here are:

  • Script writer(s)
  • Editor(s)
  • Actors and actresses
  • Those skilled in stage lighting and sound / audio recording
  • Digital video editors
  • Programmers
  • Graphic designers
  • Web designers
  • Producers
  • Product marketers
  • …and perhaps others

This is the kind of work that I wish we saw more of in the world of online and blended courses!  Also, I appreciated their use of humor. Overall, a very engaging, fun, and informative piece!

© 2017 | Daniel Christian