Augmented Reality Technology: A student creates the closest thing yet to a magic ring — from forbes.com by Kevin Murnane

Excerpt:

Nat Martin set himself the problem of designing a control mechanism that can be used unobtrusively to meld AR displays with the user’s real-world environment. His solution was a controller in the shape of a ring that can be worn on the user’s finger. He calls it Scroll. It uses the ARKit software platform and contains an Arduino circuit board, a capacitive sensor, gyroscope, accelerometer, and a Softpot potentiometer. Scroll works with any AR device that supports the Unity game engine, such as Google Cardboard or Microsoft’s HoloLens.

Also see:

Scroll from Nat on Vimeo.

Addendum on 8/15/17:

New iOS 11 ARKit Demo Shows Off Drawing With Fingers In Augmented Reality [Video] — from redmondpie.com by Oliver Haslam

Excerpt:

When Apple releases iOS 11 to the public next month, it will also release ARKit for the first time. The framework, designed to bring augmented reality to iOS, debuted during the opening keynote of WWDC 2017, when Apple announced iOS 11, and ever since then developers have been releasing new concepts and demos.

Those developers have given us a glimpse of what we can expect when apps taking advantage of ARKit start to ship alongside iOS 11, and the latest of those is a demonstration in which someone’s finger is used to draw on a notepad.

How SLAM technology is redrawing augmented reality’s battle lines — from venturebeat.com by Mojtaba Tabatabaie

Excerpt (emphasis DSC):

In early June, Apple introduced its first attempt to enter the AR/VR space with ARKit. What makes ARKit stand out for Apple is a technology called SLAM (Simultaneous Localization And Mapping). Every tech giant — especially Apple, Google, and Facebook — is investing heavily in SLAM technology, and whichever takes best advantage of it will likely end up on top.

SLAM is a computer vision technique that captures visual data from the physical world as a set of points in order to build up an understanding for the machine. SLAM makes it possible for machines to “have an eye and understand” what’s around them through visual input. What the machine sees with SLAM technology from a simple scene looks like the photo above, for example.

Using these points, machines can build an understanding of their surroundings. This data also helps AR developers like myself create much more interactive and realistic experiences. This understanding can be used in different scenarios such as robotics, self-driving cars, AI and, of course, augmented reality.

The simplest form of understanding from this technology is recognizing walls, barriers, and floors. Right now most AR SLAM technologies, such as ARKit, only use floor recognition and position tracking to place AR objects around you, so they don’t actually know what’s going on in your environment and can’t correctly react to it. More advanced SLAM technologies, like Google Tango, can create a mesh of your environment, so the machine can not only tell you where the floor is but can also identify walls and objects in your environment, allowing everything around you to become an element to interact with.
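From DSC:
To make the floor-recognition idea above concrete, here is a deliberately simplified sketch (my own illustration, not ARKit’s or Tango’s actual algorithm) of how a system might guess the floor from the 3D feature points a SLAM tracker produces. The point cloud and the 5 cm bin size are invented for the example.

```python
from collections import Counter

def estimate_floor_height(points, bin_size=0.05):
    """points: (x, y, z) tuples with y as the vertical axis, in meters.
    The floor is taken to be the most densely observed horizontal level:
    bucket every point by height, then pick the fullest bucket."""
    heights = Counter(round(y / bin_size) for (_, y, _) in points)
    best_bin, _ = heights.most_common(1)[0]
    return best_bin * bin_size

# 100 points scattered on a floor at y = 0.0, plus 3 points on a tabletop.
scene = [(x * 0.1, 0.0, z * 0.1) for x in range(10) for z in range(10)]
scene += [(0.5, 0.7, 0.5), (0.6, 0.7, 0.5), (0.5, 0.7, 0.6)]
print(estimate_floor_height(scene))  # -> 0.0
```

A mesh-building system like Tango goes much further, of course, but the principle is the same: aggregate raw points until stable structure (floors, walls, objects) emerges.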

The company with the most complete SLAM database will likely be the winner. This database will allow these giants, metaphorically, to have an eye on the world: Facebook, for example, could tag and know the location of your photo just by analyzing the image, and Google could place ads and virtual billboards around you by analyzing the camera feed from your smart glasses. Your self-driving car could navigate itself with nothing more than visual data.

(Below emphasis via DSC)

IBM and Ricoh have partnered on a cognitive-enabled interactive whiteboard that uses IBM’s Watson intelligence and voice technologies to support voice commands, note-taking, action items, and even translation into other languages.

The Intelligent Workplace Solution leverages IBM Watson and Ricoh’s interactive whiteboards to let users access features by voice. It makes sure that Watson doesn’t just listen, but is an active meeting participant, using real-time analytics to help guide discussions.

Features of the new cognitive-enabled whiteboard solution include:

  • Global voice control of meetings: Once a meeting begins, any employee, whether in-person or located remotely in another country, can easily control what’s on the screen, including advancing slides, all through simple voice commands using Watson’s Natural Language API.
  • Translation of the meeting into another language: The Intelligent Workplace Solution can translate speakers’ words into several other languages and display them on screen or in transcript.
  • Easy-to-join meetings: With the swipe of a badge the Intelligent Workplace Solution can log attendance and track key agenda items to ensure all key topics are discussed.
  • Ability to capture side discussions: During a meeting, team members can also hold side conversations that are displayed on the same whiteboard.

From DSC:

Holy smokes!

If you combine the technologies that Ricoh and IBM are using with their new cognitive-enabled interactive whiteboard with what Bluescape is doing — providing 160 acres of digital workspace that’s used to foster collaboration (whether you are working remotely or with others in the same physical space) — you have one incredibly powerful platform!

#NLP  |  #AI  |  #VoiceRecognition |  #CognitiveComputing
#SmartClassrooms  |  #LearningSpaces  |#Collaboration |  #Meetings 


From DSC:
Can you imagine this as a virtual reality or a mixed reality-based app!?! Very cool.

This resource is incredible on multiple levels:

  • For their interface/interaction design
  • For their insights and ideas
  • For their creativity
  • For their graphics
  • …and more!

Microsoft Accelerates HoloLens V3 Development, Sidesteps V2 — from thurrott.com by Brad Sams

Excerpt:

Back when the first version of HoloLens came out, Microsoft created a roadmap that highlighted several release points for the product. This isn’t unusual: you start with the first device, second-generation devices are typically smaller and more affordable, and then with version three you introduce new technology that upgrades the experience; this is a standard path in the technology sector. Microsoft, based on my sources, is sidelining what was going to be version two of HoloLens and is going straight to version three.

While some may see it as bad news that a cheaper version of HoloLens will not arrive this year, or likely next year, by accelerating the technology that will bring us an expanded field of view in a smaller footprint, the new roadmap allows a device that is usable in everyday life to arrive sooner.

Microsoft is playing for the long-term with this technology to make sure they are well positioned for the next revolution in computing. By adjusting their path today for HoloLens, they are making sure that they remain the segment leader for years to come.

From DSC:
When I saw the article below, I couldn’t help but wonder…what are the teaching & learning-related ramifications when new “skills” are constantly being added to devices like Amazon’s Alexa?

What does it mean for:

  • Students / learners
  • Faculty members
  • Teachers
  • Trainers
  • Instructional Designers
  • Interaction Designers
  • User Experience Designers
  • Curriculum Developers
  • …and others?

Will the capabilities found in Alexa simply come bundled as a part of the “connected/smart TVs” of the future? Hmm….

NASA unveils a skill for Amazon’s Alexa that lets you ask questions about Mars — from geekwire.com by Kevin Lisota

Excerpt:

Amazon’s Alexa has gained many skills over the past year, such as being able to read tweets or deliver election results and fantasy football scores. Starting on Wednesday, you’ll be able to ask Alexa about Mars.

The new skill for the voice-controlled speaker comes courtesy of NASA’s Jet Propulsion Laboratory. It’s the first Alexa app from the space agency.

Tom Soderstrom, the chief technology officer at NASA’s Jet Propulsion Laboratory, was on hand at the AWS re:Invent conference in Las Vegas tonight to make the announcement.
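From DSC:
For those wondering what an Alexa “skill” actually is under the hood: it is essentially a web service (often an AWS Lambda function) that receives a JSON request naming the intent the user’s utterance triggered, and returns JSON containing the speech to play back. The sketch below shows the shape of that exchange; the intent name and the Mars fact are my own placeholders, not NASA’s actual skill.

```python
# Canned responses keyed by intent name (illustrative only).
MARS_FACTS = {
    "MarsFactIntent": "A day on Mars is about 40 minutes longer than a day on Earth.",
}

def lambda_handler(event, context=None):
    """Minimal Alexa-style handler: look up the triggered intent and wrap
    the reply text in the response envelope the Alexa service expects."""
    intent = event["request"]["intent"]["name"]
    speech = MARS_FACTS.get(intent, "Sorry, I don't know that about Mars.")
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

request = {"request": {"type": "IntentRequest",
                       "intent": {"name": "MarsFactIntent"}}}
print(lambda_handler(request)["response"]["outputSpeech"]["text"])
```

Every new “skill” added to Alexa is, at bottom, another handler like this plus the phrases that route users to it.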

[Image: NASA’s Alexa skill announcement, 11/29/16]

Also see:

What Is Alexa? What Is the Amazon Echo, and Should You Get One? — from thewirecutter.com by Grant Clauser


Amazon launches new artificial intelligence services for developers: Image recognition, text-to-speech, Alexa NLP — from geekwire.com by Taylor Soper

Excerpt (emphasis DSC):

Amazon today announced three new artificial intelligence-related toolkits for developers building apps on Amazon Web Services.

At the company’s AWS re:Invent conference in Las Vegas, Amazon showed how developers can use three new services — Amazon Lex, Amazon Polly, and Amazon Rekognition — to build artificial intelligence features into apps for platforms like Slack, Facebook Messenger, ZenDesk, and others.

The idea is to let developers utilize the machine learning algorithms and technology that Amazon has already created for its own processes and services like Alexa. Instead of developing their own AI software, AWS customers can simply use an API call or the AWS Management Console to incorporate AI features into their own apps.
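From DSC:
Here is roughly what that “simple API call” looks like in practice. The shape mirrors the boto3 (AWS SDK for Python) detect_labels call for Amazon Rekognition, but since the real client needs AWS credentials and a network connection, the client below is a stand-in and its labels are fabricated for illustration.

```python
class FakeRekognitionClient:
    """Stand-in with a boto3-like detect_labels shape; a real client
    would send Image["Bytes"] to the Rekognition service."""
    def detect_labels(self, Image, MaxLabels=10):
        return {"Labels": [{"Name": "Dog", "Confidence": 96.5},
                           {"Name": "Pet", "Confidence": 95.1}]}

def label_image(client, image_bytes, min_confidence=90.0):
    """Return label names the service reports at >= min_confidence."""
    resp = client.detect_labels(Image={"Bytes": image_bytes})
    return [label["Name"] for label in resp["Labels"]
            if label["Confidence"] >= min_confidence]

print(label_image(FakeRekognitionClient(), b"...jpeg bytes..."))  # -> ['Dog', 'Pet']
```

The point of the announcement is exactly this division of labor: the deep learning lives on Amazon’s side of the API, and the app only handles the request and the response.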

Amazon announces three new AI services, including a text-to-voice service, Amazon Polly — by D.B. Hebbard

AWS Announces Three New Amazon AI Services
Amazon Lex, the technology that powers Amazon Alexa, enables any developer to build rich, conversational user experiences for web, mobile, and connected device apps; preview starts today

Amazon Polly transforms text into lifelike speech, enabling apps to talk with 47 lifelike voices in 24 languages

Amazon Rekognition makes it easy to add image analysis to applications, using powerful deep learning-based image and face recognition

Capital One, Motorola Solutions, SmugMug, American Heart Association, NASA, HubSpot, Redfin, Ohio Health, DuoLingo, Royal National Institute of Blind People, LingApps, GoAnimate, and Coursera are among the many customers using these Amazon AI Services

Excerpt:

SEATTLE–(BUSINESS WIRE)–Nov. 30, 2016– Today at AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), announced three Artificial Intelligence (AI) services that make it easy for any developer to build apps that can understand natural language, turn text into lifelike speech, have conversations using voice or text, analyze images, and recognize faces, objects, and scenes. Amazon Lex, Amazon Polly, and Amazon Rekognition are based on the same proven, highly scalable Amazon technology built by the thousands of deep learning and machine learning experts across the company. Amazon AI services all provide high-quality, high-accuracy AI capabilities that are scalable and cost-effective. Amazon AI services are fully managed services so there are no deep learning algorithms to build, no machine learning models to train, and no up-front commitments or infrastructure investments required. This frees developers to focus on defining and building an entirely new generation of apps that can see, hear, speak, understand, and interact with the world around them.

To learn more about Amazon Lex, Amazon Polly, or Amazon Rekognition, visit:
https://aws.amazon.com/amazon-ai

Top 200 Tools for Learning 2016: Overview — from c4lpt.co.uk by Jane Hart

Also see Jane’s:

  1. TOP 100 TOOLS FOR PERSONAL & PROFESSIONAL LEARNING (for formal/informal learning and personal productivity)
  2. TOP 100 TOOLS FOR WORKPLACE LEARNING (for training, e-learning, performance support and social collaboration)
  3. TOP 100 TOOLS FOR EDUCATION (for use in primary and secondary (K-12) schools, colleges, universities, and adult education)

[Image: Top 200 Tools for Learning 2016, Jane Hart]

Also see Jane’s “Best of Breed 2016” where she breaks things down into:

  1. Instructional tools
  2. Content development tools
  3. Social tools
  4. Personal tools

 
If you doubt that we are on an exponential pace of change, you need to check these articles out! [Christian]

[Image: “exponential pace of change” collection, Daniel Christian, Sep 2016]

From DSC:
The articles listed in this PDF document demonstrate the exponential pace of technological change that many nations across the globe are currently experiencing and will likely be experiencing for the foreseeable future. As we are no longer on a linear trajectory, we need to consider what this new trajectory means for how we:

  • Educate and prepare our youth in K-12
  • Educate and prepare our young men and women studying within higher education
  • Restructure/re-envision our corporate training/L&D departments
  • Equip our freelancers and others to find work
  • Help people in the workforce remain relevant/marketable/properly skilled
  • Encourage and better enable lifelong learning
  • Attempt to keep up w/ this pace of change — legally, ethically, morally, and psychologically

PDF file here

One thought that comes to mind…when we’re moving this fast, we need to be looking upwards and outwards into the horizons — constantly pulse-checking the landscapes. We can’t be looking down or be so buried in our current positions/tasks that we aren’t noticing the changes that are happening around us.

From DSC:
Interactive video — a potentially very powerful medium to use, especially for blended and online-based courses or training-related materials! This interactive piece from Heineken is very well done, even remembering how you answered and coming up with their evaluation of you from their 12-question “interview.”

But notice again: a TEAM of specialists is needed to create such a piece. Neither a faculty member, nor a trainer, nor an instructional designer can do something like this all on their own. Some of the positions I could imagine here are:

  • Script writer(s)
  • Editor(s)
  • Actors and actresses
  • Those skilled in stage lighting and sound / audio recording
  • Digital video editors
  • Programmers
  • Graphic designers
  • Web designers
  • Producers
  • Product marketers
  • …and perhaps others

This is the kind of work that I wish we saw more of in the world of online and blended courses!  Also, I appreciated their use of humor. Overall, a very engaging, fun, and informative piece!

[Images: stills from the Heineken interactive video interview]

LinkedIn ProFinder expands nationwide to help you hire freelancers — from blog.linkedin.com

Excerpt:

The freelance economy is on the rise. In fact, the number of freelancers on LinkedIn has grown by nearly 50% in just the past five years. As the workforce evolves, we, too, are evolving to ensure we’re creating opportunity for the expanding sector of professionals looking for independent, project-based work in place of the typical 9 to 5 profession.

Last October, we began piloting a brand new platform in support of this very endeavor and today, we’re excited to announce its nationwide availability. Introducing LinkedIn ProFinder, a LinkedIn marketplace that connects consumers and small businesses looking for professional services – think Design, Writing and Editing, Accounting, Real Estate, Career Coaching – with top quality freelance professionals best suited for the job.

Also see:

[Image: LinkedIn ProFinder, Aug 2016]

Also see:

[Image: “40% freelancers by 2020” chart, Quartz, April 2013]

[Image: “From Dreams to Realities: AR/VR/MR in Education,” Campus Technology, 8/16/16]

From Dreams to Realities: AR/VR/MR in Education | A Q&A with Daniel Christian — from campustechnology.com by Mary Grush; I’d like to thank Jason VanHorn for his contributions to this article

Excerpt:

Grush: Is there a signpost you might point to that would indicate that there’s going to be more product development in AR/VR/MR?

Christian: There’s a significant one. Several major players — with very deep pockets — within the corporate world are investing in new forms of HCI, including Microsoft, Google, Apple, Facebook, Magic Leap, and others. In fact, according to an article on engadget.com from 6/16/16, “Magic Leap has amassed an astounding $1.39 billion in funding without shipping an actual product.” So to me, it’s just not likely that the billions of dollars being invested in a variety of R&D-related efforts are simply going to evaporate without producing any impactful, concrete products or services. There are too many extremely smart, creative people working on these projects, and they have impressive financial backing behind their research and product development efforts. So, I think we can expect an array of new choices in AR/VR/MR.

Just the other day I was talking to Jason VanHorn, an associate professor in our geology, geography, and environmental studies department. After finishing our discussion about a particular learning space and how we might implement active learning in it, we got to talking about mixed reality. He related his wonderful dreams of being able to view, manipulate, maneuver through, and interact with holographic displays of our planet Earth.

When I mentioned a video piece done by Case Western and the Cleveland Clinic that featured Microsoft’s Hololens technology, he knew exactly what I was referring to. But this time, instead of being able to drill down through the human body to review, explore, and learn about the various systems composing our human anatomy, he wanted to be able to drill down through the various layers of the planet Earth. He also wanted to be able to use gestures to maneuver and manipulate the globe — turning the globe to just the right spot before using a gesture to drill down to a particular place.

Uploaded on Jul 21, 2016

Description:
A new wave of compute technology, fueled by big data analytics, the internet of things, augmented reality, and so on, will change the way we live and work to be more immersive and natural, with technology in the role of partner.

Also see:

Excerpt:

We haven’t even scratched the surface of the things technology can do to further human progress.  Education is the next major frontier.  We already have PC- and smartphone-enabled students, as well as tech-enabled classrooms, but the real breakthrough will be in personalized learning.

Every educator divides his or her time between teaching and interacting.  In lectures they have to choose between teaching to the smartest kid in the class or the weakest.  Efficiency (and reality) dictates that they must teach to the theoretical median, meaning some students will be bored and some will still struggle.  What if a digital assistant could step in to personalize the learning experience for each student, accelerating the curriculum for the advanced students and providing extra support for those that need more help?  The digital assistant could “sense” and “learn” that Student #1 has already mastered a particular subject and “assign” more advanced materials.  And it could provide additional work to Student #2 to ensure that he or she was ready for the next subject.  Self-paced learning to supplant and support classroom learning…that’s the next big advancement.
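From DSC:
The pacing logic this author imagines does not require exotic technology to prototype. Here is a toy sketch of the core decision: “sense” mastery from a student’s recent scores and route them to advanced, standard, or remedial material. The thresholds are arbitrary illustrations, not any real product’s rules.

```python
def next_assignment(recent_scores, mastery=0.9, struggling=0.6):
    """recent_scores: fractions in [0, 1] from a student's latest exercises."""
    avg = sum(recent_scores) / len(recent_scores)
    if avg >= mastery:
        return "advanced"    # accelerate the curriculum
    if avg < struggling:
        return "remedial"    # provide extra support
    return "standard"        # stay on the planned track

print(next_assignment([0.95, 0.92, 0.98]))  # -> advanced
print(next_assignment([0.40, 0.55, 0.50]))  # -> remedial
```

The hard part, naturally, is everything around this stub: reliably measuring mastery, having good content at every level, and keeping the human teacher in the loop.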

Specialists central to high-quality, engaging online programming [Christian]

[Image: “Specialists central to high-quality, engaging online programming,” The EvoLLLution, 6/20/16]

Specialists central to high-quality, engaging online programming — from EvoLLLution.com (where the LLL stands for lifelong learning) by Daniel Christian

Excerpts:

Creating high-quality online courses is getting increasingly complex—requiring an ever-growing set of skills. Faculty members can’t do it all, nor can instructional designers, nor can anyone else.  As time goes by, new entrants and alternatives to traditional institutions of higher education will likely continue to appear on the higher education landscape—the ability to compete will be key.

For example, will there be a need for the following team members in your not-too-distant future?

  • Human Computer Interaction (HCI) Specialists: those with knowledge of how to leverage Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) in order to create fun and engaging learning experiences (while still meeting the learning objectives)
  • Data Scientists
  • Artificial Intelligence Integrators
  • Cognitive Computing Specialists
  • Intelligent Tutoring Developers
  • Learning Agent Developers
  • Algorithm Developers
  • Personalized Learning Specialists
  • Cloud-based Learner Profile Administrators
  • Transmedia Designers
  • Social Learning Experts

[Images: holographic storytelling, JWT Intelligence, June 2016]

Holographic storytelling — from jwtintelligence.com by Jade Perry

Excerpt (emphasis DSC):

The stories of Holocaust survivors are brought to life with the help of interactive 3D technologies.

‘New Dimensions in Testimony’ is a new way of preserving history for future generations. The project brings to life the stories of Holocaust survivors with 3D video, revealing raw first-hand accounts that are more interactive than learning through a history book.

Holocaust survivor Pinchas Gutter, the first subject of the project, was filmed answering over 1000 questions, generating approximately 25 hours of footage. By incorporating natural language processing from Conscience Display, viewers were able to ask Gutter’s holographic image questions that triggered relevant responses.
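From DSC:
At its core, this is a retrieval problem: each of the roughly 1,000 recorded answers is indexed by the question it answers, and the system matches a viewer’s spoken question to the closest recorded one. The project’s actual natural language processing is far more sophisticated, but a bare-bones version of the idea (with invented questions and clip ids) looks like this:

```python
def overlap(a, b):
    """Jaccard similarity between the word sets of two questions."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def best_clip(question, recorded):
    """recorded: dict mapping each recorded question to its video clip id."""
    match = max(recorded, key=lambda q: overlap(question, q))
    return recorded[match]

clips = {
    "where were you born": "clip_birthplace",
    "how did you survive the war": "clip_survival",
}
print(best_clip("tell me how you survived the war", clips))  # -> clip_survival
```

With 25 hours of footage behind it, even a crude matcher like this begins to feel conversational, which is exactly what makes the hologram compelling.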

From DSC:
I wonder…is this an example of a next generation, visually-based chatbot*?

With the growth of artificial intelligence (AI), intelligent systems, and new types of human computer interaction (HCI), this type of concept could offer an on-demand learning approach that’s highly engaging — and accessible from face-to-face settings as well as from online-based learning environments. (If it could be made to take in some of the context of a particular learner and where a learner is in the relevant Zone of Proximal Development (via web-based learner profiles/data), it would be even better.)

As an aside, is this how we will obtain customer service from the businesses of the future?

*The complete beginner’s guide to chatbots — from chatbotsmagazine.com by Matt Schlicht
Everything you need to know.

Excerpt (emphasis DSC):

What are chatbots? Why are they such a big opportunity? How do they work? How can I build one? How can I meet other people interested in chatbots?

These are the questions we’re going to answer for you right now.

What is a chatbot?
A chatbot is a service, powered by rules and sometimes artificial intelligence, that you interact with via a chat interface. The service could be any number of things, ranging from functional to fun, and it could live in any major chat product (Facebook Messenger, Slack, Telegram, Text Messages, etc.).

A chatbot is a service, powered by rules and sometimes artificial intelligence, that you interact with via a chat interface.

Examples of chatbots

  • Weather bot: get the weather whenever you ask.
  • Grocery bot: help me pick out and order groceries for the week.
  • News bot: ask it to tell you whenever something interesting happens.
  • Life advice bot: I’ll tell it my problems and it helps me think of solutions.
  • Personal finance bot: it helps me manage my money better.
  • Scheduling bot: get me a meeting with someone on the Messenger team at Facebook.
  • A bot that’s your friend: in China there is a bot called Xiaoice, built by Microsoft, that over 20 million people talk to.
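From DSC:
The “powered by rules” end of that spectrum is genuinely this simple. Below is a minimal keyword-rule bot of the kind described; a platform like Messenger or Slack would deliver each incoming message to code like this and relay the reply. The rules and replies are placeholders of my own.

```python
# (keyword tuple, canned reply) pairs, checked in order.
RULES = [
    (("weather", "forecast"), "It looks sunny today."),
    (("news", "headlines"), "Here are today's top stories..."),
]
FALLBACK = "Sorry, I didn't understand that."

def reply(message):
    """Return the reply for the first rule whose keyword appears in message."""
    words = message.lower().split()
    for keywords, response in RULES:
        if any(k in words for k in keywords):
            return response
    return FALLBACK

print(reply("What's the weather like?"))  # -> It looks sunny today.
```

Everything beyond this (understanding free-form phrasing, holding context across turns, the “artificial intelligence” half of the definition) is what separates a toy from something like Xiaoice.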

© 2024 | Daniel Christian