Augmented Reality Technology: A student creates the closest thing yet to a magic ring — from forbes.com by Kevin Murnane

Excerpt:

Nat Martin set himself the problem of designing a control mechanism that can be used unobtrusively to meld AR displays with the user’s real-world environment. His solution was a controller in the shape of a ring that can be worn on the user’s finger. He calls it Scroll. It uses the ARKit software platform and contains an Arduino circuit board, a capacitive sensor, a gyroscope, an accelerometer, and a SoftPot potentiometer. Scroll works with any AR device that supports the Unity game engine, such as Google Cardboard or Microsoft’s HoloLens.
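To make one piece of that pipeline concrete, here is a toy Python sketch of the kind of sensor-to-gesture mapping such a ring has to perform: turning a SoftPot's raw readings (a linear potentiometer reports finger position as an ADC value, here 0–1023) into discrete scroll events. This is entirely hypothetical — Scroll's real firmware runs on the Arduino and feeds a Unity app, and none of the names or thresholds below come from Martin's project.

```python
DEADBAND = 8  # ignore jitter smaller than this many ADC counts

def scroll_events(readings, deadband=DEADBAND):
    """Convert a sequence of raw SoftPot readings into signed scroll deltas."""
    events = []
    last = readings[0]
    for raw in readings[1:]:
        delta = raw - last
        if abs(delta) >= deadband:   # the finger actually slid along the strip
            events.append(delta)
            last = raw               # only advance the baseline on real movement
    return events

print(scroll_events([500, 503, 540, 560, 555, 400]))  # -> [40, 20, -160]
```

The deadband is the important design choice: without it, electrical noise on the analog pin would register as a constant stream of tiny scrolls.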

 

Also see:

Scroll from Nat on Vimeo.

 

 


Addendum on 8/15/17:

New iOS 11 ARKit Demo Shows Off Drawing With Fingers In Augmented Reality [Video] — from redmondpie.com by Oliver Haslam

Excerpt:

When Apple releases iOS 11 to the public next month, it will also release ARKit for the first time. The framework, designed to bring augmented reality to iOS, debuted during the opening keynote of WWDC 2017 when Apple announced iOS 11, and ever since then we have been seeing new concepts and demos released by developers.

Those developers have given us a glimpse of what we can expect when apps taking advantage of ARKit start to ship alongside iOS 11, and the latest of those is a demonstration in which someone’s finger is used to draw on a notepad.

 

New Google Earth has exciting features for teachers — from thejournal.com by Richard Chang

Excerpt:

Google has recently released a brand-new version of Google Earth for both Chrome and Android. The new version comes with a slew of nifty features teachers can use for educational purposes with students in class. Following is a quick overview of the most fascinating features…

 

Expect voice, VR and AR to dominate UX design — from forbes.com by the Forbes Technology Council

Excerpt:

User interfaces have come a long way since typewriters were designed to keep people from typing too quickly and jamming the machine. Current technology has users viewing monitors and wiggling a mouse, or tapping on small touchscreens to activate commands or interact with a virtual keyboard. But is this the best method of interaction?

Designers are asking themselves whether it is better to talk to a mobile device to get information, or to have a wearable vibrate and then feed information into an augmented reality display. Is having an artificial intelligence modify an interface on the fly, depending on how a user interacts, the best course of action for applications or websites? And how human should the AIs’ interaction be with users?

Eleven experts on the Forbes Technology Council offer their predictions on how UX design will be changing in the next few years. Here’s what they have to say…

 

Chatbots: The next big thing — from dw.com
Excerpt:

More and more European developers are discovering the potential of chatbots. These mini-programs interact automatically with users and could be particularly useful in areas like online shopping and news delivery. The potential of chatbots is diverse. These tiny programs can do everything from recognizing customers’ tastes to relaying the latest weather forecast. Berlin start-up Spectrm is currently devising bots that deliver customized news. Users can contact the bot via Facebook Messenger, and receive updates on topics that interest them within just a few seconds.
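The core of a customized-news bot like the ones described above can be sketched in a few lines. The following is a minimal, hypothetical keyword-matching bot — not Spectrm's actual system, which sits behind a Facebook Messenger webhook and draws on real news feeds; the topics and headlines here are invented for illustration.

```python
# A tiny "news store" standing in for a real news API. All entries are invented.
HEADLINES = {
    "weather": "Forecast: sunny, high of 72F.",
    "sports": "Local team wins 3-1 in overtime.",
    "tech": "New AR toolkit released for mobile developers.",
}

def reply(message: str) -> str:
    """Return headlines for any subscribed topic mentioned in the message."""
    text = message.lower()
    hits = [story for topic, story in HEADLINES.items() if topic in text]
    if hits:
        return " ".join(hits)
    # Fall back to listing the topics the bot knows about.
    return "I can send you updates on: " + ", ".join(HEADLINES)

print(reply("any tech news today?"))  # -> New AR toolkit released for mobile developers.
```

Production bots replace the keyword match with an NLP intent classifier, but the request/response shape — a user utterance in, a short message out — is the same.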

 

 

MyPrivateTutor releases chatbot for finding tutors — from digitaljournal.com
MyPrivateTutor, based in Kolkata, matches tutors to students using proprietary machine learning algorithms

Excerpt:

“Using artificial intelligence, the chatbot helps us reach a wider segment of users who are still not comfortable navigating websites and apps but are quite savvy with messaging apps,” said Sandip Kar, co-founder & CEO of MyPrivateTutor (www.myprivatetutor.com). The online marketplace for tutors has released a chatbot to help students and parents find tutors, trainers, coaching classes and training institutes near them.

 

 

Story idea: Covering the world of chatbots — from businessjournalism.org by Susan Johnston Taylor

Excerpt:

Chatbots, computer programs designed to converse with humans, can perform all sorts of activities. They can help users book a vacation, order a pizza, negotiate with Comcast or even communicate with POTUS. Instead of calling or emailing a representative at the company, consumers chat with a robot that uses artificial intelligence to simulate natural conversation. A growing number of startups and more established companies now use them to interact with users via Facebook Messenger, SMS, chat-specific apps such as Kik or the company’s own site.

To cover this emerging business story, reporters can seek out companies in their area that use chatbots, or find local tech firms that are building them. Local universities may have professors or other experts available who can provide big-picture context, too. (Expertise Finder can help you identify professors and their specific areas of study.)

 

 

How chatbots are addressing summer melt for colleges — from ecampusnews.com

Excerpt:

AdmitHub, an edtech startup that builds conversational artificial intelligence (AI) chatbots to guide students on the path to and through college, has raised $2.95 million in seed funding.

 

 

Why higher education chatbots will take over university mobile apps — from blog.admithub.com by Kirk Daulerio

Excerpt (emphasis DSC):

Chatbots are the new apps and websites combined
Chatbots are simple, easy to use, and present zero friction. They exist on the channels that people are most familiar with like Messenger, Twitter, SMS text message, Kik, and expanding onto other messaging applications. Unlike apps, bots don’t take up space, users don’t have to take time to get familiar with a new user interface, and bots will give you an instant reply. The biggest difference with chatbots compared to apps and websites is that they use language as the main interface. Websites and apps have to be searched and clicked, while bots and people use language, the most natural interface, to communicate and inform.

 

 


From DSC:
I think messaging-based chatbots will definitely continue to grow in usage — in numerous industries, including higher education. But I also think that the human voice — working in conjunction with technologies that provide natural language processing (NLP) capabilities — will play an increasingly larger role in how we interface with our devices. Whether it’s via a typed/textual message or whether it’s via a command or a query relayed by the human voice, working with bots needs to be on our radars. These conversational messaging agents are likely to be around for a while.

 

From DSC:
After seeing the sharp interface out at Adobe (see image below), I’ve often thought that there should exist a similar interface and a similar database for educators, trainers, and learners to use — but the database would address a far greater breadth of topics to teach and/or learn about.  You could even select beginner, intermediate, or advanced levels (grade levels might work here as well).

Perhaps this is where artificial intelligence will come in…not sure.

 

From DSC:
When I saw the article below, I couldn’t help but wonder…what are the teaching & learning-related ramifications when new “skills” are constantly being added to devices like Amazon’s Alexa?

What does it mean for:

  • Students / learners
  • Faculty members
  • Teachers
  • Trainers
  • Instructional Designers
  • Interaction Designers
  • User Experience Designers
  • Curriculum Developers
  • …and others?

Will the capabilities found in Alexa simply come bundled as a part of the “connected/smart TV’s” of the future? Hmm….

 

 

NASA unveils a skill for Amazon’s Alexa that lets you ask questions about Mars — from geekwire.com by Kevin Lisota

Excerpt:

Amazon’s Alexa has gained many skills over the past year, such as being able to read tweets or deliver election results and fantasy football scores. Starting on Wednesday, you’ll be able to ask Alexa about Mars.

The new skill for the voice-controlled speaker comes courtesy of NASA’s Jet Propulsion Laboratory. It’s the first Alexa app from the space agency.

Tom Soderstrom, the chief technology officer at NASA’s Jet Propulsion Laboratory, was on hand at the AWS re:Invent conference in Las Vegas tonight to make the announcement.
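For readers curious what powers a skill like this, here is a rough sketch of how a custom Alexa skill's backend (typically an AWS Lambda function) answers a question. The JSON response envelope matches what the Alexa service expects, but the intent slot name and the Mars "facts" below are made up for illustration — NASA JPL's real skill is far more sophisticated.

```python
# Invented sample answers; a real skill would query a proper data source.
MARS_FACTS = {
    "distance": "Mars is, on average, about 225 million kilometers from Earth.",
    "day": "A day on Mars lasts about 24 hours and 37 minutes.",
}

def build_response(text, end_session=True):
    """Wrap plain text in the JSON envelope the Alexa service expects."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def handle_request(event):
    """Route an incoming Alexa request to an answer (hypothetical 'Topic' slot)."""
    request = event.get("request", {})
    if request.get("type") == "IntentRequest":
        topic = (request.get("intent", {})
                        .get("slots", {})
                        .get("Topic", {})
                        .get("value", ""))
        fact = MARS_FACTS.get(topic.lower())
        if fact:
            return build_response(fact)
    # Unknown request: prompt the user and keep the session open.
    return build_response("Try asking me about Mars, for example its distance from Earth.",
                          end_session=False)
```

The skill developer's job is mostly the routing shown here; Alexa handles the speech recognition and text-to-speech on either side of it.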

 


Also see:


 

What Is Alexa? What Is the Amazon Echo, and Should You Get One? — from thewirecutter.com by Grant Clauser

 

Amazon launches new artificial intelligence services for developers: Image recognition, text-to-speech, Alexa NLP — from geekwire.com by Taylor Soper

Excerpt (emphasis DSC):

Amazon today announced three new artificial intelligence-related toolkits for developers building apps on Amazon Web Services.

At the company’s AWS re:Invent conference in Las Vegas, Amazon showed how developers can use three new services — Amazon Lex, Amazon Polly, Amazon Rekognition — to build artificial intelligence features into apps for platforms like Slack, Facebook Messenger, ZenDesk, and others.

The idea is to let developers utilize the machine learning algorithms and technology that Amazon has already created for its own processes and services like Alexa. Instead of developing their own AI software, AWS customers can simply use an API call or the AWS Management Console to incorporate AI features into their own apps.
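As an example of how small that API call is, here is a sketch of invoking one of these services (Amazon Polly) from Python with the boto3 SDK. The request builder is kept as a plain function so it can be inspected and tested without AWS credentials; the function that actually calls the service takes a configured client but is not invoked here.

```python
def polly_request(text, voice="Joanna", output_format="mp3"):
    """Build the keyword arguments for Polly's SynthesizeSpeech API call."""
    return {"Text": text, "VoiceId": voice, "OutputFormat": output_format}

def synthesize(text, client):
    """Call Polly and return the synthesized MP3 bytes.

    `client` should be a configured boto3 client, e.g. boto3.client("polly").
    This function is not called in this sketch because it requires AWS
    credentials; it shows the shape of the call only.
    """
    result = client.synthesize_speech(**polly_request(text))
    return result["AudioStream"].read()
```

This is the "no models to train" point from the announcement in miniature: the developer sends text and a voice name, and gets audio back.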

 

 

Amazon announces three new AI services, including a text-to-voice service, Amazon Polly — by D.B. Hebbard

 

 

AWS Announces Three New Amazon AI Services
  • Amazon Lex, the technology that powers Amazon Alexa, enables any developer to build rich, conversational user experiences for web, mobile, and connected device apps; preview starts today
  • Amazon Polly transforms text into lifelike speech, enabling apps to talk with 47 lifelike voices in 24 languages
  • Amazon Rekognition makes it easy to add image analysis to applications, using powerful deep learning-based image and face recognition
  • Capital One, Motorola Solutions, SmugMug, American Heart Association, NASA, HubSpot, Redfin, Ohio Health, DuoLingo, Royal National Institute of Blind People, LingApps, GoAnimate, and Coursera are among the many customers using these Amazon AI Services

Excerpt:

SEATTLE–(BUSINESS WIRE)–Nov. 30, 2016– Today at AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), announced three Artificial Intelligence (AI) services that make it easy for any developer to build apps that can understand natural language, turn text into lifelike speech, have conversations using voice or text, analyze images, and recognize faces, objects, and scenes. Amazon Lex, Amazon Polly, and Amazon Rekognition are based on the same proven, highly scalable Amazon technology built by the thousands of deep learning and machine learning experts across the company. Amazon AI services all provide high-quality, high-accuracy AI capabilities that are scalable and cost-effective. Amazon AI services are fully managed services so there are no deep learning algorithms to build, no machine learning models to train, and no up-front commitments or infrastructure investments required. This frees developers to focus on defining and building an entirely new generation of apps that can see, hear, speak, understand, and interact with the world around them.

To learn more about Amazon Lex, Amazon Polly, or Amazon Rekognition, visit:
https://aws.amazon.com/amazon-ai

 

LinkedIn ProFinder expands nationwide to help you hire freelancers — from blog.linkedin.com

Excerpt:

The freelance economy is on the rise. In fact, the number of freelancers on LinkedIn has grown by nearly 50% in just the past five years. As the workforce evolves, we, too, are evolving to ensure we’re creating opportunity for the expanding sector of professionals looking for independent, project-based work in place of the typical 9 to 5 profession.

Last October, we began piloting a brand new platform in support of this very endeavor and today, we’re excited to announce its nationwide availability. Introducing LinkedIn ProFinder, a LinkedIn marketplace that connects consumers and small businesses looking for professional services – think Design, Writing and Editing, Accounting, Real Estate, Career Coaching – with top quality freelance professionals best suited for the job.

 

Also see:

[Image: LinkedIn ProFinder — August 2016]

 

Also see:

[Image: “40% of the workforce will be freelancers by 2020” — Quartz, April 2013]

 

Specialists central to high-quality, engaging online programming — from EvoLLLution.com (where the LLL stands for lifelong learning) by Daniel Christian

Excerpts:

Creating high-quality online courses is getting increasingly complex—requiring an ever-growing set of skills. Faculty members can’t do it all, nor can instructional designers, nor can anyone else.  As time goes by, new entrants and alternatives to traditional institutions of higher education will likely continue to appear on the higher education landscape—the ability to compete will be key.

For example, will there be a need for the following team members in your not-too-distant future?

  • Human Computer Interaction (HCI) Specialists: those with knowledge of how to leverage Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) in order to create fun and engaging learning experiences (while still meeting the learning objectives)
  • Data Scientists
  • Artificial Intelligence Integrators
  • Cognitive Computing Specialists
  • Intelligent Tutoring Developers
  • Learning Agent Developers
  • Algorithm Developers
  • Personalized Learning Specialists
  • Cloud-based Learner Profile Administrators
  • Transmedia Designers
  • Social Learning Experts

 

Questions from DSC:

  • Which jobs/positions are being impacted by new forms of Human Computer Interaction (HCI)?
  • What new jobs/positions will be created by these new forms of HCI?
  • Will it be necessary for instructional technologists, instructional designers, teachers, professors, trainers, coaches, learning space designers, and others to pulse check this landscape?  Will that be enough? 
  • Or will such individuals need to dive much deeper than that in order to build the necessary skillsets, understandings, and knowledgebases to meet the new/changing expectations for their job positions?
  • How many will say, “No thanks, that’s not for me” — causing organizations to create new positions that do dive deeply in this area?
  • Will colleges and universities build and offer more courses involving HCI?
  • Will Career Services Departments get up to speed in order to help students carve out careers involving new forms of HCI?
  • How will languages and language translation be impacted by voice recognition software?
  • Will new devices be introduced to our classrooms in the future?
  • In the corporate space, how will training departments handle these new needs and opportunities?  How will learning & development groups be impacted? How will they respond in order to help the workforce get/be prepared to take advantage of these sorts of technologies? What does it mean for these staffs personally? Do they need to invest in learning more about these advancements?

As an example of what I’m trying to get at here, who all might be involved with an effort like Echo Dot?  What types of positions created it? Who all could benefit from it?  What other platforms could these technologies be integrated into?  Besides the home, where else might we find these types of devices?




Echo Dot is a hands-free, voice-controlled device that uses the same far-field voice recognition as Amazon Echo. Dot has a small built-in speaker—it can also connect to your speakers over Bluetooth or with the included audio cable. Dot connects to the Alexa Voice Service to play music, provide information, news, sports scores, weather, and more—instantly.

Echo Dot can hear you from across the room, even while music is playing. When you want to use Echo Dot, just say the wake word “Alexa” and Dot responds instantly. If you have more than one Echo or Echo Dot, you can set a different wake word for each—you can pick “Amazon”, “Alexa” or “Echo” as the wake word.
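The wake-word behavior described above can be mimicked, very crudely, on a stream of transcribed text: ignore everything until the chosen wake word appears, then treat the rest of the utterance as the command. The sketch below is purely illustrative — real devices like the Echo Dot do wake-word spotting on raw audio with dedicated DSP hardware, not on transcripts.

```python
# The three wake words Amazon lets users choose from, per the excerpt above.
WAKE_WORDS = {"alexa", "amazon", "echo"}

def extract_command(transcript: str, wake_word: str = "alexa"):
    """Return the command following the wake word, or None if it is absent."""
    assert wake_word in WAKE_WORDS
    words = transcript.lower().split()
    if wake_word in words:
        idx = words.index(wake_word)
        command = " ".join(words[idx + 1:])
        return command or None
    return None

print(extract_command("hey alexa play some music"))  # -> play some music
```

Everything before the wake word is discarded, which is exactly the privacy-relevant property of these devices: nothing is acted on until the trigger is heard.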

 

 

Or how might students learn about the myriad of technologies involved with IBM’s Watson?  What courses are out there today that address this type of thing?  Are more courses in the works that will address this type of thing? In which areas (Computer Science, User Experience Design, Interaction Design, other)?

 

Lots of questions…but few answers at this point. Still, given the increasing pace of technological change, it’s important that we think about this type of thing and become more responsive, nimble, and adaptive in our organizations and in our careers.

 

Beyond touch: designing effective gestural interactions — from blog.invisionapp.com by Yanna Vogiazou; with thanks to Mark Pomeroy for the resource

 

The future of interaction is multimodal.

 

Excerpts:

The future of interaction is multimodal. But combining touch with air gestures (and potentially voice input) isn’t a typical UI design task.

Gestures are often perceived as a natural way of interacting with screens and objects, whether we’re talking about pinching a mobile screen to zoom in on a map, or waving your hand in front of your TV to switch to the next movie. But how natural are those gestures, really?

Try not to translate touch gestures directly to air gestures even though they might feel familiar and easy. Gestural interaction requires a fresh approach—one that might start as unfamiliar, but in the long run will enable users to feel more in control and will take UX design further.

 

 

Forget about buttons — think actions.

 

 

Eliminate the need for a cursor as feedback, but provide an alternative.

 
 
 
© 2017 | Daniel Christian