Faculty Innovation Toolkit — from campustechnology.com by Leila Meyer and Dian Schaffhauser
Excerpt:
- 15 Sites for Free Digital Textbooks
- 12 Tips for Gamifying a Course
- 10 Tools for More Interactive Videos
- 4 Ways to Use Social Media for Learning
Questions from DSC:
As an example of what I’m trying to get at here, who all might be involved with an effort like Echo Dot? What types of positions created it? Who all could benefit from it? What other platforms could these technologies be integrated into? Besides the home, where else might we find these types of devices?
Echo Dot is a hands-free, voice-controlled device that uses the same far-field voice recognition as Amazon Echo. Dot has a small built-in speaker—it can also connect to your speakers over Bluetooth or with the included audio cable. Dot connects to the Alexa Voice Service to play music, provide information, news, sports scores, weather, and more—instantly.
Echo Dot can hear you from across the room, even while music is playing. When you want to use Echo Dot, just say the wake word “Alexa” and Dot responds instantly. If you have more than one Echo or Echo Dot, you can set a different wake word for each—you can pick “Amazon”, “Alexa” or “Echo” as the wake word.
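To make the excerpt above a bit more concrete for the "what types of positions created it?" question, here is a minimal sketch (in Python, and emphatically not Amazon's actual API) of one small piece of the puzzle: a household of devices, each listening for its own configurable wake word, stripping it off, and forwarding the rest of the utterance to a shared assistant service. The class and method names are illustrative assumptions.

```python
from typing import Optional

# The three wake-word choices the excerpt mentions.
VALID_WAKE_WORDS = {"alexa", "amazon", "echo"}


class VoiceDevice:
    """A toy model of one voice-controlled device in a household."""

    def __init__(self, name: str, wake_word: str = "alexa"):
        wake_word = wake_word.lower()
        if wake_word not in VALID_WAKE_WORDS:
            raise ValueError(f"unsupported wake word: {wake_word}")
        self.name = name
        self.wake_word = wake_word

    def hears(self, utterance: str) -> Optional[str]:
        """Return the command if the utterance starts with this device's
        wake word; otherwise return None (the device stays idle)."""
        words = utterance.lower().split()
        if words and words[0].rstrip(",") == self.wake_word:
            return " ".join(words[1:])
        return None


# Two devices in the same home, each with a different wake word,
# as the excerpt describes.
kitchen = VoiceDevice("kitchen", wake_word="echo")
office = VoiceDevice("office", wake_word="alexa")

print(kitchen.hears("Echo, play some jazz"))  # the kitchen device responds
print(office.hears("Echo, play some jazz"))   # the office device stays idle
```

Even this toy version hints at the range of roles involved: speech recognition, natural-language processing, hardware, cloud services, and interaction design all meet in a product like this.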
Or how might students learn about the myriad technologies involved with IBM’s Watson? What courses out there today address this type of thing? Are more such courses in the works? In which areas (Computer Science, User Experience Design, Interaction Design, other)?
Lots of questions…but few answers at this point. Still, given the increasing pace of technological change, it’s important that we think about this type of thing and become more responsive, nimble, and adaptive in our organizations and in our careers.
Beyond touch: designing effective gestural interactions — from blog.invisionapp.com by Yanna Vogiazou; with thanks to Mark Pomeroy for the resource
The future of interaction is multimodal.
Excerpts:
The future of interaction is multimodal. But combining touch with air gestures (and potentially voice input) isn’t a typical UI design task.
…
Gestures are often perceived as a natural way of interacting with screens and objects, whether we’re talking about pinching a mobile screen to zoom in on a map, or waving your hand in front of your TV to switch to the next movie. But how natural are those gestures, really?
…
Try not to translate touch gestures directly to air gestures even though they might feel familiar and easy. Gestural interaction requires a fresh approach—one that might start as unfamiliar, but in the long run will enable users to feel more in control and will take UX design further.
Forget about buttons — think actions.
Eliminate the need for a cursor as feedback, but provide an alternative.
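The two guidelines above can be sketched in code. Below is a hedged, illustrative Python sketch (not any real gesture SDK; the gesture names, threshold, and actions are all assumptions) of what "think actions, not buttons" might look like: air-gesture samples are classified directly into actions, and the feedback comes from the action itself rather than from a cursor.

```python
# Assumed minimum horizontal travel (normalized units) to count as a swipe.
SWIPE_THRESHOLD = 0.3


def classify_swipe(x_positions):
    """Classify a hand's horizontal trajectory as a swipe gesture."""
    if len(x_positions) < 2:
        return None
    travel = x_positions[-1] - x_positions[0]
    if travel > SWIPE_THRESHOLD:
        return "swipe_right"
    if travel < -SWIPE_THRESHOLD:
        return "swipe_left"
    return None


# Gestures map straight to actions -- no on-screen button to aim a cursor at.
ACTIONS = {
    "swipe_right": lambda state: {**state, "index": state["index"] + 1},
    "swipe_left": lambda state: {**state, "index": max(0, state["index"] - 1)},
}


def handle_gesture(state, x_positions):
    gesture = classify_swipe(x_positions)
    if gesture in ACTIONS:
        new_state = ACTIONS[gesture](state)
        # Feedback without a cursor: announce what just happened.
        print(f"{gesture}: now showing item {new_state['index']}")
        return new_state
    return state


state = {"index": 0}
state = handle_gesture(state, [0.1, 0.3, 0.6])  # a rightward swipe
```

Note how the design never translates a touch gesture (tap a button) into an air gesture (hover a cursor over a button); the gesture *is* the action.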
Creators of Siri reveal first public demo of AI assistant “Viv” — from seriouswonder.com by B.J. Murphy
Excerpts:
When it comes to AI assistants, a battle has been waged between different companies, with assistants like Siri, Cortana, and Alexa at the forefront of the battle. And now a new potential competitor enters the arena.
During a 20-minute onstage demo at Disrupt NYC, Siri creators Dag Kittlaus and Adam Cheyer revealed Viv — a new AI assistant that makes Siri look like a children’s toy.
“Viv is an artificial intelligence platform that enables developers to distribute their products through an intelligent, conversational interface. It’s the simplest way for the world to interact with devices, services and things everywhere. Viv is taught by the world, knows more than it is taught, and learns every day.”
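The quote describes a platform where third-party developers plug their services into a single conversational interface. A toy Python sketch of that idea follows — keyword matching stands in for real natural-language understanding, and the registered services are made up; none of this reflects Viv's actual architecture.

```python
class ConversationalPlatform:
    """A toy stand-in for a developer-extensible conversational assistant."""

    def __init__(self):
        # (keywords, handler) pairs registered by third-party developers.
        self._handlers = []

    def register(self, keywords, handler):
        """A developer registers a capability via its trigger keywords."""
        self._handlers.append((set(keywords), handler))

    def ask(self, utterance):
        """Route an utterance to the capability with the most keyword overlap."""
        words = set(utterance.lower().split())
        best, best_score = None, 0
        for keywords, handler in self._handlers:
            score = len(keywords & words)
            if score > best_score:
                best, best_score = handler, score
        return best(utterance) if best else "Sorry, no service can handle that."


platform = ConversationalPlatform()
platform.register({"weather", "forecast"}, lambda u: "Sunny, 22°C (stub forecast)")
platform.register({"taxi", "ride"}, lambda u: "Booking a ride (stub)")

print(platform.ask("what is the weather today"))  # routes to the forecast stub
```

The interesting design point the quote raises is exactly this open registration: the platform's abilities grow as outside developers add services, rather than being fixed by one vendor.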
From DSC:
I saw a posting at TechCrunch.com the other day — The Information Age is over; welcome to the Experience Age. I mention it here not so much for the article’s content as for its title. An interesting concept…and probably spot on, with ramifications for numerous types of positions, skillsets, and industries across the globe.
Also see:
Addendum on 5/12/16:
A New Morning — by Magic Leap; posted on 4/19/16
Welcome to a new way to start your day. Shot directly through Magic Leap technology on April 8, 2016 without use of special effects or compositing.
Also see:
8ninths Develops “Holographic Workstation”™ for Citi Traders using Microsoft HoloLens — from 8ninths.com
Excerpt:
San Francisco – March 30, 2016 – 8ninths was named today by Microsoft Corporation as one of seven companies chosen for the Microsoft HoloLens Agency Readiness Program, and will showcase their “Holographic Workstation”™ prototype, designed and engineered for Citi, this week at Microsoft Build 2016. The Holographic Workstation™ increases efficiency by using the Microsoft HoloLens platform to create 3D holograms of real-time financial data. A three-tiered system of dynamically updated and interactive information enables traders to view, process, and interact with large amounts of abstract data in a combined 3D and 2D environment. The physical workstation integrates tablet screen space, 3D holographic docking space, keyboard, mouse, gaze, gesture, voice input, and existing Citi devices and workflows.
HoloLens could get into finance with this VR workstation — from mashable.com by Lance Ulanoff
Excerpt:
8Ninths Cofounder and CEO Adam Sheppard told me they looked at the pain points of existing workstations and then drew inspiration from how, for example, they’d seen Microsoft and NASA solve 3D problems by embedding information in 2D and real environments.
The result is 8ninths’ Holographic Workstation, which was announced Wednesday at Microsoft’s Build 2016 developers conference. It’s a true blend of the real world (a physical day trader desk with a pair of real screens and a Surface Pro 4 in the middle) and a host of live, financial visualizations spread above the physical desk, including a cloud-like work area floating above the top shelf.
Also see the Vimeo video on this:
Microsoft HoloLens used as the basis for a cool holographic stock trading workstation — from windowscentral.com by John Callaham
Winner revealed for Microsoft’s HoloLens App Competition — from vrfocus.com by Peter Graham
Excerpt:
Out of the thousands of ideas entered, Airquarium, Grab the Idol and Galaxy Explorer were the three that made it through. The eventual winner was Galaxy Explorer, with 58 per cent of the votes. The app aims to give users the ability to wander the Milky Way and learn about our galaxy, navigating through the stars and landing on the myriad planets that are out there.
Also see:
Virtual Reality in 2016: What to expect from Google, Facebook, HTC and others in 2016 — from tech.firstpost.com by Naina Khedekar
Excerpt:
Many companies have made their intentions for virtual reality clear. Let’s see what they are up to in 2016.
Also see:
Somewhat related, but in the AR space:
Excerpt:
Augmented reality (AR) has continued to gain momentum in the educational landscape over the past couple of years. The educators featured below have dived in head first, using AR in their classrooms and schools. They continue to share excellent resources to help educators see how augmented reality can engage students and deepen understanding.
Intel launches x-ray-like glasses that allow wearers to ‘see inside’ objects — from theguardian.com
Smart augmented reality helmet allows wearers to overlay maps, schematics and thermal images to effectively see through walls, pipes and other solid objects
Excerpt:
Unlike devices such as HoloLens or Google Glass, which have been marketed as consumer devices, the Daqri Smart Helmet is designed with industrial use in mind. It will allow the wearer to effectively peer into the workings of objects using real-time overlay of information, such as wiring diagrams, schematics and problem areas that need fixing.
From DSC:
Currently, you can add interactivity to your digital videos; several tools already make this possible.
So I wonder…what might interactivity look like in the near future when we’re talking about viewing things in immersive virtual reality (VR)-based situations? When we’re talking about videos made using cameras that can provide 360 degrees worth of coverage, how are we going to interact with/drive/maneuver around such videos? What types of gestures and/or input devices, hardware, and software are we going to be using to do so?
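As one purely hypothetical answer to those questions, interactivity in a 360-degree video might take the form of gaze-triggered "hotspots": regions defined by a time window and a viewing direction (yaw), triggered simply by looking at them at the right moment — no cursor needed. The Python sketch below is an illustration under those assumptions, not any shipping VR API.

```python
def angle_between(a, b):
    """Smallest absolute difference between two yaw angles, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)


# Hypothetical hotspots for one 360-degree video:
# (start_s, end_s, yaw_deg, payload)
HOTSPOTS = [
    (10, 20, 90, "Open side quest"),
    (15, 30, 270, "Show speaker bio"),
]


def active_hotspot(time_s, viewer_yaw, tolerance_deg=30):
    """Return the payload of a hotspot the viewer is currently facing,
    or None if no hotspot is active in this direction at this time."""
    for start, end, yaw, payload in HOTSPOTS:
        if start <= time_s <= end and angle_between(viewer_yaw, yaw) <= tolerance_deg:
            return payload
    return None


print(active_hotspot(12, 100))  # within the window and facing the hotspot
print(active_hotspot(12, 200))  # looking the wrong way: nothing triggers
```

Small as it is, this sketch shows why the question matters for instructional and interaction designers: in 360-degree material, *where the learner is looking* becomes an input channel alongside time, gesture, and voice.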
What new forms of elearning/training/education will we have at our disposal? How will such developments impact instructional design/designers? Interaction designers? User experience designers? User interface designers? Digital storytellers?
Hmmm…
The forecast? High engagement, interesting times ahead.
Also see: