Just like a video game, users of the GPS need only follow green arrows that appear projected onto the road in front of the car, providing visual directions. More importantly, because the system displays on the windscreen, the driver doesn’t need to wear a cumbersome headset or eyewear; the system integrates directly into the car’s dashboard.
The system also recognizes simple voice and gesture commands from the driver — no knob-turning or button-pressing required. The objective is to let the driver spend more time paying attention to the road, with hands on the wheel. Many modern onboard GPS systems also recognize voice commands but require the driver to glance over at a screen.
Viro Media is supplying a platform of its own, and its hope is to offer the simplest experience: companies code once and have their content available on multiple mobile platforms. We chatted with Viro Media CEO Danny Moon about the tool and what creators can expect to accomplish with it.
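To make the “code once” idea concrete, here is a minimal sketch of what a Viro scene might look like, based on the React Native bindings Viro has shown publicly; treat the exact component names and props below as assumptions for illustration rather than confirmed API details.

```tsx
// Minimal sketch of a "write once, run on multiple headsets" Viro scene.
// Component names follow Viro's publicly shown React Native bindings
// (react-viro); exact props here are assumptions, not confirmed API.
import React from 'react';
import { ViroScene, ViroText } from 'react-viro';

export default class HelloScene extends React.Component {
  render() {
    return (
      <ViroScene>
        {/* A text label floating two meters in front of the viewer;
            the same scene code would target multiple mobile VR platforms. */}
        <ViroText
          text="Hello from a single codebase"
          position={[0, 0, -2]}
          style={{ fontSize: 30, color: '#ffffff' }}
        />
      </ViroScene>
    );
  }
}
```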
Virtual reality can transport us to new places, letting us experience new worlds and people like nothing else can. It is a whole new medium poised to change the future of gaming, education, health care and enterprise. Today we are starting a new series to help you discover what this new technology promises. With the help of our friends at RadioPublic, we are curating a quick library of podcasts related to virtual reality technology.
AUSTIN (KXAN) — Virtual reality is no longer reserved for entertainment and gamers; it’s helping solve real-world problems. Some of the latest advancements are being demonstrated at South by Southwest.
Dr. Skip Rizzo directs the Medical Virtual Reality Lab at the University of Southern California’s Institute for Creative Technologies. He’s helping veterans who suffer from post-traumatic stress disorder (PTSD). He’s teamed up with Dell to develop and spread the technology to more people.
At the NVIDIA Jetson TX2 launch [on March 7, 2017] in San Francisco, [NVIDIA] showed how the platform not only accelerates AI computing, graphics and computer vision, but also powers the workflows used to create VR content. At the event, Artec 3D debuted the first handheld scanner offering real-time 3D capture, fusion, modeling and visualization on its own display or streamed to phones and tablets.
Project Empathy — a collection of virtual reality experiences that help us see the world through the eyes of another
Excerpt:
Benefit Studio’s virtual reality series, Project Empathy, is a collection of thoughtful, evocative and surprising experiences by some of the finest creators in entertainment, technology and journalism.
Each film is designed to create empathy through a first-person experience — from being a child inside the U.S. prison system to being a widow cast away from society in India. Individually, each of the films in this series presents its filmmaker’s unique vision, portraying an intimate experience through the eyes of someone whose story has been lost or overlooked and yet is integral to the larger story of our global society. Collectively, these creatively distinct films weave together a colorful tapestry of what it means to be human today.
Most introductory geology professors teach students about earthquakes by assigning readings and showing diagrams of tectonic plates and fault lines to the class. But Paul Low is not most instructors.
“You guys can go wherever you like,” he tells a group of learners. “I’m going to go over to the epicenter and fly through and just kind of get a feel.”
Low is leading a virtual tour of the Earth’s bowels, directly beneath New Zealand’s South Island, where a 7.8 magnitude earthquake struck last November. Outfitted with headsets and hand controllers, the students are “flying” around the seismic hotbed and navigating through layers of the Earth’s surface.
Low, who taught undergraduate geology and environmental sciences and is now a research associate at Washington and Lee University, is among a small group of profs-turned-technologists who are experimenting with virtual reality’s applications in higher education.
“As virtual reality moves more towards the mainstream through the development of new, more affordable consumer technologies, a way needs to be found for students to translate what they learn in academic situations into careers within the industry,” says Frankie Cavanagh, a lecturer at Northumbria University. He founded a company called Somniator last year with the aim not only of developing VR games but also of providing a bridge between higher education and the technology sector. More than 70 students from Newcastle University, Northumbria University and Gateshead College in the UK have been placed through the program so far, working on real games as part of their degrees and getting paid for additional commissioned work.
Working with VR already translates into an extraordinarily diverse range of possible career paths, and those options are only going to become even broader as the industry matures in the next few years.
Customer service just got a lot more interesting. Construction equipment manufacturer Caterpillar just announced official availability of what it’s calling the CAT LIVESHARE solution to customer support, which builds augmented reality capabilities into the platform. It has partnered with Scope AR, a company that develops technical support and training documentation tools using augmented reality. The CAT LIVESHARE support system uses Scope AR’s Remote AR software as its backbone.
The authoritative CB Insights lists imminent future tech trends: customized babies; personalized foods; robotic companions; 3D-printed housing; solar roads; ephemeral retail; enhanced workers; lab-engineered luxury; botroots movements; microbe-made chemicals; neuro-prosthetics; instant expertise; AI ghosts. You can download the whole outstanding report here (125 pages).
From DSC: Though I’m generally pro-technology, there are several items in here that support the need for all members of society to be informed and to have some input into if and how these technologies should be used. Prime example: customized babies. The report discusses the genetic modification of babies: “In the future, we will choose the traits for our babies.” Veeeeery slippery ground here.
From DSC: I have attended the Next Generation Learning Spaces Conference for the past two years. Both conferences were very solid and they made a significant impact on our campus, as they provided the knowledge, research, data, ideas, contacts, and the catalyst for us to move forward with building a Sandbox Classroom on campus. This new, collaborative space allows us to experiment with different pedagogies as well as technologies. As such, we’ve been able to experiment much more with active learning-based methods of teaching and learning. We’re still in Phase I of this new space, and we’re learning new things all of the time.
For the upcoming conference in February, I will be moderating a New Directions in Learning panel on the use of augmented reality (AR), virtual reality (VR), and mixed reality (MR). Time permitting, I hope that we can also address other promising, emerging technologies that are heading our way such as chatbots, personal assistants, artificial intelligence, the Internet of Things, tvOS, blockchain and more.
The goal of this quickly-moving, engaging session will be to provide a smorgasbord of ideas to generate creative, innovative, and big thinking. We need to think about how these topics, trends, and technologies relate to what our next generation learning environments might look like in the near future — and put these things on our radars if they aren’t already there.
Key takeaways for the panel discussion:
Reflections regarding the affordances that new developments in Human Computer Interaction (HCI) — such as AR, VR, and MR — might offer for our learning and our learning spaces (or is our concept of what constitutes a learning space about to significantly expand?)
An update on the state of the approaching ed tech landscape
Creative, new thinking: What might our next generation learning environments look like in 5-10 years?
I’m looking forward to catching up with friends, meeting new people, and to the solid learning that I know will happen at this conference. I encourage you to check out the conference and register soon to take advantage of the early bird discounts.
Prediction 1: Mixed reality rooms will begin to replace home theater. As Eric Johnson summed up last year in Recode, “to borrow an example from Microsoft’s presentation at the gaming trade show E3, you might be looking at an ordinary table, but see an interactive virtual world from the video game Minecraft sitting on top of it. As you walk around, the virtual landscape holds its position, and when you lean in close, it gets closer in the way a real object would.” Can a “holographic” cinema experience really be that far off? With 3D sound? And Smell-O-Vision?
Prediction 13: Full-wall video with multiscreens will appear in the home. Here’s something interesting: The first three predictions in this set of 10 all have an origin in commercial applications. This one — think of it more as digital signage than sports bar — will allow the user to have access to a wall that includes a weather app, a Twitter feed, a Facebook page, the latest episode of Chopped, a Cubs game, and literally anything else a member — or members — of the family are interested in. The unintended consequences: some 13-year-old will one day actually utter the phrase, “MOM! Can you minimize your Snapchat already!?!”
Prediction 22: Intelligent glass will be used as a control interface, entertainment platform, comfort control, and communication screen. Gordon van Zuiden says, “We live in a world of touch, glass-based icons. Obviously the phone is the preeminent example — what if all the glass that’s around you in the house could have some level of projection so that shower doors, windows, and mirrors could be practical interfaces?” Extend that smart concept to surfaces that don’t just respond to touch, but to gesture and voice — and now extend that to surfaces outside the home.
Prediction 28: User-programmable platforms based on interoperable systems will be the new control and integration paradigm. YOU: “Alexa, please find Casablanca on Apple TV and send it to my Android phone. And order up a pizza.”
Prediction 34: Consumer sensors will increase in sensitivity and function. The Internet of Things will become a lot like Santa: “IoT sees you when you’re sleeping/IoT knows when you’re awake/IoT knows if you’ve been bad or good…”
Prediction 47: Policy and technology will drive the security concerns over internet- and voice-connected devices. “When you add the complexity of ‘always on, always listening’ connected devices … keeping the consumer’s best interests in mind might not always be top of mind for corporations [producing these devices],” notes Maniscalco. “[A corporation’s] interest is usually in profits.” Maniscalco believes that a consumer push for legislation on the dissemination of the information a company can collect will be the “spark that ignites true security and privacy for the consumer.”
Prediction 53: The flexible use of the light socket: lighting becomes more than lighting. Think about the amount of coverage — powered coverage — that the footprint of a home’s network of light sockets provides. Mike Maniscalco of Ihiji has: “You can use that coverage and power to do really interesting things, like integrate sensors into the lighting. Track humidity, people’s movements, change patterns based on what’s happening in that room.”
Prediction 64: Voice and face recognition and authentication services will become more ubiquitous. Yes, your front door will recognize your face — other people’s, too. “Joe Smith comes to your door, you get a text message without having to capture video, so that’s a convenience,” notes Jacobson.
Soon, hundreds of millions of mobile users in China will have direct access to an augmented reality platform on their smartphones.
Baidu, China’s largest search engine, unveiled an AR platform today called DuSee that will give China’s mobile users the opportunity to try out smartphone augmented reality on their existing devices. The company also said it plans to integrate the technology directly into its flagship apps, including the highly popular Mobile Baidu search app.
From DSC: With “a billion iOS devices out in the world,” I’d say that’s a good, safe call… at least for one of the avenues/approaches via which AR will be offered.
Recently it’s appeared that augmented reality (AR) is gaining popularity as a professional platform, with Visa testing it as a brand new e-commerce solution and engineering giant Aecom piloting a project that will see the technology used in their construction projects across three continents. Now it’s academia’s turn, with Deakin University in Australia announcing that it plans to use the technology as a teaching tool in its medicine and engineering classrooms.
As reported by ITNews, AR technology will be introduced to Deakin University’s classes from December, with the first AR apps to be used during the university’s summer programme which runs from November to March, before the technology is distributed more widely in the first semester of 2017.
Creating realistic interactions with objects and people in virtual reality is one of the industry’s biggest challenges right now, but what about for augmented reality?
That’s an area where researchers from the Massachusetts Institute of Technology’s (MIT’s) Computer Science and Artificial Intelligence Laboratory (CSAIL) have recently made strides with what they call Interactive Dynamic Video (IDV). PhD student Abe Davis has created a unique concept, first designed with video in mind, that could represent a way not only to interact with on-screen objects, but for those objects to also realistically react to the world around them.
If that sounds a little confusing, then we recommend taking a look at…
Venice Architecture Biennale 2016: augmented reality will revolutionise the architecture and construction industries, according to architect Greg Lynn, who used Microsoft HoloLens to design his contribution to the US Pavilion at the Venice Biennale.
Augmented Reality In Healthcare Will Be Revolutionary — from medicalfuturist.com
Augmented reality is one of the most promising digital technologies at present – look at the success of Pokémon Go – and it has the potential to change healthcare and everyday medicine completely for physicians and patients alike.
Excerpt:
Nurses can find veins more easily with augmented reality
The start-up company AccuVein is using AR technology to make both nurses’ and patients’ lives easier. AccuVein’s marketing specialist, Vinny Luciano, said 40% of IVs (intravenous injections) miss the vein on the first stick, with the numbers getting worse for children and the elderly. AccuVein uses a handheld scanner that projects over the skin and shows nurses and doctors where the veins are in patients’ bodies. Luciano estimates that it’s been used on more than 10 million patients, making finding a vein on the first stick 3.5x more likely. Such technologies could assist healthcare professionals and extend their skills.
But even with a wealth of hardware partners over the years, Urbach says he’d never tried a pair of consumer VR glasses that could effectively trick his brain until he began working with Osterhout Design Group (ODG).
ODG has previously made military night-vision goggles, and enterprise-focused glasses that overlay digital objects onto the real world. But now the company is partnering with OTOY, and will break into the consumer AR/VR market with a model of glasses codenamed “Project Horizon.”
The glasses work by using a pair of micro OLED displays to reflect images into your eyes at 120 frames per second. And the quality blew Urbach away, he tells Business Insider.
“You could overlay images onto the real world in a way that didn’t appear ‘ghost-like.’ We have the ability to do true opacity matching,” he says.
Live streaming VR events continue to make the news. First, it was the amazing Reggie Watts performance on AltspaceVR. Now the startup Rivet has launched an iOS app (sorry, Android still to come) for live streams of concerts. As the musician, record producer and visual artist Brian Eno once said,
You can’t really imagine music without technology.
In the near future, we may not be able to imagine a live performance without the option of a live stream in virtual reality.
While statistics on VR use in K-12 schools and colleges have yet to be gathered, the steady growth of the market is reflected in the surge of companies (including zSpace, Alchemy VR and Immersive VR Education) solely dedicated to providing schools with packaged educational curriculum and content, teacher training and technological tools to support VR-based instruction in the classroom. Myriad articles, studies and conference presentations attest to the great success of 3D immersion and VR technology in hundreds of classrooms in educationally progressive schools and learning labs in the U.S. and Europe.
…
Much of this early foray into VR-based learning has centered on the hard sciences — biology, anatomy, geology and astronomy — as the curricular focus and learning opportunities are notably enriched through interaction with dimensional objects, animals and environments. The World of Comenius project, a biology lesson at a school in the Czech Republic that employed a Leap Motion controller and specially adapted Oculus Rift DK2 headsets, stands as an exemplary model of innovative scientific learning.
In other areas of education, many classes have used VR tools to collaboratively construct architectural models, recreations of historic or natural sites and other spatial renderings. Instructors also have used VR technology to engage students in topics related to literature, history and economics by offering a deeply immersive sense of place and time, whether historic or evolving.
“Perhaps the most utopian application of this technology will be seen in terms of bridging cultures and fostering understanding among young students.”
The Cleveland Clinic has partnered with Case Western Reserve University to release a Microsoft HoloLens app that allows users to explore the human body using augmented reality technology. The HoloLens is a headset that superimposes computer-generated 3D graphics onto a person’s field of view, essentially blending reality with virtual reality.
The HoloAnatomy app lets people explore a virtual human, walk around it to examine details of the body’s different systems, and select which systems are shown. Even some sounds are replicated, such as the beating heart.
The future of interaction is multimodal. But combining touch with air gestures (and potentially voice input) isn’t a typical UI design task.
…
Gestures are often perceived as a natural way of interacting with screens and objects, whether we’re talking about pinching a mobile screen to zoom in on a map, or waving your hand in front of your TV to switch to the next movie. But how natural are those gestures, really?
… Try not to translate touch gestures directly to air gestures even though they might feel familiar and easy. Gestural interaction requires a fresh approach—one that might start as unfamiliar, but in the long run will enable users to feel more in control and will take UX design further.
Forget about buttons — think actions.
Eliminate the need for a cursor as feedback, but provide an alternative.
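To illustrate “actions, not buttons” and cursor-free feedback, here is a hedged TypeScript sketch: a recognizer emits semantic gestures, and the UI maps each gesture to an action plus a highlight on the inferred target. The gesture names and functions are invented for illustration and are not tied to any particular gesture-recognition SDK.

```ts
// Illustrative sketch only: gesture names and feedback calls are
// hypothetical, not drawn from any specific gesture-recognition SDK.
type AirGesture = 'swipe-left' | 'swipe-right' | 'pinch' | 'open-palm';

interface GestureEvent {
  gesture: AirGesture;
  target: string | null; // the object the hand is oriented toward, if any
}

// Map gestures to actions rather than to on-screen buttons.
const actions: Record<AirGesture, (target: string | null) => void> = {
  'swipe-left':  () => console.log('previous item'),
  'swipe-right': () => console.log('next item'),
  'pinch':       (t) => console.log(`select ${t ?? 'nothing'}`),
  'open-palm':   () => console.log('pause / dismiss'),
};

// Instead of a cursor, give feedback by highlighting the inferred target.
function highlight(target: string | null): void {
  if (target) console.log(`highlighting ${target}`);
}

function onGesture(e: GestureEvent): void {
  highlight(e.target);          // the alternative to cursor feedback
  actions[e.gesture](e.target); // dispatch the semantic action
}

onGesture({ gesture: 'pinch', target: 'movie-poster-3' });
```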
From DSC:
A vision for some edupreneurs out there:
Please create a system whereby machine-to-machine (M2M) communications are used to detect and display the names and majors of the students in a given classroom — complete with photos of the students who are actually present at that particular time in a face-to-face classroom setting.
The gathered data would feed a holographic image that the professor could make larger or smaller. The professor could use gestures to point to a given student and ask them a question. Once a student has been asked a question, the color around that student changes to reflect that the professor has called on them recently. (The setting that controls how long this “who’s-been-questioned-recently” information is maintained could be cleared every X minutes/hours/days.)
More introverted students could choose to use their own devices to submit their answers through a digital/online-based means rather than answering verbally in front of 50-200 other students.
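Here is a rough TypeScript sketch of the “who’s-been-questioned-recently” bookkeeping this vision implies; every name, field, and the cooldown window is a hypothetical assumption, not a real system.

```ts
// Hypothetical sketch of the classroom-presence idea above; all names,
// fields, and the cooldown window are illustrative assumptions.
interface Student {
  name: string;
  major: string;
  photoUrl: string;
}

const COOLDOWN_MS = 15 * 60 * 1000; // configurable "recently asked" window

const lastAsked = new Map<string, number>(); // student name -> timestamp

// Called for each student a beacon/M2M check-in reports as present;
// returns the highlight color for the holographic roster display.
function displayColor(student: Student, now: number = Date.now()): string {
  const asked = lastAsked.get(student.name);
  // Dim students the professor has called on within the cooldown window.
  return asked !== undefined && now - asked < COOLDOWN_MS ? 'gray' : 'green';
}

function markAsked(student: Student): void {
  lastAsked.set(student.name, Date.now());
}

const alice: Student = { name: 'Alice', major: 'Geology', photoUrl: '...' };
console.log(displayColor(alice)); // "green" -- not asked recently
markAsked(alice);
console.log(displayColor(alice)); // "gray" -- recently called on
```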
Whatever you choose to call the grayish area sitting at the intersection of wearable technology, customizable sensors, and enhanced connectivity, it seems likely that this is where the next major breakthroughs in experience design will take place. Cisco estimates that by 2020, there will be 50 billion things connected to the Internet, so whether we settle on the “Internet of Things,” “The Internet of Everything,” or an “Experience Ecosystem,” the space is primed for explosive growth.
As Wendt notes, there is something different about interacting with smart objects, and “it’s our job to understand what that something is.” That something has been elusive in recent years, with users adopting wearable tech slowly and a noticeable lack of sophisticated communication between objects. After all, an object sharing information with your smartphone or tablet only constitutes a relationship; it takes many more overlapping relationships to create an ecosystem.
“The true user interface is the gestalt of many things: industrial design, ergonomics, internal hardware, performance, error handling, visual design, interaction, latency, reliability, predictability, and more. For this reason I’m glad to see my profession moving from the concept of user ‘interface’ (where the user interacts with the screen) to user ‘experience’ (where more of the above factors are considered),” said Fernandez.
“I can design the best user interface in the world on paper, but if it’s not implemented well by the hardware and software engineers (e.g. the product is slow or buggy) then the product will fail. If you do a really good job of designing a product then everyone who sees it thinks it’s so obvious that they could have done it themselves.”
…
“So there will be a seamless, multimodal UI experience using sight, sound, voice, head motion, arm movement, gestures, typing on virtual keyboards, etc. There have been pieces of this in research for decades. Only now are we beginning to see some of these things emerge into the consumer market: with Google Glass, Siri, etc. We have further to go, but we’re on the road.”
From DSC: I’m intrigued with the overlay (or should I say integration?) of services/concepts similar to IFTTT over the infrastructure of machine-to-machine communications and sensors such as iBeacons. I think we’re moving into entirely new end user experiences and affordances enabled by these technologies. I’m looking for opportunities that might benefit students, teachers, faculty members, and trainers.
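To make that IFTTT-over-M2M idea a bit more concrete, here is a tiny hedged sketch of a trigger-action rule running over beacon sightings; the beacon IDs, event shape, and actions are all invented for illustration.

```ts
// Tiny illustrative sketch of an IFTTT-style rule over beacon sightings;
// beacon IDs, the event shape, and the actions are all assumptions.
interface BeaconSighting {
  beaconId: string;
  proximity: 'immediate' | 'near' | 'far';
}

interface Rule {
  when: (s: BeaconSighting) => boolean;
  then: (s: BeaconSighting) => void;
}

const rules: Rule[] = [
  {
    // IF a student walks near the (hypothetical) lab-entrance beacon...
    when: (s) => s.beaconId === 'lab-entrance' && s.proximity !== 'far',
    // ...THEN push today's lab instructions to their device.
    then: () => console.log("send today's lab instructions"),
  },
];

function onSighting(s: BeaconSighting): void {
  for (const rule of rules) {
    if (rule.when(s)) rule.then(s);
  }
}

onSighting({ beaconId: 'lab-entrance', proximity: 'near' });
```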
We recently covered a then-unnamed VR project by developer Tomáš “Frooxius” Mariančík, the mind behind the stunning ‘Sightline’ series of VR demos. The project fused Leap Motion skeletal hand tracking with Oculus Rift DK2 positional tracking to produce an impressively intuitive VR interface. Now, a new video of the interface shows impressive progress. We catch up with Tomáš to find out more about this mysterious VR project.
Enter the ‘World of Comenius’
The virtual reality resurgence currently underway necessarily and predictably concentrates on bringing people new ways to consume and experience media and games. This is where virtual reality has the best chance of breaking through as a viable technology, one that will appeal to consumers worldwide. But virtual reality’s greatest impact, at least in terms of historical worth to society, could and probably will come in non-entertainment fields.
Computer-based technology continues to evolve at an ever-accelerating rate, creating both opportunities and challenges for science centers and museums. We are now seeing computing enter new realms, ones that are potentially more promising for exhibit development than earlier ones.
Samsung has made some incremental improvements to its Smart TV platform for 2014. During International CES the company unveiled the Multi-Link feature, which lets you split the screen and use one half to get more information about content you are watching. For example, you can watch live TV on half the screen and get search results from a web browser on the other or seek out relevant YouTube content. In effect, the company is enabling ‘companion’ or Second Screen activities but on the main screen.
Items re: IBM and Watson:
IBM forms new Watson Group to meet growing demand for cognitive innovations — from IBM.com
Headquartered in NYC’s “Silicon Alley,” the new IBM Watson Group will include a Watson Innovation Hub, fueling new products and start-ups. The group also introduced new cloud solutions to accelerate research, visualize big data and enable analytics exploration.
IBM bets big on Watson-branded cognitive computing — from CIO.com
IBM will dedicate a third of its research projects to cognitive computing, IBM execs revealed at the launch of its Watson business unit.
Nuance Communications, Inc. (NASDAQ: NUAN) today announced that its Dragon TV platform now includes voice biometrics for TVs, set-top boxes and second screen applications, creating an even more personalized TV experience through voice. Upon hearing a person’s unique voice, Dragon TV not only understands what is being said, but authenticates who is saying it, and from there enables personalized content for different members of a household. As a result, individuals can have immediate access to their own preferred channels and content, customized home screens and social media networks.
From DSC: Re: this last one from Nuance, imagine using this to get rid of the problem/question in online learning — is it really Student A taking that quiz? Also, this type of technology could open up possibilities for personalized playlists/content for each learner.
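In rough outline, voice biometrics could gate a quiz the way generic speaker verification does: compare an embedding of the live voice against an enrolled voiceprint and accept the answer if the similarity clears a threshold. The sketch below is a generic illustration, not Nuance’s actual implementation; the embedding source and threshold are assumptions.

```ts
// Generic sketch of speaker verification (not Nuance's API): compare a
// live-voice embedding to an enrolled voiceprint via cosine similarity.
type Embedding = number[];

function cosineSimilarity(a: Embedding, b: Embedding): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

const THRESHOLD = 0.8; // assumed acceptance threshold; tuned in practice

// The embeddings would come from a speaker-recognition model (assumed).
function verifySpeaker(enrolled: Embedding, live: Embedding): boolean {
  return cosineSimilarity(enrolled, live) >= THRESHOLD;
}

// e.g. gate each quiz question on a fresh voice sample:
const voiceprint: Embedding = [0.1, 0.7, 0.2];
const sample: Embedding = [0.12, 0.68, 0.22];
console.log(verifySpeaker(voiceprint, sample)); // true if similar enough
```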
With “From the Campfire to the Holodeck: Creating Engaging and Powerful 21st Century Learning Environments,” award-winning futurist and educational consultant David Thornburg sets out to provide schools with a guidebook for transitioning from traditional classrooms and lecture halls to the immersive, student-centered, technologically driven learning experience of tomorrow. That’s where the title comes in, if you haven’t picked up on it yet, as it takes teachers from being the “sage on the stage” dictating everything to students to being the “guide at the side,” facilitating the experience.