The Mobile Crane Simulator combines an Oculus headset with a modular rig to greatly reduce the cost of training. The system, from Industrial Training International and Serious Labs, Inc., will debut at ConExpo this March in Las Vegas. The designers chose the Oculus for its comfort and portability, but the set-up supports OpenVR, allowing it to potentially work on the Vive as well. (The “mobile” in the device’s name refers to a type of crane, rather than to mobile VR.) – ROAD TO VR
Apple’s (AAPL) upcoming iPhone 8 smartphone will include a 3D-sensing module to enable augmented-reality applications, Rosenblatt Securities analyst Jun Zhang said Wednesday. Apple has included the 3D-sensing module in all three current prototypes of the iPhone 8, which have screen sizes of 4.7, 5.1 and 5.5 inches, he said. “We believe Apple’s 3D sensing might provide a better user experience with more applications,” Zhang said in a research report. “So far, we think 3D sensing aims to provide an improved smartphone experience with a VR/AR environment.”
Apple’s iPhone 8 is expected to have 3D-sensing tech like Lenovo’s Phab 2 Pro smartphone. (Lenovo)
As we look forward to 2017, then, we’ve reached out to a bunch of industry experts and insiders to get their views on where we’re headed over the next 12 months.
2016 provided hints of where Facebook, HTC, Sony, Google, and more will take their headsets in the near future, but where do the industry’s best and brightest think we’ll end up this time next year? With CES, the year’s first major event, now in the books, let’s hear from some of those who work with VR itself about what happens next.
We asked all of these developers the same four questions:
1) What do you think will happen to the VR/AR market in 2017?
2) What NEEDS to happen to the VR/AR market in 2017?
3) What will be the big breakthroughs and innovations of 2017?
4) Will 2017 finally be the “year of VR?”
The MEL app turned my iPhone 6 into a virtual microscope, letting me walk through 360 degree, 3-D representations of the molecules featured in the experiment kits.
Labster is exploring new platforms by which students can access its laboratory simulations and is pleased to announce the release of its first Google Daydream-compatible virtual reality (VR) simulation, ‘Labster: World of Science’. This new simulation, modeled on Labster’s original ‘Lab Safety’ virtual lab, continues to incorporate scientific learning alongside a specific context, enriched by story-telling elements. The use of the Google VR platform has enabled Labster to fully immerse the student, or science enthusiast, in a wet lab that can easily be navigated with intuitive use of Daydream’s handheld controller.
Jessica Brillhart, Google’s principal VR filmmaker, has taken to calling people “visitors” rather than “viewers,” as a way of reminding herself that in VR, people aren’t watching what you’ve created. They’re living it. Which changes things.
In November, we launched Daydream with the goal of bringing high quality, mobile VR to everyone. With the Daydream View headset and controller, and a Daydream-ready phone like the Pixel or Moto Z, you can explore new worlds, kick back in your personal VR cinema and play games that put you in the center of the action.
Daydream-ready phones are built for VR with high-resolution displays, ultra smooth graphics, and high-fidelity sensors for precise head tracking. To give you even more choices to enjoy Daydream, today we’re welcoming new devices that will soon join the Daydream-ready family.
Kessler Foundation, one of the largest public charities in the United States, is funding a virtual reality training project to support high school students with disabilities. The foundation is providing a two-year, $485,000 Signature Employment Grant to the University of Michigan in Ann Arbor, to launch the Virtual Reality Job Interview Training program. Kessler Foundation says the VR program will allow for highly personalized role-play, with precise feedback and coaching that may be repeated as often as desired without fear or embarrassment.
Deep-water safety training goes virtual — from shell.com by Soh Chin Ong
How a visit to a shopping centre led to the use of virtual reality safety training for a new oil production project, Malikai, in the deep waters off Sabah in Malaysia.
11 Ed Tech Trends to Watch in 2017 — from campustechnology.com by Rhea Kelly — with Susan Aldridge, Gerard Au, myself, Marci Powell, & Phil Ventimiglia
Five higher ed leaders analyze the hottest trends in education technology this year.
From DSC: I greatly enjoyed this project and appreciated being able to work with Rhea, Susan, Gerard, Marci, and Phil.
From DSC: In the future, I’d like to see holograms provide stunning visual centerpieces for the entrance ways into libraries, or in our classrooms, or in our art galleries, recital halls, and more. The object(s), person(s), scene(s) could change into something else, providing a visually engaging experience that sets a tone for that space, time, and/or event.
Eventually, perhaps these types of technologies/setups will even be a way to display artwork within our homes and apartments.
The headlines for Pokémon GO were initially shocking, but by now they’re familiar: as many as 21 million active daily users, 700,000 downloads per day, $5.7 million in-app purchases per day, $200 million earned as of August. Analysts anticipate the game will garner several billion dollars in ad revenue over the next year. By almost any measure, Pokémon GO is huge.
The technologies behind the game, augmented and virtual reality (AVR), are huge too. Many financial analysts expect the technology to generate $150 billion over the next three years, outpacing even smartphones with unprecedented growth, much of it in entertainment. But AVR is not only about entertainment. In August 2015, Teegan Lexcen was born in Florida with only half a heart and needed surgery. With current cardiac imaging software insufficient to assist with such a delicate operation on an infant, surgeons at Nicklaus Children’s Hospital in Miami turned to 3D imaging software and a $20 Google Cardboard VR set. They used a cellphone to peer into the baby’s heart, saw exactly how to improve her situation and performed the successful surgery in December 2015.
“I could see the whole heart. I could see the chest wall,” Dr. Redmond Burke told Today. “I could see all the things I was worried about in creating an operation.”
Texas Tech University Health Sciences Center in Lubbock and San Diego State University are both part of a Pearson mixed reality pilot aimed at leveraging mixed reality to solve challenges in nursing education.
…
At Bryn Mawr College, a women’s liberal arts college in Pennsylvania, faculty, students, and staff are exploring various educational applications for the HoloLens mixed reality devices. They are testing Skype for HoloLens to connect students with tutors in Pearson’s 24/7 online tutoring service, Smarthinking.
…
At Canberra Grammar School in Australia, Pearson is working with teachers in a variety of disciplines to develop holograms for use in their classrooms. The University of Canberra is partnering with Pearson to provide support for the project and evaluate the impact these holograms have on teaching and learning.
As fantastic as technologies like augmented and mixed reality may be, experiencing them, much less creating for them, requires a sizable financial investment — one just beyond the reach of consumers as well as your garage-type indie developer. AR and VR startup Zappar, however, wants to smash that perception. With ZapBox, you can grab a kit for less than the price of a triple-A video game to start your journey towards mixed reality fun and fame. It’s Magic Leap meets Google Cardboard. Or as Zappar itself says, making Magic Leap, magic cheap!
2016 has been promoted as the year of virtual reality. In the space of a few months, brands like Facebook, Samsung and Sony have all come out with VR products of their own. But another closely related technology has been making a growing presence in the tech industry. Augmented reality, or simply AR, is gaining ground among tech companies and even consumers. Google was the first contender for coolest AR product with its Google Glass. Too bad that did not work out; it felt like a product too far ahead of its time. Companies like Microsoft, Magic Leap and even Apple are hoping to pick up from where Google left off. They are creating their own smart glasses that will, hopefully, do better than Google Glass. In our article, we look at some of the coolest augmented reality smart glasses around.
Some of them are already out while others are in development.
It’s no secret that we here at Labster are pretty excited about VR. However, if we are to successfully introduce VR into education and training, we need to know how to create VR simulations that unlock these great new ways of learning.
Christian Jacob and Markus Santoso are trying to re-create the experience of the aforementioned agents in Fantastic Voyage. Working with 3D modelling company Zygote, they and recent MSc graduate Douglas Yuen have created HoloCell, an educational software application. Using Microsoft’s revolutionary HoloLens AR glasses, HoloCell provides a mixed reality experience allowing users to explore a 3D simulation of the inner workings, organelles, and molecules of a healthy human cell.
Upload is teaming up with Udacity, Google and HTC to build an industry-recognized VR certification program.
…
According to Udacity representatives, the organization will now be adding a VR track to its “nanodegree” program. Udacity’s nanodegrees are certification routes that can be completed entirely online at a student’s own pace. These courses typically take six to 12 months and cost $199 per month. Students will also receive half of their tuition back if they complete a course within six months. The new VR course will follow this pattern as well.
The VR nanodegree program was curated by Udacity after the organization interviewed dozens of VR savvy companies about the type of skills they look for in a potential new hire. This information was then built into a curriculum through a joint effort between Google, HTC and Upload.
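For a rough sense of those numbers, here is a small illustrative calculation (my own sketch, based only on the pricing described above: $199 per month, with half of tuition refunded if a student finishes within six months):

```python
# Illustrative only: estimated out-of-pocket cost of a VR nanodegree
# at $199/month, with a 50% refund if completed within six months.
MONTHLY_FEE = 199

def nanodegree_cost(months_to_complete: int) -> float:
    """Return the estimated net cost after any completion refund."""
    tuition = MONTHLY_FEE * months_to_complete
    refund = tuition / 2 if months_to_complete <= 6 else 0
    return tuition - refund

for months in (6, 9, 12):
    print(f"{months} months: ${nanodegree_cost(months):,.0f}")
# 6 months: $597   9 months: $1,791   12 months: $2,388
```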
Virtual reality helps Germany catch last Nazi war criminals — from theguardian.com by Agence France-Presse
Lack of knowledge no longer an excuse as precise 3D model of Auschwitz, showing gas chambers and crematoria, helps address atrocities
Excerpt:
German prosecutors and police have developed 3D technology to help them catch the last living Nazi war criminals with a highly precise model of Auschwitz.
German prosecutors and police have begun using virtual reality headsets in their quest to bring the last remaining Auschwitz war criminals to justice, AFP reported Sunday.
Using the blueprints of the death camp in Nazi-occupied Poland, Bavarian state crime office digital imaging expert Ralf Breker has created a virtual reality model of Auschwitz which allows judges and prosecutors to mimic moving around the camp as it stood during the Holocaust.
Technology is hoping to turn empathy into action. Or at least, the United Nations is hoping to do so. The intergovernmental organization is more than seven decades old at this point, but it’s constantly finding new ways to better the world’s citizenry. And the latest tool in its arsenal? Virtual reality.
Last year, the UN debuted its United Nations Virtual Reality initiative, which uses the technology to advocate for communities the world over. And more recently, the organization launched an app made specifically for virtual reality films. First debuted at the Toronto International Film Festival, this app encourages folks not only to watch the UN’s VR films, but to then take action by way of donations or volunteer work.
If you’re an Apple user and want an untethered virtual reality system, you’re currently stuck with Google Cardboard, which doesn’t hold a candle to the room scale VR provided by the HTC Vive (a headset not compatible with Macs, by the way). But spatial computing company Occipital just figured out how to use their Structure Core 3D Sensor to provide room scale VR to any smartphone headset—whether it’s for an iPhone or Android.
The Body VR is a great example of how the Oculus Rift and Gear VR can be used to educate as well as entertain. Starting today, it’s also a great example of how the HTC Vive can do the same.
The developers previously released this VR biology lesson for free back at the launch of the Gear VR and, in turn, the Oculus Rift. Now an upgraded version is available for Valve and HTC’s Vive headset via SteamVR. You’ll still get the original experience in which you explore the human body, travelling through the bloodstream to learn about blood cells and looking at how organelles work. The piece is narrated as you go.
For a moment, students were taken into another world without leaving the great halls of Harvard. Some students had a great time exploring the ocean floor and seeing unique underwater animals, others tried their hand at hockey, while still others screamed as they got into a racecar and sped around a virtual speedway. All of them got a taste of what virtual and augmented reality look like.
All of this, of course, was not just about fun but about how augmented and virtual reality can transform every kind of industry. This will be discussed and demonstrated at the i-lab in the coming weeks, with Rony Abovitz, CEO of Magic Leap Inc., as the keynote speaker.
Abovitz was responsible for developing the “Mixed Reality Lightfield,” a technology that combines augmented and virtual reality. According to Abovitz, it will help those who are struggling to transfer two-dimensional information or text into “spatial learning.”
“I think it will make life easier for a lot of people and open doors for a lot of people because we are making technology fit how our brains evolved into the physics of the universe rather than forcing our brains to adapt to a more limited technology,” he added.
From DSC: I recently met Maaroof Fakhri at the Next Generation Learning Spaces Conference. It was a pleasure to meet him and hear him speak of the work they are doing at Labster (which is located in Denmark). He is very innovative, and he shines forth with a high degree of energy and creativity.
Keep an eye on the work they are doing. Very sharp.
Learnathons, on the other hand, are optimized sessions that teach participants how to apply what they learn as soon as possible. They are on the opposite end of how classroom teaching is organized, with lessons spread out over the course of a semester focusing on theory and weekly practice. They are a fairly new concept, but they have created an environment for learning that is speeding up comprehension and application to levels that aren’t seen elsewhere.
Making high school science labs more real, more engaging, and more accessible
Remote online laboratories (iLabs) are experimental facilities that can be accessed through the Internet, allowing students and educators to carry out experiments from anywhere at any time.
Stepping in front of a classroom of skeptical students can be nerve-wracking for first-time teachers, but a new teaching platform at the University of Central Florida gives educators-in-training the option of conquering their classroom jitters in a virtual environment.
Educators must navigate social, pedagogical and professional hurdles all at once. And TeachLivE is the first of its kind — a classroom simulator that can emulate these challenges and scale its difficulty to the specific needs of the teacher.
TeachLivE places a teacher-in-training in a virtual classroom populated by computer-generated students. A Skype conference call and a Microsoft Kinect motion sensor power the high-tech pantomiming behind the platform. It’s currently being used at more than 80 campuses across the U.S. to train some of the next generation of educators, and it appears to be working.
TLE TeachLivE™ is a mixed-reality classroom with simulated students that provides teachers the opportunity to develop their pedagogical practice in a safe environment that doesn’t place real students at risk. This lab is currently the only one in the country using a mixed reality environment to prepare or retrain pre-service and in-service teachers. The use of TLE TeachLivE™ Lab has also been instrumental in developing transition skills for students with significant disabilities, providing immediate feedback through bug-in-ear technology to pre-service teachers, developing discrete trial skills in pre-service and in-service teachers, and preparing teachers in the use of STEM-related instructional strategies.
It’s a vision that involves a multitude of technologies — technologies and trends that we continue to see being developed and ones that could easily converge in the not-too-distant future to offer us some powerful opportunities for lifelong learning!
Consider that it won’t be very long before a learner will be able to reinvent himself/herself throughout their lifetime, for a very affordable price — while taking à la carte courses from some of the best professors, trainers, leaders, and experts throughout the world, all from the comfort of their living room. (Not to mention tapping into streams of content that will be available on such platforms.)
So when I noticed that Lynda.com now has a Roku channel for the big screen, it got my attention.
Let’s add a few more pieces to the puzzle, given that some other relevant trends are developing quite nicely:
tvOS-based apps are now possible — and already there are over 2,600 of them, and it’s only been a month or so since Apple made this new platform available to the masses
Now, let’s add the ability to take courses online via a virtual reality interface — globally, at any time; VR is poised to have some big years in 2016 and 2017!
Lynda.com and LinkedIn.com’s fairly recent merger and their developing capabilities to offer micro-credentials, badges, and competency-based education (CBE) — while keeping track of the courses that a learner has taken
The need for lifelong learning is now a requirement, as we need to continually reinvent ourselves — especially given the increasing pace of change and as complete industries are impacted (broadsided), almost overnight
Big data, algorithms, and artificial intelligence (AI) continue to pick up steam; for example, consider the cognitive computing capabilities being developed in IBM’s Watson — which should be able to deliver personalized digital playlists and likely some level of intelligent tutoring as well
Courses could be offered at a fraction of the cost, as MOOC-sized classes could distribute the costs over a greater # of people and back end systems could help grade/assess the students’ work; plus the corporate world continues to use MOOCs to cost-effectively train their employees across the globe (MOOCs would thrive on such a tvOS-based platform, whereby students could watch lectures, demonstrations, and simulations on the big screen and then communicate with each other via their second screens*)
As the trends of machine-to-machine communications (M2M) and the Internet of Things (IoT) pick up, relevant courses/modules will likely be instantly presented to people to learn about a particular topic or task. For example, say I purchase a crib and want to know how to put it together. The chip in the crib communicates to my Smart TV or to my augmented reality glasses/headset, and then a system loads up some multimedia-based training/instructions on how to put it together (see the sketch after this list).
Streams of content continue to be developed and offered — via blogs, via channels like Periscope and Meerkat, via social media-based channels, and via other channels — and these streams of multimedia-based content should prove to be highly useful to individual learners as well as for communities of practice
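To make the crib example above a bit more concrete, here is a minimal hypothetical sketch (the model IDs, catalog, and message format are all invented for illustration; no real product API is implied): the product’s chip announces a model identifier, and the TV or headset resolves it to the relevant how-to content.

```python
# Hypothetical sketch of the "smart crib" scenario described above. The model IDs,
# catalog, and discovery message format are invented for illustration only.
import json

TUTORIAL_CATALOG = {
    "crib-4200": "https://example.com/tutorials/crib-4200-assembly-vr",
}

def handle_discovery_message(raw_message: str) -> str | None:
    """Parse a broadcast from a nearby product and return a tutorial URL, if known."""
    device = json.loads(raw_message)
    return TUTORIAL_CATALOG.get(device.get("model_id"))

# The crib's chip announces itself; the TV/headset looks up the how-to content.
announcement = json.dumps({"model_id": "crib-4200", "type": "furniture"})
url = handle_discovery_message(announcement)
if url:
    print(f"Loading assembly tutorial: {url}")
```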
Anyway, these next few years will be packed with change — the pace of which will likely take us by surprise. We need to keep our eyes upward and outward — peering into the horizons rather than looking downwards — doing so should reduce the chance of us getting broadsided!
*It’s also possible that AR and VR will create a future whereby we only need 1 “screen”
Addendum: After I wrote/published the item above…it was interesting to then see the item below:
MUNICH, Dec. 15, 2015 /PRNewswire/ — IBM (NYSE: IBM) today announced the opening of its global headquarters for Watson Internet of Things (IoT), launching a series of new offerings, capabilities and ecosystem partners designed to extend the power of cognitive computing to the billions of connected devices, sensors and systems that comprise the IoT. These new offerings will be available through the IBM Watson IoT Cloud, the company’s global platform for IoT business and developers.
Microsoft will share its LIQUi|> (no, that’s not a typo) simulator software with the public, so academics can test quantum computing operations on laptops or in the cloud.
On Friday, Microsoft is releasing simulation software that it says will let academics, scientists, or even do-it-yourself eggheads simulate quantum computing on their laptops.
The promise of quantum computing, which breaks the nuts-and-bolts of computing down to the sub-atomic level, is that it can solve problems that go far beyond the capabilities of even today’s most powerful computers.
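To give a concrete sense of what “simulating quantum computing operations on a laptop” involves, here is a minimal illustrative sketch in Python/NumPy (not LIQUi|> itself): it prepares a two-qubit entangled state by multiplying a state vector by gate matrices — the same basic linear algebra such simulators perform at far larger scale.

```python
import numpy as np

# Single-qubit basis state |0> and standard gates
ket0 = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate
I = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Start in |00>, apply H to the first qubit, then CNOT -> Bell state
state = np.kron(ket0, ket0)
state = np.kron(H, I) @ state
state = CNOT @ state

# Measurement probabilities for |00>, |01>, |10>, |11>
probs = np.abs(state) ** 2
print(probs)  # ~[0.5, 0, 0, 0.5]: the entangled (|00> + |11>)/sqrt(2) state
```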
Long ago, someone coined the adage, ‘a picture is worth a thousand words.’ Illustrators and teachers have grasped that simple truism throughout thousands of years of human history — it’s a fact that many, if not most of us, are visual learners. That’s especially true when it comes to things mechanical. That said, it’s 2015, and certainly time for a twenty-first-century iteration of this venerable learning principle to manifest. And in fact, it has. Consider Explain 3D — an interactive encyclopedia of 3D simulations and visualizations that helps kids, teachers or parents to explain and understand how things work. Explain 3D is a great tool to help youngsters develop the skills of logical thinking and imagination, while poking around in some very cool modern technologies and technical stuff.
The company put out the call for app submissions on Wednesday for tvOS. The Apple TV App Store will debut as Apple TV units are shipped out next week.
…
The main attraction of Apple TV is a remote with a glass touch surface and a Siri button that allows users to search by voice. Apple tvOS is capable of running apps ranging from Airbnb to Zillow and games like Crossy Road. Another major perk of Apple TV will be universal search, which allows users to scan for movies and television shows and see results from multiple sources, instead of having to conduct the same search within multiple apps.
Apple CEO Tim Cook hopes the device will simplify how viewers consume content.
From DSC: The days of developing for a “TV”-based OS are now upon us: tvOS is here. I put “TV” in quotes because what we know of the television in the year 2015 may look entirely different 5-10 years from now.
Once developed, things like lifelong learning, web-based learner profiles, badges and/or certifications, communities of practice, learning hubs, smart classrooms, virtual tutoring, virtual field trips, AI-based digital learning playlists, and more will never be the same again.
Addendum on 10/26/15: The article below discusses one piece of the bundle of technologies that I’m trying to get at via my Learning from the Living [Class] Room Vision:
No More Pencils, No More Books — by Will Oremus
Artificially intelligent software is replacing the textbook—and reshaping American education.
Excerpt: ALEKS starts everyone at the same point. But from the moment students begin to answer the practice questions that it automatically generates for them, ALEKS’ machine-learning algorithms are analyzing their responses to figure out which concepts they understand and which they don’t. A few wrong answers to a given type of question, and the program may prompt them to read some background materials, watch a short video lecture, or view some hints on what they might be doing wrong. But if they’re breezing through a set of questions on, say, linear inequalities, it may whisk them on to polynomials and factoring. Master that, and ALEKS will ask if they’re ready to take a test. Pass, and they’re on to exponents—unless they’d prefer to take a detour into a different topic, like data analysis and probability. So long as they’ve mastered the prerequisites, which topic comes next is up to them.
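The prerequisite-gated sequencing described above can be pictured with a toy sketch (illustrative only — not ALEKS’ actual knowledge-space algorithm): track which topics a learner has mastered, and only offer a topic once its prerequisites are mastered.

```python
# Toy model of prerequisite-gated adaptive sequencing (illustrative only;
# not ALEKS' actual algorithm). Topic names echo the excerpt above.
PREREQS = {
    "linear_inequalities": [],
    "polynomials": ["linear_inequalities"],
    "factoring": ["polynomials"],
    "exponents": ["factoring"],
    "data_analysis": ["linear_inequalities"],
}

def available_topics(mastered: set[str]) -> list[str]:
    """Topics the learner may choose next: prerequisites mastered, topic not yet mastered."""
    return [t for t, reqs in PREREQS.items()
            if t not in mastered and all(r in mastered for r in reqs)]

def record_answers(mastered: set[str], topic: str, correct: int, attempted: int) -> None:
    """Mark a topic mastered once most practice questions are answered correctly."""
    if attempted >= 5 and correct / attempted >= 0.8:
        mastered.add(topic)

mastered: set[str] = set()
record_answers(mastered, "linear_inequalities", correct=9, attempted=10)
print(available_topics(mastered))  # ['polynomials', 'data_analysis']
```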
From DSC: We’ll also be seeing the integration of the areas listed below with this type of “TV”-based OS/platform:
Artificial Intelligence (AI)
Data mining and analytics
Learning recommendation engines
Digital learning playlists
New forms of Human Computer Interfaces (HCI)
Intelligent tutoring
Social learning / networks
Videoconferencing with numerous other learners from across the globe
Virtual tutoring, virtual field trips, and virtual schools
Online learning to the Nth degree
Web-based learner profiles
Multimedia (including animations, simulations, and more)
Advanced forms of digital storytelling
and, most assuredly, more choice & more control.
Competency-based education and much lower cost alternatives could also be possible with this type of learning environment. The key will be to watch — or better yet, to design and create — what becomes of what we’re currently calling the television, and what new affordances/services the “TV” begins to offer us.
SAN FRANCISCO — September 9, 2015 — Apple® today announced the all-new Apple TV®, bringing a revolutionary experience to the living room based on apps built for the television. Apps on Apple TV let you choose what to watch and when you watch it. The new Apple TV’s remote features Siri®, so you can search with your voice for TV shows and movies across multiple content providers simultaneously.
The all-new Apple TV is built from the ground up with a new generation of high-performance hardware and introduces an intuitive and fun user interface using the Siri Remote™. Apple TV runs the all-new tvOS™ operating system, based on Apple’s iOS, enabling millions of iOS developers to create innovative new apps and games specifically for Apple TV and deliver them directly to users through the new Apple TV App Store™.
…
tvOS is the new operating system for Apple TV, and the tvOS SDK provides tools and APIs for developers to create amazing experiences for the living room the same way they created a global app phenomenon for iPhone® and iPad®. The new, more powerful Apple TV features the Apple-designed A8 chip for even better performance so developers can build engaging games and custom content apps for the TV. tvOS supports key iOS technologies including Metal™, for detailed graphics, complex visual effects and Game Center, to play and share games with friends.
As faculty at colleges and universities are all too aware, it’s hard to do two jobs at the same time. Since the advent of the modern research university over a century ago, faculty have effectively held down two jobs: conducting (and publishing) research and teaching students.
Arguments for the dual-role professor seem logical. Knowledge production should make one a better instructor. Students should benefit from teachers producing the latest knowledge. But there’s precious little data to support that adding the research job to the instruction job improves student outcomes.
The downside is that both jobs require significant expertise and commitment to do well.
…
There is an emerging consensus as to what works best for onground instruction. It’s called the Dynamic Classroom, and it looks like this:
Flip classroom so “transfer of information” occurs ahead of class
Incorporate technology in the classroom (handheld clickers or smartphone apps) to quickly ascertain whether students have understood key concepts
Integrate active learning techniques to improve understanding of key concepts, including peer learning, group problem solving, project-based learning and experiential learning via studios and workshops
Include “perspective transformation” exercises wherein students change their frames of reference by critically reflecting on their assumptions
From DSC: First of all, I second the idea of splitting up the responsibilities of researching and teaching. Both roles are full-time jobs and require different skill sets. With students paying ever-higher tuition bills, they deserve to take their courses from professors who know how to teach (not an easy job, by the way!).
But the unbundling doesn’t — and shouldn’t — stop with the splitting up of the teaching and research roles.
Let’s look at another of the instructional shifts that Ryan considers — and that is the move towards the use of smartphones and apps:
In this environment, we can imagine one app for Economics 101 and another for Psychology 110. They are also the ideal platform for simulations and gamified learning and can tailor the user experience further by incorporating real-world inputs (e.g., location of the student) into the material. But, like the dynamic classroom, apps require an unparalleled level of development and instructional expertise—a full-time job for faculty who will be teaching online.
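As one way to picture “incorporating real-world inputs into the material,” here is a small hypothetical sketch (the cities and examples are invented for illustration) in which an Economics 101 app picks a locally relevant running example based on the student’s reported city:

```python
# Hypothetical sketch: tailor an Economics 101 lesson to the student's location.
# The city-to-example mapping below is invented purely for illustration.
LOCAL_EXAMPLES = {
    "Detroit": "auto-industry supply shocks",
    "Houston": "oil price elasticity",
    "Seattle": "tech-sector labor markets",
}

def pick_lesson_example(city: str, topic: str = "supply and demand") -> str:
    """Return a locally relevant framing for the lesson, with a generic fallback."""
    local = LOCAL_EXAMPLES.get(city, "a national commodity market")
    return f"Today's {topic} lesson uses {local} as its running example."

print(pick_lesson_example("Detroit"))
```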
I think there’s some serious potential with this approach, especially given the trend towards more mobile computing and the affordances that come with using mobile technologies.
However, when we start delivering teaching and learning experiences that involve the digital/virtual realm like this, we’re instantly catapulted into a world that requires additional skills. As such, I highly doubt that the majority of faculty members have the time, interests, passions, or the abilities/gifts to code such apps. They would have to simultaneously be (or become) a programmer/developer, an instructional designer, a graphic designer, a copyright expert, an expert in accessibility, instantly knowledgeable in user interface and user experience design, as well as continue to serve as the Subject Matter Expert (SME) — and I could list other roles as well. That is why we need TEAMS of specialists. If the trends towards moving more of our teaching and learning experiences online and/or into such digital realms continue, then our current models simply won’t cut it anymore, at least in the majority of cases.
I appreciate Ryan’s article and second the main idea of splitting up the teaching and researching responsibilities. But again, when we’re talking about developing apps, we had better be talking about employing teams — or the students will likely not be better off.
At some colleges, media teams sit down with professors ahead of time and lay out long-term strategies to determine how video may enhance the learning experience of students in their courses.
…
The media team offers instructors a number of planning worksheets to encourage them to think more about the purpose of videos in their courses.