Virtual reality — once the stuff of science fiction — is still in its infancy. But there’s already a gold rush around the technology, which plunges viewers into a simulated 3-D environment and lets them explore their surroundings as if they were really there.
Technology and entertainment giants are betting billions that virtual reality is much more than a passing fad, one that will revolutionize the way we experience movies, news, sporting events, video games and more.
Meanwhile, filmmakers and other creators are grappling with an entirely new storytelling language and dealing with some formidable challenges — claustrophobic headsets that can make people cybersick.
…
Here, some of the other pioneers in film, journalism, sports and gaming talk about the potential and struggles of building a new art form from the ground up.
…
“You really engage on scene in a way that gives you this incredible connection to where you are,” Ms. de la Peña said. “And that’s why, early on, I was calling it an empathy generator, an empathy machine.”
From DSC: This posting is meant to surface the need for debates/discussions, new policy decisions, and for taking the time to seriously reflect upon what type of future we want. Given the pace of technological change, we need to be constantly asking ourselves what kind of future we want and then actively creating that future, instead of just letting things happen because they can happen. (That is, just because something can be done doesn’t mean it should be done.)
Gerd Leonhard’s work is relevant here. In the resource immediately below, Gerd asserts:
I believe we urgently need to start debating and crafting a global Digital Ethics Treaty. This would delineate what is and is not acceptable under different circumstances and conditions, and specify who would be in charge of monitoring digressions and aberrations.
I am also including some other relevant items here that bear witness to the increasingly rapid speed at which we’re moving now.
A “robot revolution” will transform the global economy over the next 20 years, cutting the costs of doing business but exacerbating social inequality, as machines take over everything from caring for the elderly to flipping burgers, according to a new study.
As well as robots performing manual jobs, such as hoovering the living room or assembling machine parts, the development of artificial intelligence means computers are increasingly able to “think”, performing analytical tasks once seen as requiring human judgment.
In a 300-page report, revealed exclusively to the Guardian, analysts from investment bank Bank of America Merrill Lynch draw on the latest research to outline the impact of what they regard as a fourth industrial revolution, after steam, mass production and electronics.
“We are facing a paradigm shift which will change the way we live and work,” the authors say. “The pace of disruptive technological innovation has gone from linear to parabolic in recent years. Penetration of robots and artificial intelligence has hit every industry sector, and has become an integral part of our daily lives.”
Humans who have had their DNA genetically modified could exist within two years after a private biotech company announced plans to start the first trials into a ground-breaking new technique.
Editas Medicine, which is based in the US, said it plans to become the first lab in the world to ‘genetically edit’ the DNA of patients suffering from a genetic condition – in this case the blinding disorder Leber congenital amaurosis.
Gartner predicts our digital future — from gartner.com by Heather Levy. Gartner’s Top 10 Predictions herald what it means to be human in a digital world.
Excerpt:
Here’s a scene from our digital future: You sit down to dinner at a restaurant where your server was selected by a “robo-boss” based on an optimized match of personality and interaction profile, and the angle at which he presents your plate, or how quickly he smiles can be evaluated for further review. Or, perhaps you walk into a store to try on clothes and ask the digital customer assistant embedded in the mirror to recommend an outfit in your size, in stock and on sale. Afterwards, you simply tell it to bill you from your mobile and skip the checkout line.
These scenarios describe two predictions in what will be an algorithmic and smart machine driven world where people and machines must define harmonious relationships. In his session at Gartner Symposium/ITxpo 2016 in Orlando, Daryl Plummer, vice president, distinguished analyst and Gartner Fellow, discussed how Gartner’s Top Predictions begin to separate us from the mere notion of technology adoption and draw us more deeply into issues surrounding what it means to be human in a digital world.
But augmented reality will also bring challenges for law, public policy and privacy, especially pertaining to how information is collected and displayed. Issues regarding surveillance and privacy, free speech, safety, intellectual property and distraction—as well as potential discrimination—are bound to follow.
The Tech Policy Lab brings together faculty and students from the School of Law, Information School and Computer Science & Engineering Department and other campus units to think through issues of technology policy. “Augmented Reality: A Technology and Policy Primer” is the lab’s first official white paper aimed at a policy audience. The paper is based in part on research presented at the 2015 International Joint Conference on Pervasive and Ubiquitous Computing, or UbiComp conference.
Along these same lines, also see:
Augmented Reality: Figuring Out Where the Law Fits — from rdmag.com by Greg Watry. Excerpt:
With AR comes potential issues the authors divide into two categories. “The first is collection, referring to the capacity of AR to record, or at least register, the people and places around the user. Collection raises obvious issues of privacy but also less obvious issues of free speech and accountability,” the researchers write. The second issue is display, which “raises a variety of complex issues ranging from possible tort liability should the introduction or withdrawal of information lead to injury, to issues surrounding employment discrimination or racial profiling.”

Current privacy law in the U.S. allows video and audio recording in areas that “do not attract an objectively reasonable expectation of privacy,” says Newell. Further, many uses of AR would be covered under the First Amendment right to record audio and video, especially in public spaces. However, as AR increasingly becomes more mobile, “it has the potential to record inconspicuously in a variety of private or more intimate settings, and I think these possibilities are already straining current privacy law in the U.S.,” says Newell.
Our first Big Think comes from Stuart Russell. He’s a computer science professor at UC Berkeley and a world-renowned expert in artificial intelligence. His Big Think?
“In the future, moral philosophy will be a key industry sector,” says Russell.
Translation? In the future, the nature of human values and the process by which we make moral decisions will be big business in tech.
But augmented reality will also bring challenges for law, public policy and privacy, especially pertaining to how information is collected and displayed. Issues regarding surveillance and privacy, free speech, safety, intellectual property and distraction — as well as potential discrimination — are bound to follow.
An excerpt from:
THREE: CHALLENGES FOR LAW AND POLICY
AR systems change human experience and, consequently, stand to challenge certain assumptions of law and policy. The issues AR systems raise may be divided into roughly two categories. The first is collection, referring to the capacity of AR devices to record, or at least register, the people and places around the user. Collection raises obvious issues of privacy but also less obvious issues of free speech and accountability. The second rough category is display, referring to the capacity of AR to overlay information over people and places in something like real-time. Display raises a variety of complex issues ranging from possible tort liability should the introduction or withdrawal of information lead to injury, to issues surrounding employment discrimination or racial profiling. Policymakers and stakeholders interested in AR should consider what these issues mean for them. Issues related to the collection of information include…
Technology has progressed to the point where it’s possible for HR to learn almost everything there is to know about employees — from what they’re doing moment-to-moment at work to what they’re doing on their off hours. Guest poster Julia Scavicchio takes a long hard look at the legal and ethical implications of these new investigative tools.
Why on Earth does HR need all this data? The answer is simple — HR is not on Earth, it’s in the cloud.
The department transcends traditional roles when data enters the picture.
Many ethical questions posed through technology easily come and go because they seem out of this world.
Where will these technologies take us next? Well, to know that, we should first determine what the best of the best is now. Tech Insider talked to 18 AI researchers, roboticists, and computer scientists to see which real-life AI impresses them the most.
…
“The DeepMind system starts completely from scratch, so it is essentially just waking up, seeing the screen of a video game and then it works out how to play the video game to a superhuman level, and it does that for about 30 different video games. That’s both impressive and scary in the sense that if a human baby was born and by the evening of its first day was already beating human beings at video games, you’d be terrified.”
As technology advances, we are becoming increasingly dependent on algorithms for everything in our lives. Algorithms that can solve our daily problems and tasks will do things like drive vehicles, control drone flight, and order supplies when they run low. Algorithms are defining the future of business and even our everyday lives.
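The “order supplies when they run low” example above can be reduced to a trivial sketch. The item names and thresholds here are made up purely for illustration:

```python
# A minimal reorder-point check: flag every item whose stock has
# fallen below its (hypothetical) reorder threshold.

def reorder_list(inventory, thresholds):
    """Return items whose stock has dropped below their reorder point."""
    return [item for item, qty in inventory.items()
            if qty < thresholds.get(item, 0)]

inventory = {"printer_paper": 2, "coffee": 10}
thresholds = {"printer_paper": 5, "coffee": 4}
print(reorder_list(inventory, thresholds))  # ['printer_paper']
```

Real systems layer demand forecasting and lead times on top of this, but the core decision is exactly this comparison, run continuously.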
…
Sondergaard said that “in 2020, consumers won’t be using apps on their devices; in fact, they will have forgotten about apps. They will rely on virtual assistants in the cloud, things they trust. The post-app era is coming. The algorithmic economy will power the next economic revolution in the machine-to-machine age. Organizations will be valued, not just on their big data, but on the algorithms that turn that data into actions that ultimately impact customers.”
Robots are learning to say “no” to human orders — from quartz.com by Kit Eaton. Excerpt:
It may seem an obvious idea that a robot should do precisely what a human orders it to do at all times. But researchers in Massachusetts are trying something that many a science fiction movie has already anticipated: They’re teaching robots to say “no” to some instructions. For robots wielding potentially dangerous-to-humans tools on a car production line, it’s pretty clear that the robot should always precisely follow its programming. But we’re building more-clever robots every day and we’re giving them the power to decide what to do all by themselves. This leads to a tricky issue: How exactly do you program a robot to think through its orders and overrule them if it decides they’re wrong or dangerous to either a human or itself? This is what researchers at Tufts University’s Human-Robot Interaction Lab are tackling, and they’ve come up with at least one strategy for intelligently rejecting human orders.
Addendum on 12/14/15:
Algorithms rule our lives, so who should rule them? — from qz.com by Dries Buytaert. As technology advances and more everyday objects are driven almost entirely by software, it’s become clear that we need a better way to catch cheating software and keep people safe.
Ford, a lecturer in the computer science department, aims to develop gaming as a subfield of graphics at UCLA through her courses on virtual reality and artificial intelligence. This quarter, Ford’s virtual reality class, “Advanced Game Development for Virtual Reality,” is using gesture-tracking to create its own virtual reality games, following in the footsteps of her spring class, “Virtual Reality Game Development.”
Ford’s spring class produced several pre-patterns in virtual reality programming and presented games at the Special Interest Group on Computer Graphics and Interactive Techniques, or SIGGRAPH, an annual international graphics conference held by the Association of Computing Machinery in August.
Using Oculus Rift headsets, Ford said her spring virtual reality class used Unreal Engine 4, a collection of game development tools, to produce three-dimensional worlds within which the students created original games with coding.
It’s a 3-D world, so why not let students create and learn in 3-D? In so many disciplines — architecture, computer science, entertainment, engineering — it’s becoming increasingly useful to problem-solve and be creative in three dimensions. With 360 degree video, Google Earth’s 3-D maps, Oculus Rift’s virtual reality headset, and Google’s soon-to-be-released 3-D mapping phone, students, too, will be more immersed in 3-D technology than ever before. Luckily, there are some great tools out there to create 3-D projects in the classroom.
Most of us know Lytro for their light field cameras that capture scenes in a way that allows you to refocus an image anywhere you want with the click of a button without having to take a new image. I’ll admit, I thought it was a neat trick, but as a commercial photographer, I never saw how it would apply to someone like myself. Well, Lytro has blown me away today with the announcement of their new virtual reality camera system that works much like their light field cameras and allows the user to move within a video environment (not a computer-rendered space) while wearing a virtual reality headset. They have officially changed the game.
Even though fitness trackers like FitBit are increasing in popularity among college students, Georgieva and Craig predict wearables and immersive and virtual reality spaces will become must-haves on college campuses.
“It’s a tremendous opportunity for teaching and learning,” Craig says. “[It provides] headgear for the eyes, virtual and augmented reality, and wearables for the brain.”
Georgieva says that as technologies like wearables evolve, students’ learning styles and visual literacy shift.
Students are more immersed in a visual culture and rely on pictures to communicate, analyze and learn.
7 up-and-coming wearable technologies — from campustechnology.com by Leila Meyer. Sensory messaging devices, stress-reducing headpieces, biometric authentication bands and more — these cutting-edge wearables could soon be coming to your campus.
Anyone who’s been in hospital will know how long nurses spend taking hourly measurements for blood pressure, and much more besides. If a hospital could put a wearable device on each patient to automate the process, the time saved would be enormous – and that’s just the beginning of how the Internet of Things (IoT) promises to change healthcare.
Over the next five years the traditional ‘doctor-patient’ model will completely open up as self-monitoring devices do away with the need for routine check-ups and appointments, and IoT sensors in our homes and on our bodies increasingly allow us to look after ourselves.
Forrester released its annual report on US consumer technology use on [9/28/15], and the findings make encouraging reading for wearable developers, manufacturers and enthusiasts.
Twenty-one percent of all “online” adults in the U.S. now own a wearable device of some kind, with the younger generation leading the way in adoption.
Everybody is talking about Google Glass 2.0. To paraphrase Lavoisier’s “Traité Élémentaire de Chimie”: “Rien ne se perd, rien ne se crée, tout se transforme.” (“Nothing is lost, nothing is created, everything is transformed.”)
I’m sure you’ve heard that Google Glass is dead. Believe it or not, it is still alive. Google Glass never died; it has been “reborn” as a second version, like the phoenix.
There is much misinformation about Google Glass at the moment. Let’s get the facts straight. Here is everything you need to know about the new Google Glass 2.0.
Google Glass 2.0: Versions And Design
Google Glass 2.0 is in good hands. The old Glass team is now called Project Aura, and behind the new team is Tony Fadell. You might know Mr. Fadell as the founder of Nest. But few of you know that he designed the first iPod in 2001.
A wearable sensor that tracks strain on a pitcher’s elbow is making waves in major league baseball (MLB). The mThrow, by Motus Global, is a simple, lightweight sleeve, designed to be worn every time a pitcher throws. The smart throwing sleeve and companion iOS app collects and analyzes real-time biomechanical data from each game and practice, and uses that data to calculate workloads and recommend daily throw limits. This season, 27 MLB teams and their minor league affiliates are trying out the device, in the hope that it will help monitor pitchers’ workloads, improve pitching mechanics, and prevent injuries.
More than $40 million in funding later, the now New York-based company has launched Blippar Lab, a research and development unit that will be used to explore “innovative use cases for AR and VR across a variety of industries.” The first of these Blippar Lab-branded products is Cardio VR for Google Cardboard, which is aimed at helping children learn about the human body through virtual reality.
If the distinction between augmented reality and virtual reality is lost on you, here’s a summation. AR is when the real, physical world is “augmented” with digital content, such as images or text layered upon a screen. VR is the full recreation of a world in digital, or “virtual,” form.
Oculus Social Alpha launches on Gear VR — from uploadvr.com. Excerpt:
Today is a big day for virtual reality. It is the first day in history when people owning consumer VR headsets, running publicly available software, are able to connect with one another in a virtual space using the Internet.
First Ever Commercial Virtual Reality System to Fly in Space Brings Non-Astronauts Aboard ISS — from deepspaceindustries.com. Asteroid Mining Company and VR Data Analytics Company Approved for ISS Floating Tour. Excerpt:
MOFFETT FIELD, CA – For the first time ever, a virtual reality recording system will be flown in space. The project, announced today by Deep Space Industries (DSI), will use a spherical video capture system to create a virtual reality float-through tour of the International Space Station’s science lab.
The company put out the call for app submissions on Wednesday for tvOS. The Apple TV App Store will debut as Apple TV units are shipped out next week.
…
The main attraction of Apple TV is a remote with a glass touch surface and a Siri button that allows users to search by voice. Apple tvOS is capable of running apps ranging from Airbnb to Zillow and games like Crossy Road. Another major perk of Apple TV will be universal search, which allows users to scan for movies and television shows and see results from multiple sources, instead of having to conduct the same search within multiple apps.
Apple CEO Tim Cook hopes the device will simplify how viewers consume content.
From DSC: The days of developing for a “TV”-based OS are now upon us: tvOS is here. I put “TV” in quotes because what we know of the television in the year 2015 may look entirely different 5-10 years from now.
Once developed, things like lifelong learning, web-based learner profiles, badges and/or certifications, communities of practice, learning hubs, smart classrooms, virtual tutoring, virtual field trips, AI-based digital learning playlists, and more will never be the same again.
Addendum on 10/26/15: The article below discusses one piece of the bundle of technologies that I’m trying to get at via my Learning from the Living [Class] Room Vision:
No More Pencils, No More Books — by Will Oremus. Artificially intelligent software is replacing the textbook—and reshaping American education.
Excerpt: ALEKS starts everyone at the same point. But from the moment students begin to answer the practice questions that it automatically generates for them, ALEKS’ machine-learning algorithms are analyzing their responses to figure out which concepts they understand and which they don’t. A few wrong answers to a given type of question, and the program may prompt them to read some background materials, watch a short video lecture, or view some hints on what they might be doing wrong. But if they’re breezing through a set of questions on, say, linear inequalities, it may whisk them on to polynomials and factoring. Master that, and ALEKS will ask if they’re ready to take a test. Pass, and they’re on to exponents—unless they’d prefer to take a detour into a different topic, like data analysis and probability. So long as they’ve mastered the prerequisites, which topic comes next is up to them.
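The loop the excerpt describes (estimate mastery from answers, remediate after a few misses, unlock topics whose prerequisites are mastered, let the student pick among them) can be sketched in a few lines. To be clear, this is a hypothetical illustration, not ALEKS’s actual algorithm; the topic graph, threshold, and update rule are all invented for the example:

```python
# Toy adaptive-learning loop: per-topic mastery estimates plus a
# prerequisite graph determine which topics a student may tackle next.

PREREQS = {                      # illustrative topic graph
    "linear_inequalities": [],
    "polynomials": ["linear_inequalities"],
    "exponents": ["polynomials"],
    "data_analysis": ["linear_inequalities"],
}

def update_mastery(mastery, topic, correct, step=0.2):
    """Nudge a 0..1 mastery estimate up or down after each answer."""
    m = dict(mastery)
    delta = step if correct else -step
    m[topic] = min(1.0, max(0.0, m.get(topic, 0.5) + delta))
    return m

def available_topics(mastery, threshold=0.8):
    """Unmastered topics whose prerequisites are all mastered; the student chooses."""
    return [t for t, reqs in PREREQS.items()
            if all(mastery.get(r, 0) >= threshold for r in reqs)
            and mastery.get(t, 0) < threshold]

mastery = {"linear_inequalities": 0.5}
for correct in (True, True):     # student breezes through two questions
    mastery = update_mastery(mastery, "linear_inequalities", correct)
print(available_topics(mastery))  # ['polynomials', 'data_analysis']
```

A real system replaces the crude `step` update with a statistical model of what each answer implies about the surrounding knowledge space, but the “mastered prerequisites unlock choices” structure matches the excerpt’s description.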
NEW YORK — The New York Times is collaborating with Google to launch a virtual reality project that includes a free app to view films on devices like smartphones.
The newspaper publisher said Tuesday the NYT VR app that will be available for download in both iOS and Google Play app stores starting Nov. 5. The first virtual-reality film in the project is titled “The Displaced” and follows the experiences of child refugees from south Sudan, eastern Ukraine and Syria.
Google will supply cardboard viewers that allow customers to view a three-dimensional version of the films. More than 1 million of the viewers will be distributed to home delivery subscribers early next month, and digital subscribers will receive emails with details on how to get the free viewers.
On their way to this month’s 70th United Nation’s General Assembly, the organization’s annual high-level meeting in New York, diplomats and world leaders will pass by a makeshift glass structure—both a glossy multi-media hub, and a gateway to an entirely different world.
The hub uses virtual reality to allow the UN attendees to see Jordan’s Zaatari camp for Syrian refugees through the eyes of a little girl. And, by using an immersive video portal, which will launch later this week, they will have the opportunity to have face-to-face conversations with residents of the camp.
The effort aims to put a human face on the high-level deliberations about the refugee crisis, which will likely dominate many conversations at the United Nations General Assembly (UNGA). UN Secretary General Ban Ki-Moon has called on the meeting to be “one of compassion, prevention and, above all, action.”
From DSC: VR-based apps have a great deal of potential to develop and practice greater empathy. See these related postings:
When it comes to virtual reality, the University of Maryland, Baltimore County is going for full immersion.
Armed with funding from the National Science Foundation, the university is set to build a virtual reality “environment” that’s designed to help researchers from different fields. It’s called PI2.
In the 15-by-20-foot room, stepping into virtual reality won’t necessarily require goggles.
A visualization wall at the University of Illinois at Chicago’s Electronic Visualization Lab.
UMBC officials say their project will be similar to this. (Photo courtesy of Planar)
Now you’re ready to turn your class into an immersive game, and everything you need is right here. With the help of these resources, you can develop your own gameful class, cook up a transmedia project, design a pervasive game or create your very own [Augmented Reality Game] ARG. Games aside, these links are useful for all types of creative learning projects. In most cases, what is on offer is free and/or web based, so only your imagination will be taxed.
If augmented reality could be a shared experience, it could change the way we will use the technology.
Something along these lines is currently in development at a Microsoft laboratory run by Jaron Lanier, one of the pioneers of VR since the 1980s through his company VPL Research. The project, called Comradre, allows multiple users to share virtual- and augmented-reality experiences, reports MIT Technology Review.
Because virtual reality takes place in a fully digital environment, it is not hugely difficult to put multiple users into the same virtual instance at the same time, wirelessly synced across multiple headsets.
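At its core, the shared-instance idea above is just relaying each user’s head pose to everyone else and rendering the result. Here is a toy sketch of that relay; the message format and class names are illustrative assumptions, not any vendor’s protocol:

```python
# Minimal shared-instance server: collect pose updates from each
# headset and broadcast the combined state for everyone to render.

import json

class SharedInstance:
    def __init__(self):
        self.poses = {}   # user_id -> (position, rotation quaternion)

    def receive(self, packet: str):
        """Apply a pose update sent by one headset."""
        msg = json.loads(packet)
        self.poses[msg["user"]] = (msg["pos"], msg["rot"])

    def broadcast_state(self) -> str:
        """The state every headset renders: everyone's latest pose."""
        return json.dumps(self.poses)

room = SharedInstance()
room.receive(json.dumps({"user": "a", "pos": [0, 1.6, 0], "rot": [0, 0, 0, 1]}))
room.receive(json.dumps({"user": "b", "pos": [1, 1.6, 2], "rot": [0, 1, 0, 0]}))
print(room.broadcast_state())
```

The hard parts in practice are latency and interpolation (smoothing 60+ updates per second per user), not the synchronization model itself, which is why multi-user VR arrived so quickly.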
vrfavs.com— some serious VR-related resources for you. Note: There are some NSFW items on there; so this is not for kids.
Together, virtual reality and augmented reality are expected to generate about $150 billion in revenue by the year 2020.
Of that staggering sum, according to data released today by Manatt Digital Media, $120 billion is likely to come from sales of augmented reality—with the lion’s share comprised of hardware, commerce, data, voice services, and film and TV projects—and $30 billion from virtual reality, mainly from games and hardware.
The report suggests that the major VR and AR areas that will be generating revenue fall into one of three categories: Content (gaming, film and TV, health care, education, and social); hardware and distribution (headsets, input devices like handheld controllers, graphics cards, video capture technologies, and online marketplaces); and software platforms and delivery services (content creation tools, capture, production, and delivery software, video game engines, analytics, file hosting and compression tools, and B2B and enterprise uses).
Talking about augmented reality technology in teaching and learning, the first thing that comes to mind is this wonderful app called Aurasma. Since its release a few years ago, Aurasma has gained so much popularity that several teachers have already embraced it within their classrooms. For those of you who are not yet familiar with how Aurasma works and how to use it in your class, this handy guide from Apple in Education is a great resource to start with.
The Oculus Touch virtual reality (VR) controllers finally have their first full videogames. A handful of titles were confirmed to support the kit back at the Oculus Connect 2 developer conference in September. But still one of the most impressive showcases of what these position-tracked devices can do exists in Oculus VR’s original tech demo, Toybox. [On 10/13/15], Oculus VR itself has released a new video that shows off what players are able to do within the software.
Much like sketching the first few lines on a blank canvas, the earliest prototyping of a VR project is an exciting time for fun and experimentation. Concepts evolve, interactions are created and discarded, and the demo begins to take shape.

Competing with other 3D Jammers around the globe, Swedish game studio Pancake Storm has shared their #3DJam progress on Twitter, with some interesting twists and turns along the way. Pancake Storm started as a secondary school project for Samuel Andresen and Gabriel Löfqvist, who want to break into the world of VR development with their project, tentatively dubbed Wheel Smith and the Willchair.
Recently I learned about a new feature called Virtual Field Trips. In a partnership with 360 Cities, NearPod now gives teachers and students the opportunity to view pristine locations like the Taj Mahal, the Golden Gate Bridge, and The Great Wall of China. You can view famous architecture, famous artifacts, and even different planets! Virtual Field Trips are a great addition to any classroom.
Western University of Health Sciences in Pomona, Calif., has opened a first-of-its-kind virtual reality learning center that’s been designed to allow students from every program—dentistry, osteopathic medicine, veterinary medicine, physical therapy, and nursing—to learn through VR.
The Virtual Reality Learning Center currently houses four different VR technologies: the two zSpace displays, the Anatomage Virtual Dissection Table, the Oculus Rift, and Stanford anatomical models on iPad.
Robert W. Hasel, D.D.S., associate dean of simulation, immersion & digital learning at Western, says VR gives anatomical science teachers the ability to view and interact with anatomy in a way never before experienced. The virtual dissection table allows students to rotate the human body in 360 degrees, take it apart, identify specific structures, study individual systems, look at multiple views at the same time, take a trip inside the body, and look at holograms.
——————————————
——————————————
Addendum on 10/20/15:
Can Virtual Reality Replace the Cadaver Lab? — from centerdigitaled.com by Justine Brown. Colleges are starting to use virtual reality platforms to augment or replace cadaver labs, saving universities hundreds of thousands of dollars.
Microsoft [on 10/6/15] announced a grip of new hardware, including Surface products, high-end smartphones, a wearable, Windows 10 for the Xbox One and its first laptop.
The laptop, called the Surface Book, stunned in an event marked largely by products Microsoft watchers were expecting.
Speaking at a Windows 10 briefing [on 10/6/15], the company confirmed that applications for a HoloLens development kit are now open. Successful applicants will be able to pick up a kit for some $3,000 USD, and it will be arriving in the first quarter of 2016.
Microsoft’s HoloLens mixed reality (MR) head-mounted display (HMD) is still a relatively new concept, having only been revealed to the public in January 2015. The device has remained shrouded in mystery since that time, making a handful of showings at events such as E3. The company has revealed a lot more about the kit [on 10/6/15], though, confirming that the anticipated Development Edition will be arriving in Q1 2016 for applicants in the USA and Canada at a price of $3,000 USD. Now a new video has been revealed giving a glimpse at the kit’s origins.
Microsoft announced the second generation of the Microsoft Band at its event [on 10/6/15], a small update adding a new, more functional design and two sensors to track elevation and VO2 monitoring.
From DSC: Imagine what learning could look like w/ the same concepts found in the Skreens Kickstarter campaign, where you can use your mobile device to direct what you are seeing and interacting with on the larger screen? Hmmm… very interesting indeed! With applications not only in the home (and on the road), but also in the active classroom, the boardroom, and the training room.
Packed house at WilmerHale for the Robot Launch 2015 judging – although 2/3rds of the participants were attending and pitching remotely via video and web conferencing.
As consumers, we tend to focus on how virtual reality will work in our homes — the new types of games it allows, the insane 360-degree cinematic experiences, etc. Some of VR’s greatest potential, though, lies not at home but in the classroom.
… Discovr Labs has built an interface and technology to help teachers use VR as a teaching tool. After the student straps on their headset, Discovr allows the teacher to select which module the student is interacting with, and to see exactly what the student sees; everything from the headset is beamed, wirelessly, to an all-seeing interface.
For now, Discovr is focusing on a local experience, with all of the students being in the same room as the teacher. Moving forward, they envision remote experiences where students and their teachers can come together in VR experiences regardless of their physical location.
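The two capabilities described above (the teacher selects the active module, and every headset mirrors its view back to a single dashboard) reduce to a simple control structure. This is an illustrative sketch only, not Discovr Labs’ actual API; all class and method names are invented:

```python
# Toy model of a teacher-controlled VR classroom: one dashboard
# assigns content to every headset and aggregates their mirrored views.

class Headset:
    def __init__(self, student):
        self.student = student
        self.module = None

    def load(self, module):
        self.module = module

    def current_frame(self):
        # stand-in for the view streamed wirelessly to the teacher
        return f"{self.student} viewing {self.module}"

class TeacherDashboard:
    def __init__(self):
        self.headsets = []

    def assign_module(self, module):
        """Teacher selects which module every student interacts with."""
        for h in self.headsets:
            h.load(module)

    def mirror_all(self):
        """The 'all-seeing interface': exactly what each student sees."""
        return [h.current_frame() for h in self.headsets]

room = TeacherDashboard()
room.headsets.append(Headset("Sam"))
room.headsets.append(Headset("Riley"))
room.assign_module("anatomy")
print(room.mirror_all())  # ['Sam viewing anatomy', 'Riley viewing anatomy']
```

The envisioned remote version mainly swaps the in-room wireless link for a networked one; the control model stays the same.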
Digital Elite has developed a new line of low-cost head-mounted augmented reality paper viewers specifically for the education market. A number of novel applications in the fields of robotics and virtual physics, and a unique book for autism, are already supported by the viewer. More apps in the pipeline are being developed to support other immersive and virtual experiences.
In a scientific study, the new viewers compared favorably against expensive VR headsets such as the Samsung/Oculus Gear VR and Zeiss VR One. The viewers were also tested in a number of real-life education scenarios and apps. One example is a virtual robot teaching physics and geography, deployed in an augmented reality (AR) application to break down the final frontier between physical robots and their virtual counterparts. Results are being published at conferences in Hong Kong today and in Korea later this month.
Addendum on 9/25/15:
Five emerging trends for innovative tech in education — from jisc.ac.uk by Matt Ramirez
No longer simply future-gazing, technologies like augmented and virtual reality (AR/VR) are becoming firmly accepted by the education sector for adding value to learning experiences.
Dazzle It is a cool new augmented reality app that lets you remix artwork from artists including Sir Peter Blake, the godfather of Pop Art, best known for designing the 1967 Beatles’ Sgt Pepper’s Lonely Hearts Club Band album cover.
Developed by digital design agency Corporation Pop, it combines the latest augmented reality techniques with design to bring history to life. And notably, unlike most augmented reality apps, you don’t need a pre-supplied marker to view what you create in a real-world scene.
A FOREST park in Angus is to host the UK’s first live theatrical performance featuring augmented reality (AR) technology.
By downloading an app, audiences will be able to spot magical creatures through their smartphones and capture them on camera, before sharing the images with friends and family on social media.
DragonQuest, which will be performed in Monikie Country Park, allows visitors to wander around a forest using their smartphone to create images of fantastical creatures in addition to real-life characters and events on the set.
Various Applications of Augmented Reality in Learning Different Subjects
Astronomy: AR can be used to help students understand the relationship between the Sun and the Earth, using 3D-rendered models of both.
Chemistry: Teachers can demonstrate what molecules and atoms consist of using AR technology.
Biology: Teachers can use augmented reality to show students body structure and anatomy. They can show different types of organs and how they look in a 3-D environment, and students can even study human anatomy on their own using devices with embedded AR technology.
Physics: Physics is one of the subjects where AR technology fits naturally; various kinematics concepts can be understood more easily with AR.
Virtual Reality (VR) may be the type of educational breakthrough that comes along once in a generation, heralding a tectonic shift toward immersive content for teaching and instruction.
By presenting a complete view of the world in which it is situated, VR offers a new opportunity to close some of the pedagogical gaps that have appeared in 21st century classroom learning. These gaps stem from the fact that curriculum and content in education have not caught up with rapid technology advancements.
Below I introduce three of these gaps and how they might be addressed by virtual reality content soon to be produced and distributed commercially.
The Lawrence school district recently purchased 20 Google Cardboards, which beginning this school year are available for teachers to check out for use in their classrooms, said Joe Smysor, the district’s technology integration specialist. Cardboard works in conjunction with a smartphone app to deliver a 3-D, 360-degree navigable image. Students can use apps with Cardboard to virtually visit museums, landmarks or cities around the world.
“It’s going to allow teachers to take their class on field trips where school buses couldn’t otherwise go,” Smysor said. “That could be back 100 years in the past, or underwater.”
While college students are settling into their dorms, it’s already time for next year’s class of high school students to narrow down their potential school choices and schedule campus visits. Or maybe they can just stay home and start the journey virtually.
A site called YouVisit has a surprisingly large set of virtual-reality college tours available. All the major Texas colleges are represented, and one of them, Trinity University, has been making a big push to get cheap sets of cardboard VR goggles out to families at recruiting events such as college fairs. Trinity sent me a pair of the cardboard glasses. The virtual visit to the campus certainly wasn’t the same as being there, but to get at least a visual sense of what the campus looks like and to be generally wowed by the 3-D/360-degree effect, it was worth the trip.
FARMINGTON, Conn. & DENVER–(BUSINESS WIRE)–Regis University today unveiled a unique new way for prospective students to tour and experience the school’s scenic 100-acre campus. Through an interactive, immersive experience created by independent agency Primacy, students are able to put on an Oculus Rift virtual reality (VR) headset and immediately be transported to the campus, where they can get a full, 360-degree tour as if they were on site — including viewing daybreak runs at Red Rocks, being immersed in Regis’ experiential nursing skills lab and visiting the campus pub to watch a live Jenga game.
Odyssey is the first camera rig built specifically for Google’s Jump platform, which was also announced at this year’s I/O conference. Jump is an entire virtual reality ecosystem that, in theory, will make it easier to both create and consume VR content. With Jump, Google created open plans that companies can use to build their own 16-camera rig (GoPro just happened to be the first), as well as assembler software that can recreate the captured scene in much higher quality than most existing image-stitching software can. Eventually, Jump videos will be hosted on YouTube; think of it as the next logical step following YouTube’s inclusion of 360-degree videos earlier this year.
Are you a classical music fan? It’s a question most people would probably say no to, and the Los Angeles Philharmonic knows that.
“People are intimidated by classical music,” said Amy Seidenwurm, the Philharmonic’s director of digital initiatives. “They don’t come to concerts because they feel it might not be for them.”
But to change those minds, the LA Phil is turning to virtual reality. For the next month, it will be driving around the Los Angeles area to parks, festivals and museums, in a van outfitted with real carpeting and seats from the Walt Disney Concert Hall — and six Samsung Gear VR headsets, which have been loaded with a special video performance of Beethoven’s Fifth Symphony. (You know the one: Dun-dun-dun DUNNNN.)