High-engagement schools start from a different conception: knowledge co-creation and active production. They design a very different learner experience and support it with a student-centered culture and opportunities to improve self-regulation, initiative and persistence, all key to self-directed learning.
… Why Does Self-Direction Matter?
Growth of the freelance and gig economy makes self-direction an imperative. But it’s also increasingly important inside organizations. David Rattray of the LA Chamber said, “Employees need to change their disposition toward employers away from working for someone else to an attitude of working for myself; agency, self-discipline, initiative, and risk-taking are all important on the job.”
Many adults work in roles where they need to be more independent and to manage their own time efficiently, often across a series of projects. Employers are looking for candidates who can, on their own, identify a driving question, assemble the team they need to answer it, work effectively with that team, execute and manage the project through multiple iterations with lots of feedback, and then reflect on and evaluate their work. Students should be developing self-direction by learning in the same way.
… Where to Start
If you’re a teacher or parent, you can ask good questions rather than provide simple answers; you can help students use a to-do list, develop a personal learning plan and a portfolio of their best work.
If you’re a principal, you can propose an advisory period to promote self-direction and other success skills. You can make time in the schedule for more self-directed work. For example, Singapore American School added a makerspace with a genius hour and independent study courses to encourage students to pursue self-directed learning.
From DSC: In the future, I’d like to see holograms provide stunning visual centerpieces for the entryways into libraries, or in our classrooms, our art galleries, recital halls, and more. The object(s), person(s), or scene(s) could change into something else, providing a visually engaging experience that sets a tone for that space, time, and/or event.
Eventually, perhaps these types of technologies/setups will even be a way to display artwork within our homes and apartments.
From an email from Elliott Masie and the Masie Center:
This 35-page eBook is packed with content, context, conversations, video links, and curated resources that include:
Learning Perspectives from Anderson Cooper, Scott Kelly, Tiffany Shlain, George Takei, Richard Culatta, Karl Kapp, Nancy DeViney, and other Learning 2016 Keynotes
Graphic Illustrations from Deirdre Crowley, Crowley & Co.
Video Links for Content Segments
Learning Perspectives from Elliott Masie
Segments focusing on:
Brain & Cognitive Science
Gamification & Gaming
Micro-Learning
Visual Storytelling
Connected & Flipped Classrooms
Compliance & Learning
Engagement in Virtual Learning
Video & Learning
Virtual Reality & Learning
And much more!
We have created this as an open source, shareable resource that will extend the learning from Learning 2016 to our colleagues around the world. We are using the Open Creative Commons license, so feel free to share!
We believe that CURATION, focusing on extending and organizing follow-up content, is a growing and critical dimension of any learning event. We hope that you find your eBook of value!
The headlines for Pokémon GO were initially shocking, but by now they’re familiar: as many as 21 million active daily users, 700,000 downloads per day, $5.7 million in in-app purchases per day, $200 million earned as of August. Analysts anticipate the game will garner several billion dollars in ad revenue over the next year. By almost any measure, Pokémon GO is huge.
The technologies behind the game, augmented and virtual reality (AVR), are huge too. Many financial analysts expect the technology to generate $150 billion over the next three years, outpacing even smartphones with unprecedented growth, much of it in entertainment. But AVR is not only about entertainment. In August 2015, Teegan Lexcen was born in Florida with only half a heart and needed surgery. With current cardiac imaging software insufficient to assist with such a delicate operation on an infant, surgeons at Nicklaus Children’s Hospital in Miami turned to 3D imaging software and a $20 Google Cardboard VR set. They used a cellphone to peer into the baby’s heart, saw exactly how to improve her situation and performed the successful surgery in December 2015.
“I could see the whole heart. I could see the chest wall,” Dr. Redmond Burke told Today. “I could see all the things I was worried about in creating an operation.”
Texas Tech University Health Sciences Center in Lubbock and San Diego State University are both part of a Pearson pilot aimed at leveraging mixed reality to solve challenges in nursing education.
…
At Bryn Mawr College, a women’s liberal arts college in Pennsylvania, faculty, students, and staff are exploring various educational applications for the HoloLens mixed reality devices. They are testing Skype for HoloLens to connect students with tutors in Pearson’s 24/7 online tutoring service, Smarthinking.
…
At Canberra Grammar School in Australia, Pearson is working with teachers in a variety of disciplines to develop holograms for use in their classrooms. The University of Canberra is partnering with Pearson to provide support for the project and evaluate the impact these holograms have on teaching and learning.
As fantastic as technologies like augmented and mixed reality may be, experiencing them, much less creating them, requires a sizable financial investment, putting them just beyond the reach of consumers as well as the garage-type indie developer. AR and VR startup Zappar, however, wants to smash that perception. With ZapBox, you can grab a kit for less than a triple-A video game to start your journey towards mixed reality fun and fame. It’s Magic Leap meets Google Cardboard. Or as Zappar itself says, making Magic Leap, magic cheap!
From DSC: If we had more beacons on our campus (a Christian liberal arts college), I could see where we could offer a variety of things in new ways:
For example, we might use beacons around the main walkways of our campus where, when we approach these beacons, pieces of advice or teaching could appear on an app on our mobile devices. Examples could include:
Micro-tips on prayer from John Calvin, Martin Luther, or Augustine (i.e., 1 or 2 small tips at a time; could change every day or every week)
Or, for a current, campus-wide Bible study, the app could show a question for that week’s study; you could reflect on that question as you’re walking around
Or, for musical events…when one walks by the Covenant Fine Arts Center, one could get that week’s schedule of performances or what’s currently showing in the Art Gallery
Pieces of scripture, with links to Biblegateway.com or other sites
Further information re: what’s being displayed on posters within the hallways — works that might be done by faculty members and/or by students
Etc.
A person could turn the app’s notifications on or off at any time. The app would encourage greater exercise; i.e., the more you walk around, the more tips you get.
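The core logic sketched above is simple: each beacon ID maps to a rotating set of tips, and the app only surfaces a tip when notifications are on. Here is a minimal, hypothetical sketch in Python; the beacon IDs, tips, and daily-rotation scheme are all invented for illustration:

```python
# Hypothetical sketch of a beacon-to-tip lookup. Beacon IDs, tips, and the
# rotation scheme are invented for illustration only.

CAMPUS_BEACONS = {
    "chapel-walkway": [
        "Calvin: 'Prayer is the chief exercise of faith.'",
        "Luther: 'To pray well is the better half of study.'",
    ],
    "fine-arts-center": [
        "This week: senior recital, Friday 7:30 pm.",
        "Now in the Art Gallery: student printmaking exhibit.",
    ],
}

def tip_for_beacon(beacon_id, day_of_year, notifications_on=True):
    """Return the tip to display for a detected beacon, rotating daily.

    Returns None when notifications are off or the beacon is unknown.
    """
    if not notifications_on:
        return None
    tips = CAMPUS_BEACONS.get(beacon_id)
    if not tips:
        return None
    # Rotate through the beacon's tips so the message changes each day.
    return tips[day_of_year % len(tips)]
```

A real app would get the beacon ID from the phone’s Bluetooth ranging callbacks; the lookup-and-rotate step itself stays this small.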
A school bus, virtual reality, & an out-of-this-world journey — from goodmenproject.com “Field Trip To Mars” is the barrier-shattering outcome of an ambitious mission to give a busload of people the same virtual reality experience – going to Mars.
Excerpt:
Inspiration was Lockheed‘s goal when it asked its creative resources, led by McCann, to create the world’s first mobile group Virtual Reality experience. As one creator notes, VR now is essentially a private, isolating experience. But wouldn’t it be cool to give a busload of people the same, simultaneous VR experience? And then – just to make it really challenging – put the whole thing on wheels?
“Field Trip To Mars” is the barrier-shattering outcome of this ambitious mission.
From DSC: This is incredible! Very well done. The visual experience tracks the speed of the bus and even its turns.
The United States Department of Education (ED) has formally kicked off a new competition designed to encourage the development of virtual and augmented reality concepts for education.
Dubbed the EdSim Challenge, the competition is aimed squarely at developing students’ career and technical skills — it’s funded through the Carl D. Perkins Career and Technical Education Act of 2006 — and calls on developers and ed tech organizations to develop concepts for “computer-generated virtual and augmented reality educational experiences that combine existing and future technologies with skill-building content and assessment. Collaboration is encouraged among the developer community to make aspects of simulations available through open source licenses and low-cost shareable components. ED is most interested in simulations that pair the engagement of commercial games with educational content that transfers academic, technical, and employability skills.”
Virtual reality boosts students’ results— from raconteur.net by Edwin Smith Virtual and augmented reality can enable teaching and training in situations which would otherwise be too hazardous, costly or even impossible in the real world
Excerpt:
More recently, though, the concept described in Aristotle’s Nicomachean Ethics has been bolstered by further scientific evidence. Last year, a University of Chicago study found that students who physically experience scientific concepts, such as the angular momentum acting on a bicycle wheel spinning on an axle that they’re holding, understand them more deeply and also achieve significantly improved scores in tests.
Virtual and augmented reality are shaking up sectors — from raconteur.net by Sophie Charara Both virtual and augmented reality have huge potential to leap from visual entertainment to transform the industrial and service sectors
Microsoft might not have envisioned its HoloLens headset as a war helmet, but that’s not stopping Ukrainian company LimpidArmor from experimenting. Defence Blog reports that LimpidArmor has started testing military equipment that includes a helmet with Microsoft’s HoloLens headset integrated into it.
The helmet is designed for tank commanders to use alongside a Circular Review System (CRS) of cameras located on the sides of armored vehicles. Microsoft’s HoloLens gathers feeds from the cameras outside to display them in the headset as a full 360-degree view. The system even includes automatic target tracking, and the ability to highlight enemy and allied soldiers and positions.
Bring your VR to work — from itproportal.com by Timo Elliott, Josh Waddell With all the hype, there’s surprisingly little discussion of the latent business value which VR and AR offer.
Excerpt:
With all the hype, there’s surprisingly little discussion of the latent business value which VR and AR offer — and that’s a blind spot that companies and CIOs can’t afford to have. It hasn’t been that long since consumer demand for the iPhone and iPad forced companies, grumbling all the way, into finding business cases for them. Gartner has said that the next five to ten years will bring “transparently immersive experiences” to the workplace. They believe this will introduce “more transparency between people, businesses, and things” and help make technology “more adaptive, contextual, and fluid.”
If digitally enhanced reality generates even half as much consumer enthusiasm as smartphones and tablets, you can expect to see a new wave of consumerisation of IT as employees who have embraced VR and AR at home insist on bringing it to the workplace. This wave of consumerisation could have an even greater impact than the last one. Rather than risk being blindsided for a second time, organisations would be well advised to take a proactive approach and be ready with potential business uses for VR and AR technologies by the time they invade the enterprise.
In Gartner’s latest emerging technologies hype cycle, Virtual Reality is already on the Slope of Enlightenment, with Augmented Reality following closely.
One place where students are literally immersed in VR is at Carnegie Mellon University’s Entertainment Technology Center (ETC). ETC offers a two-year Master of Entertainment Technology program (MET) launched in 1998 and cofounded by the late Randy Pausch, author of “The Last Lecture.”
MET starts with an intense boot camp called the “immersion semester,” in which students take a Building Virtual Worlds (BVW) course and a leadership course, along with courses in improvisational acting and visual storytelling. Pioneered by Pausch, BVW challenges students in small teams to create virtual reality worlds quickly over a period of two weeks, culminating in a presentation festival every December.
Apple patents augmented reality mapping system for iPhone — from appleinsider.com by Mikey Campbell Apple on Tuesday was granted a patent detailing an augmented reality mapping system that harnesses iPhone hardware to overlay visual enhancements onto live video, lending credence to recent rumors suggesting the company plans to implement an iOS-based AR strategy in the near future.
Virtual reality is a combination of life-like images, effects and sounds that creates an imaginary world in front of our eyes.
But what if we could also imitate more complex sensations like the feeling of falling rain, a beating heart or a cat walking? What if we could distinguish between a light sprinkle and a heavy downpour in a virtual experience?
Disney Research, a network of research laboratories supporting The Walt Disney Company, has announced the development of a 360-degree virtual reality application offering a library of feel effects and full-body sensations.
Literature class meets virtual reality — from blog.cospaces.io by Susanne Krause Not every student finds it easy to let a novel come to life in their imagination. Could virtual reality help? Tiffany Capers gave it a try: She let her 7th graders build settings from Lois Lowry’s “The Giver” with CoSpaces and explore them in virtual reality. And: they loved it.
After generations of peering into a microscope to examine cells, scientists could simply stroll straight through one.
Calling his project the “stuff of science fiction,” director of the 3D Visualisation Aesthetics Lab at the University of New South Wales (UNSW) John McGhee is letting people come face-to-face with a breast cancer cell.
In contrast, VR has been described as the “ultimate empathy machine.” It gives us a way to virtually put us in someone else’s shoes and experience the world the way they do.
This isn’t meant to be an exhaustive list, but if I missed something major, please tell me and I’ll add it. Also, please reach out if you’re working on anything cool in this space at sarah(at)accomplice(dot)co.
Hand and finger tracking, gesture interfaces, and grip simulation:
From DSC: Consider the affordances that we will soon be experiencing when we combine machine learning — whereby computers “learn” about a variety of things — with new forms of Human Computer Interaction (HCI), such as Augmented Reality (AR)!
The educational benefits — as well as the business/profit-related benefits will certainly be significant!
For example, let’s create a new mobile app called “Horticultural App (ML)” — where ML stands for machine learning. This app would be made available on iOS and Android-based devices. (Though this is strictly hypothetical, I hope and pray that some entrepreneurial individuals and/or organizations out there will take this idea and run with it!)
Some use cases for such an app:
Students, environmentalists, and lifelong learners will be able to take some serious educationally-related nature walks once they launch the Horticultural App (ML) on their smartphones and tablets!
They simply hold up their device, and the app — in conjunction with the device’s camera — will essentially take a picture of whatever the student is focusing in on. Via machine learning, the app will “recognize” the plant, tree, type of grass, flower, etc. — and will then present information about that plant, tree, type of grass, flower, etc.
In the production version of this app, a textual layer could overlay the actual image of the tree/plant/flower/grass/etc. in the background — and this is where augmented reality comes into play. Also, perhaps there would be an opacity setting that would be user controlled — allowing the learner to fade in or fade out the information about the flower, tree, plant, etc.
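The recognize-then-overlay flow described above can be sketched roughly as follows. This is only an illustration of the idea: the classifier below is a stub standing in for a real trained image-recognition model, and the plant database, labels, and function names are all invented:

```python
# Sketch of the "point camera, recognize plant, overlay info" idea.
# classify_image() is a stub; a production app would run a trained
# image-recognition model (on-device or in the cloud). All labels and
# plant facts here are invented for illustration.

PLANT_INFO = {
    "eastern_poison_ivy": "Toxic to touch; leaves of three, let it be.",
    "sugar_maple": "Deciduous tree; brilliant fall color; source of maple syrup.",
}

def classify_image(image_bytes):
    """Stand-in for a machine-learning classifier.

    A real app would return the model's predicted label plus a
    confidence score for the camera frame it was given.
    """
    # Stubbed: pretend the model recognized poison ivy with high confidence.
    return "eastern_poison_ivy", 0.93

def identify_plant(image_bytes, min_confidence=0.8):
    """Return the overlay text for a camera frame, or None if unsure."""
    label, confidence = classify_image(image_bytes)
    if confidence < min_confidence:
        return None  # Better to show nothing than a wrong identification.
    return PLANT_INFO.get(label, "No information available for this plant.")
```

The confidence threshold matters for the poison ivy use case below: an app that misidentifies a toxic plant is worse than one that admits uncertainty.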
Or let’s look at the potential uses of this type of app from some different angles.
Let’s say you live in Michigan and you want to be sure an area of the park that you are in doesn’t have any Eastern Poison Ivy in it — so you launch the app and review any suspicious-looking plants. As it turns out, the app identifies some Eastern Poison Ivy for you (and it could do this regardless of which season we’re talking about, as the app would be able to ascertain the current date and the current GPS coordinates of the person’s location as well, taking those criteria into account).
Or consider another use of such an app:
A homeowner who wants to get rid of a certain kind of weed. The homeowner goes out into her yard and “scans” the weed, and up pop some products at the local Lowe’s or Home Depot that get rid of that kind of weed.
Assuming you allowed the app to do so, it could launch a relevant chatbot that could be used to answer any questions about the application of the weed-killing product that you might have.
Or consider another use of such an app:
A homeowner has a diseased tree, and they want to know what to do about it. The machine learning portion of the app could identify what the disease was and bring up information on how to eradicate it.
Again, if permitted to do so, a relevant chatbot could be launched to address any questions that you might have about the available treatment options for that particular tree/disease.
Or consider other/similar apps along these lines:
Skin ML (for detecting any issues re: acne, skin cancers, etc.)
Minerals and Stones ML (for identifying which mineral or stone you’re looking at)
Fish ML
Etc.
So there will be many new possibilities that will be coming soon to education, businesses, homeowners, and many others to be sure! The combination of machine learning with AR will open many new doors.
Today at its October hardware/software/everything event, Google showed off its latest VR initiatives, including a Daydream headset. The $79 Daydream View VR headset looks quite a bit different than other headsets on the market with its fabric exterior.
Clay Bavor, head of VR, said the design is meant to be more comfortable and friendly. It’s unclear whether the cloth aesthetic is a recommendation for the headset reference design as Xiaomi’s Daydream headset is similarly soft and decidedly design-centric.
The headset and the Google Daydream platform will launch in November.
While the event is positioned as hardware first, this is Google we’re talking about here, and as such, the real focus is software. The company led the event with talk about its forthcoming Google Assistant AI, and the Pixel will be the first handset to ship with the friendly voice helper. As the company puts it, “we’re building hardware with the Google Assistant at its core.”
Google Home, the company’s answer to Amazon’s Echo, made its official debut at the Google I/O developer conference earlier this year. Since then, we’ve heard very little about Google’s voice-activated personal assistant. Today, at Google’s annual hardware event, the company finally provided us with more details.
Google Home will cost $129 (with a free six-month trial of YouTube red) and go on sale on Google’s online store today. It will ship on November 4.
Google’s Mario Queiroz today argued that our homes are different from other environments. So like the Echo, Google Home combines a wireless speaker with a set of microphones that listen for your voice commands. There is a mute button on the Home and four LEDs on top of the device so you know when it’s listening to you; otherwise, you won’t find any other physical buttons on it.
Google’s #madebygoogle press conference today revealed some significant details about the company’s forthcoming plans for virtual reality (VR). Daydream is set to launch later this year, and along with the reveal of the first ‘Daydream Ready’ smartphone handset, Pixel, and Google’s own version of the head-mounted display (HMD), Daydream View, the company revealed some of the partners that will be bringing content to the device.
You can add to the seemingly never-ending list of things that Google is deeply involved in: hardware production.
On Tuesday, Google made clear that hardware is more than just a side business, aggressively expanding its offerings across a number of different categories. Headlined by the much-anticipated Google Home and a lineup of smartphones, dubbed Pixel, the announcements mark a major shift in Google’s approach to supplementing its massively profitable advertising sales business and extensive history in software development.
…
Aimed squarely at Amazon’s Echo, Home is powered by more than 70 billion facts collected by Google’s knowledge graph, the company says. By saying “OK, Google,” Home quickly pulls information from other websites, such as Wikipedia, and gives contextualized answers akin to searching Google manually and clicking on a couple of links. Of course, Home is integrated with Google’s other devices, so items added to your shopping list, for example, are easily pulled up via Pixel. Home can also be programmed to read back information in your calendar, traffic updates and the weather. “If the president can get a daily briefing, why shouldn’t you?” Google’s Rishi Chandra asked when he introduced Home on Tuesday.
A comment from DSC: More and more, people are speaking to a device and expecting that device to do something for them. How much longer, especially with the advent of chatbots, before people expect this of learning-related applications?
Natural language processing, cognitive computing, and artificial intelligence continue their march forward.
From DSC: I have attended the Next Generation Learning Spaces Conference for the past two years. Both conferences were very solid and they made a significant impact on our campus, as they provided the knowledge, research, data, ideas, contacts, and the catalyst for us to move forward with building a Sandbox Classroom on campus. This new, collaborative space allows us to experiment with different pedagogies as well as technologies. As such, we’ve been able to experiment much more with active learning-based methods of teaching and learning. We’re still in Phase I of this new space, and we’re learning new things all of the time.
For the upcoming conference in February, I will be moderating a New Directions in Learning panel on the use of augmented reality (AR), virtual reality (VR), and mixed reality (MR). Time permitting, I hope that we can also address other promising, emerging technologies that are heading our way such as chatbots, personal assistants, artificial intelligence, the Internet of Things, tvOS, blockchain and more.
The goal of this quickly-moving, engaging session will be to provide a smorgasbord of ideas to generate creative, innovative, and big thinking. We need to think about how these topics, trends, and technologies relate to what our next generation learning environments might look like in the near future — and put these things on our radars if they aren’t already there.
Key takeaways for the panel discussion:
Reflections regarding the affordances that new developments in Human Computer Interaction (HCI) — such as AR, VR, and MR — might offer for our learning and our learning spaces (or is our concept of what constitutes a learning space about to significantly expand?)
An update on the state of the approaching ed tech landscape
Creative, new thinking: What might our next generation learning environments look like in 5-10 years?
I’m looking forward to catching up with friends, meeting new people, and to the solid learning that I know will happen at this conference. I encourage you to check out the conference and register soon to take advantage of the early bird discounts.
Virtual reality technology holds enormous potential to change the future for a number of fields, from medicine and business to architecture and manufacturing.
Psychologists and other medical professionals are using VR to heighten traditional therapy methods and find effective solutions for treatments of PTSD, anxiety and social disorders. Doctors are employing VR to train medical students in surgery, treat patients’ pains and even help paraplegics regain body functions.
In business, a variety of industries are benefiting from VR. Carmakers are creating safer vehicles, architects are constructing stronger buildings and even travel agencies are using it to simplify vacation planning.
Google has unveiled a new interactive online exhibit that takes users on a tour of 10 Downing Street in London — home of the U.K. Prime Minister.
The building has served as home to countless British political leaders, from Winston Churchill and Margaret Thatcher through to Tony Blair and — as of a few months ago — Theresa May. But, as you’d expect in today’s security-conscious age, gaining access to the residence isn’t easy; the street itself is gated off from the public. This is why the 10 Downing Street exhibit may capture the imagination of politics aficionados and history buffs from around the world.
The tour features 360-degree views of the various rooms, punctuated by photos and audio and video clips.
In a slightly more grounded environment, the HoloLens is being used to assist technicians in elevator repairs.
Traversal via elevator is such a regular part of our lives that its importance is rarely recognized…until elevators stop working as they should. ThyssenKrupp AG, one of the largest suppliers of elevators, recognizes how essential they are, as well as how the simplest malfunctions can disrupt the lives of millions. As announced on its blog, Microsoft is partnering with ThyssenKrupp to equip 24,000 of its technicians with HoloLens.
Insert from DSC re: the above piece re: HoloLens:
Will technical communicators need to augment their skillsets? It appears so.
But in a world where no moment is too small to record with a mobile sensor, and one in which time spent in virtual reality keeps going up, interesting parallels start to emerge with our smartphones and headsets.
Let’s look at how the future could play out in the real world by observing three key drivers: VR video adoption, mobile-video user needs and the smartphone camera rising tide.
“Individuals with autism may become overwhelmed and anxious in social situations,” research clinician Dr Nyaz Didehbani said.
“The virtual reality training platform creates a safe place for participants to practice social situations without the intense fear of consequence,” said Didehbani.
The participants who completed the training demonstrated improved social cognition skills and reported better relationships, researchers said.
From DSC: Interactive video is a potentially very powerful medium to use, especially for blended and online-based courses or training-related materials! This interactive piece from Heineken is very well done; it even remembers how you answered and comes up with an evaluation of you from its 12-question “interview.”
But notice again, a TEAM of specialists is needed to create such a piece. Neither a faculty member, a trainer, nor an instructional designer can do something like this all on their own. Some of the positions I could imagine here are:
Script writer(s)
Editor(s)
Actors and actresses
Those skilled in stage lighting and sound / audio recording
Digital video editors
Programmers
Graphic designers
Web designers
Producers
Product marketers
…and perhaps others
This is the kind of work that I wish we saw more of in the world of online and blended courses! Also, I appreciated their use of humor. Overall, a very engaging, fun, and informative piece!
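One reason such pieces take a team is that even the underlying answer-tracking logic has to be designed, scripted, and tested before any video is shot. A toy sketch of how an interactive “interview” might record answers and produce an evaluation at the end; the questions, choices, and scoring rule here are all invented, not Heineken’s:

```python
# Toy sketch of the answer-tracking logic behind an interactive video
# "interview." Questions, choices, and the scoring rule are invented.

QUESTIONS = [
    ("A colleague takes credit for your work. You:", ["confront", "let_it_go"]),
    ("The party is winding down. You:", ["rally_the_group", "head_home"]),
]

BOLD_CHOICES = {"confront", "rally_the_group"}

def run_interview(answers):
    """Record each answer against its question, then 'evaluate' the viewer."""
    transcript = []
    for (question, choices), answer in zip(QUESTIONS, answers):
        if answer not in choices:
            raise ValueError(f"{answer!r} is not a choice for: {question}")
        transcript.append((question, answer))
    # The "evaluation" is just a tally of how many bold choices were made.
    bold = sum(1 for _, a in transcript if a in BOLD_CHOICES)
    verdict = "go-getter" if bold >= len(transcript) / 2 else "easygoing"
    return transcript, verdict
```

In a production piece, each answer would also select the next video segment to play; the remembered transcript is what makes the closing evaluation feel personal.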
Virtual reality: The hype, the problems and the promise — from bbc.com by Tim Maughan It’s the technology that is supposed to be 2016’s big thing, but what iteration of VR will actually catch on, and what’s just a fad? Tim Maughan takes an in-depth look.
Excerpt:
For Zec this is one of VR’s most promising potentials – to be able to drop audiences into a situation and force them to react emotionally, in ways that traditional filmmaking or journalism might struggle to do. “We really cannot understand what the people [in Syria and other places] right now are going through, so I thought maybe if we put the viewer inside the shoes of the family, or near them, maybe they can feel more and understand more rather than just reading a headline in a newspaper.”
The aim of Blackout is to challenge assumptions New Yorkers might have about the people around them, by allowing them to tap directly into their thoughts. “You’re given the ability to pick into people’s minds and their motives,” says co-creator Alex Porter. “Through that process you start to realise the ways in which you were wrong about all the people around you, and start to find these kind of exciting stories that they have to tell.”
From DSC: Virtual Reality could have a significant impact in diversity training. (I don’t like the word diversity too much; as in my experience, everybody in the Fortune 500 companies where I worked belonged in the realm of diversity except for Christians, but I’ll use it here anyway.)
The point is…when you can put yourself into someone else’s shoes, it could have some positive impact in terms of better being able to relate to what that person is going through.
What if there was a new way to start this journey? What if you walked into the room and boarded a starship instead? What would a school experience be like if we sent our students on a mission, joining a global team to learn and solve our world’s most pressing problems? What if they met in Virtual Reality? For example, literally experiencing the streets of Paris if they were studying French culture or urban planning. Examining first hand the geology of volcanoes or building the next generation transportation? What would happen if they are given a problem they could not answer on their own, a problem that requires collaboration and teamwork with colleagues to find a solution?
Here is how VR and AI can empower the future of learning. The Star Trek: Bridge Crew VR Game gives us a glimpse of how we can engage with our students. Or, as Levar Burton (Geordi La Forge from Engineering) in the video trailer puts it:
There is something different being in a shared virtual environment . . . The team does not succeed unless everybody does their job well.
…
In the true spirit of Star Trek, it is through cooperation rather than competition that we learn best. In VR, you can sit in any of the crew chairs and be the captain, engineer, or doctor and experience events from very different points of view. In Star Trek: Bridge Crew, you are flying the ship but have to work collaboratively with your crew to reach goals and accomplish the mission, as this is virtual reality as a social experience. It demands that you be fully engaged.
Think “virtual reality,” and it’s probably video gaming that comes to mind. CSU is looking to expand the breadth and depth of this emerging technical field with a campus-wide Virtual Reality Initiative, launching this semester.
The initiative will give students and the science community hands-on experience with virtual reality, for research and educational applications.
Virtual reality (VR) is a way of experiencing virtual worlds or objects – the cockpit of a spaceship, an anatomy lesson, a walk through a historical building – through devices like computers, goggles or headsets designed to immerse someone in a simulated environment. VR touches fields ranging from design to art to engineering.
In case you haven’t heard, there is a lot of hype right now about virtual and augmented reality. Three months into 2016, investors have already spent $1.1 billion to get a piece of the action.
Still, I am confident that virtual reality will revolutionize how we learn, and the reason is simple. Virtual reality is not just a technology, it’s a medium. And I’ve seen how powerful that medium can be.
A Chinese surgeon has discovered a practical application for augmented reality in the medical field. Using the same technology by which a Pokemon character is layered onto a real-life setting, two surgical images can be combined into a single view, eliminating the need for surgeons to watch two separate screens simultaneously.
Catherine Chan Po-ling, a surgeon in Hong Kong and co-founder of MedEXO Robotics, says that the use of augmented reality technology in keyhole, or minimally invasive, surgery can solve one of the biggest problems for surgeons performing these procedures.
Currently, surgeons in keyhole procedures must create and view two images simultaneously. In Chan’s example, when checking for cancerous cells in the liver, the surgeon operates a regular camera showing a view of the surface of the liver, and at the same time operates an ultrasound probe to check beneath the surface of the liver.
Given the explosion of interest in virtual reality among media organizations, we sought in January to establish best practices and ideal scenarios for using the technology in storytelling through our inaugural immersive journalism class at Stanford University.
During the 10-week course, 12 undergraduate and graduate students evaluated a range of virtual reality experiences published by the New York Times, Wall Street Journal, ABC News and others. We compared commercially available virtual reality headsets (Google Cardboard, HTC Vive, Samsung Gear/VR and Oculus Rift) for ease and quality as well as virtual reality cameras — the (more expensive but expansive) GoPro and the (more affordable) Ricoh Theta S.
Virtual reality used to be the stuff of science fiction books and movies. Now, it’s inexpensive, works with the technology we carry in our pockets, and can transport us to real and imaginary places.
The first truly awesome chatbot is a talking T. Rex — from fastcodesign.com by John Brownlee National Geographic uses a virtual Tyrannosaur to teach kids about dinosaurs—and succeeds where other chatbots fail.
Excerpt:
As some have declared chatbots to be the “next webpage,” brands have scrambled to develop their own talkative bots, letting you do everything from order a pizza to rewrite your resume. The truth is, though, that a lot of these chatbots are actually quite stupid, and tend to have a hard time understanding natural human language. Sooner or later, users get frustrated bashing their heads up against the wall of a dim-witted bot’s AI.
So how do you design around a chatbot’s walnut-sized brain? If you’re National Geographic Kids UK, you set your chatbot to the task of pretending to be a Tyrannosaurus rex, a Cretaceous-era apex predator that really had a walnut-sized brain (at least comparatively speaking).
She’s called Tina the T. rex, and by making it fun to learn about dinosaurs, she suggests that education — rather than advertising or shopping — might be the real calling of chatbots.
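Part of what makes Tina work is a design choice any bot builder can borrow: constrain the persona tightly enough that simple keyword matching plus an in-character fallback feels natural rather than broken. A toy sketch of that pattern; all responses below are invented for illustration, not National Geographic’s actual content:

```python
# Toy sketch of a persona-constrained chatbot: match keywords instead of
# parsing free-form language, and stay in character when stumped.
# All responses are invented for illustration.

RESPONSES = {
    "eat": "I hunted big plant-eaters like Triceratops. ROAR!",
    "teeth": "My teeth were the size of bananas, built for crushing bone.",
    "big": "I was about 12 metres long, nose to tail!",
}

# Staying in character hides the bot's limits: a dinosaur that doesn't get
# your question is charming; a generic "Sorry, I didn't understand" is not.
FALLBACK = "Hmm, my walnut-sized brain can't make sense of that. Ask me about my teeth!"

def reply(message):
    """Return the first keyword-matched response, or the in-character fallback."""
    words = {w.strip(".,!?") for w in message.lower().split()}
    for keyword, response in RESPONSES.items():
        if keyword in words:
            return response
    return FALLBACK
```

The narrow persona does the heavy lifting: users naturally ask dinosaur questions, so a small keyword table covers most of the conversation.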