This emphasis on learning new skills in the age of AI is reinforced by the most recent report on the future of work from McKinsey, which suggests that as many as 375 million workers around the world may need to switch occupational categories and learn new skills, because approximately 60% of jobs have at least one-third of their work activities that could be automated.
Go scan the job openings and you will likely see many that have to do with technology, and increasingly, with emerging technologies such as artificial intelligence, deep learning, machine learning, virtual reality, augmented reality, mixed reality, big data, cloud-based services, robotics, automation, bots, algorithm development, blockchain, and more.
How many of us have those kinds of skills? Did we get that training in the community colleges, colleges, and universities that we went to? Highly unlikely — even if you graduated from one of those institutions only 5-10 years ago. Many of those institutions are moving at the pace of a nice leisurely walk, some are moving at a jog, and even fewer are sprinting. But all of them are now being asked to enter a race where the pace is 180 mph. Higher ed — and society at large — are not used to moving at this speed.
This is why I think that higher education and its regional accrediting organizations are going to either need to up their game hugely — going through a paradigm shift in their thinking, programming, curricula, and level of responsiveness — or watch while alternatives to traditional institutions of higher education increasingly lure learners away.
This is also why I think we’ll see an online-based, next-generation learning platform emerge. It will be much more nimble — able to offer up-to-the-minute, in-demand skills and competencies.
* Three New HR Roles To Create Compelling Employee Experiences
These new HR roles include:
IBM: Vice President, Data, AI & Offering Strategy, HR
Kraft Heinz: Senior Vice President, Global HR, Performance and IT
SunTrust: Senior Vice President, Employee Wellbeing & Benefits
What do these three roles have in common? All have been created in the last three years and acknowledge the growing importance of a company’s commitment to create a compelling employee experience by using data, research, and predictive analytics to better serve the needs of employees. In each case, the employee assuming the new role also brought a new set of skills and capabilities into HR. And importantly, the new roles created in HR address a common vision: create a compelling employee experience that mirrors a company’s customer experience.
An excerpt from McKinsey Global Institute | Notes from the Frontier | Modeling the Impact of AI on the World Economy
Workers.
A widening gap may also unfold at the level of individual workers. Demand for jobs could shift away from repetitive tasks toward those that are socially and cognitively driven and others that involve activities that are hard to automate and require more digital skills.[12] Job profiles characterized by repetitive tasks and activities that require low digital skills may experience the largest decline as a share of total employment, from some 40 percent to near 30 percent by 2030. The largest gain in share may be in nonrepetitive activities and those that require high digital skills, rising from some 40 percent to more than 50 percent. These shifts in employment would have an impact on wages. We simulate that around 13 percent of the total wage bill could shift to categories requiring nonrepetitive and high digital skills, where incomes could rise, while workers in the repetitive and low digital skills categories may potentially experience stagnation or even a cut in their wages. The share of the total wage bill of the latter group could decline from 33 to 20 percent.[13] Direct consequences of this widening gap in employment and wages would be an intensifying war for people, particularly those skilled in developing and utilizing AI tools, and structural excess supply for a still relatively high portion of people lacking the digital and cognitive skills necessary to work with machines.
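From DSC: A quick back-of-the-envelope check on those figures (my own arithmetic, not McKinsey's) shows how the numbers hang together:

```latex
% Wage-bill share of the repetitive / low-digital-skill categories:
% roughly 33% today -> roughly 20% by 2030
\[
  \Delta_{\text{low}} = 33\% - 20\% = 13 \text{ percentage points}
\]
% ...which matches the ~13% of the total wage bill the report
% simulates shifting toward nonrepetitive, high-digital-skill categories.
```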
Mae Jemison, the first woman of color to go into space, stood in the center of the room and prepared to become digital. Around her, 106 cameras captured her image in 3-D, which would later render her as a life-sized hologram when viewed through a HoloLens headset.
Jemison was recording what would become the introduction for a new exhibit at the Intrepid Sea, Air, and Space Museum, which opens tomorrow as part of the Smithsonian’s annual Museum Day. In the exhibit, visitors will wear HoloLens headsets and watch Jemison materialize before their eyes, taking them on a tour of the Space Shuttle Enterprise—and through space history. They’re invited to explore artifacts both physical (like the Enterprise) and digital (like a galaxy of AR stars) while Jemison introduces women throughout history who have made important contributions to space exploration.
Interactive museum exhibits like this are becoming more common as augmented reality tech becomes cheaper, lighter, and easier to create.
Using either an Oculus Go standalone device or a mobile Gear VR headset, users will be able to log in to the Oculus Venues app and join other users for an immersive live stream of various developer keynotes and adrenaline-pumping esports competitions.
From DSC: What are the ramifications of this for the future of webinars, teaching and learning, online learning, MOOCs and more…?
Apple’s iOS 12 has finally landed. The big update appeared for everyone on Monday, Sept. 17, and hiding within are some pretty amazing augmented reality upgrades for iPhones, iPads, and iPod touches. We’ve been playing with them ever since the iOS 12 beta launched in June, and here are the things we learned that you’ll want to know about.
For now, here’s everything AR-related that Apple has included in iOS 12. There are some new features aimed to please AR fanatics as well as hook those new to AR into finally getting with the program. But all of the new AR features rely on ARKit 2.0, the latest version of Apple’s augmented reality framework for iOS.
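From DSC: For developers, the headline ARKit 2.0 feature is persistent, shareable world maps. Here's a minimal Swift sketch of saving and restoring a session's ARWorldMap; it assumes an ARSCNView named sceneView and is only a sketch, not production code:

```swift
import ARKit

// Save the current world map so AR anchors can persist across sessions.
func saveWorldMap(from sceneView: ARSCNView, to url: URL) {
    sceneView.session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else {
            print("World map unavailable: \(error?.localizedDescription ?? "unknown error")")
            return
        }
        // ARWorldMap supports NSSecureCoding, so it can be archived to disk.
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                        requiringSecureCoding: true) {
            try? data.write(to: url)
        }
    }
}

// Relaunch tracking from a saved map; previously placed anchors reappear in place.
func restoreWorldMap(into sceneView: ARSCNView, from url: URL) {
    guard let data = try? Data(contentsOf: url),
          let map = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                            from: data) else { return }
    let config = ARWorldTrackingConfiguration()
    config.initialWorldMap = map
    sceneView.session.run(config, options: [.resetTracking, .removeExistingAnchors])
}
```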
In a pilot program at Berkeley College, members of a Virtual Reality Faculty Interest Group tested the use of virtual reality to immerse students in a variety of learning experiences. During winter 2018, seven different instructors in nearly as many disciplines used inexpensive Google Cardboard headsets along with apps on smartphones to virtually place students in North Korea, a taxicab and other environments as part of their classwork.
Participants used free mobile applications such as Within, the New York Times VR, Discovery VR, Jaunt VR and YouTube VR. Their courses included critical writing, international business, business essentials, medical terminology, international banking, public speaking and crisis management.
Launched at the Online News Association conference in Austin, Texas, the Future Today Institute’s new industry report for the future of journalism, media and technology follows the same approach as our popular annual mega trends report, now in its 11th year with more than 7.5 million cumulative views.
Key findings:
Blockchain emerges as a significant driver of change in 2019 and beyond. The blockchain ecosystem is still maturing; however, we’ve now seen enough development, adoption and consolidation that it warrants its own full section. There are numerous opportunities for media and journalism organizations. For that reason, we’ve included an explainer, a list of companies to watch, and a cross-indexed list of trends to complement blockchain technology. We’ve also included detailed scenarios in this section.
Mixed Reality is entering the mainstream.
The mixed reality ecosystem has grown enough that we now see concrete opportunities on the horizon for media organizations. From immersive video to wearable technology, news and entertainment media organizations should begin mapping their strategy for new kinds of devices and platforms.
Artificial Intelligence is not a tech trend—it is the third era of computing. And it isn’t just for story generation. You will see the AI ecosystem represented in many of the trends in this report, and it is vitally important that all decision-makers and teams familiarize themselves with current and emerging AI trends.
In addition to the 108 trends identified, the report also includes several guides for journalists, including a Blockchain Primer, an AI Primer, a mixed reality explainer, hacker terms and lingo, and a guide to policy changes on the horizon.
The report also includes guidance on how everyone working within journalism and media can take action on tech trends and how to evaluate a trend’s impact on their local marketplaces.
Last year Amazon announced a new feature for its Amazon Web Services called Amazon Sumerian, a platform that would allow anyone to create full-featured virtual reality experiences. The Royal Melbourne Institute of Technology (RMIT) has now announced that it will be offering short courses in artificial intelligence (AI), augmented reality (AR) and VR thanks to a partnership with Amazon Web Services.
The new partnership between RMIT and Amazon Web Services was announced at the AWS Public Sector Summit in Canberra, Australia, on Wednesday. The new courses will use the Amazon Sumerian platform.
The newly launched courses include Developing AI Strategy, Developing AR and VR Strategy, and Developing AR and VR Applications. All of these have been adapted from the AWS Educate program, which was created in response to the changing nature of the workplace and the growing relevance of immersive technology.
“My team’s mission is to build a community of lifelong learners, successfully navigating the world of work … yes, sometimes your degree is the right solution education-wise for a person, but throughout our lives, certainly I know in my digital career, constantly we need to be updating our skills and understand the new, emerging technology and talk with experts.”
Actually, make that three new iPhones. Apple followed last year’s iPhone X with the iPhone Xs, iPhone Xs Max, and iPhone Xr. It also spent some time showing off the Apple Watch Series 4, its most powerful wearable yet. Missed the event? Catch our commentary on WIRED’s liveblog, or read on for everything you need to know about today’s big Apple event.
At a glance the three new iPhones unveiled next to Apple’s glassy circular headquarters Wednesday look much like last year’s iPhone X. Inside, the devices’ computational guts got an invisible but more significant upgrade.
Apple’s phones come with new chip technology with a focus on helping the devices understand the world around them using artificial intelligence algorithms. The company says the improvements allow the new devices to offer slicker camera effects and augmented reality experiences.
For the first time, non-Apple developers will be allowed to run their own algorithms on Apple’s AI-specific hardware.
The new Apple Watch Series 4, revealed by Apple earlier today, underscores that some of the watch’s most important features are its health and fitness-tracking functions. The new watch is one of the first over-the-counter devices in the US to offer electrocardiogram, or ECG, readings. On top of that, the Apple Watch has received FDA clearance—both for the ECG feature and another new feature that detects atrial fibrillation.
London College of Communication, UAL is launching a Master of Arts in VR degree for the 2018-19 academic year.
The demand for VR professionals is growing in the film and media industries, where these technologies are being used most frequently.
From DSC: Collaboration/videoconferencing tools like Webex, Blackboard Collaborate, Zoom, Google Hangouts, Skype, etc. are being used in a variety of scenarios. That category of platform is well established. It will be interesting to see how VR might play into web-based collaboration, training, and learning.
The first collaborative VR molecular modeling application was released August 29 to encourage hands-on chemistry experimentation.
The open-source tool is free for download now on Oculus and Steam.
Nanome Inc., the San Diego-based start-up that built the intuitive application, comprises UCSD professors and researchers, web developers and top-level pharmaceutical executives.
“With our tool, anyone can reach out and experience science at the nanoscale as if it is right in front of them. At Nanome, we are bringing the craftsmanship and natural intuition from interacting with these nanoscale structures at room scale to everyone,” McCloskey said.
From DSC: While VR will have its place — especially for times when you need to completely immerse yourself in another environment — I think AR and MR will be much larger and have a greater variety of applications. For example, instructions on how to put something together could one day use AR and/or MR to assist with the process. The system could highlight the next part that I’m looking for, then highlight where it goes — and, if requested, show me a clip on how it fits into what I’m trying to put together.
“…workers with mixed-reality solutions that enable remote assistance, spatial planning, environmentally contextual data, and much more,” Bardeen told me. With HoloLens, Firstline Workers conduct their usual day-to-day activities with the added benefit of a heads-up, hands-free display that gives them immediate access to valuable, contextual information. Microsoft says speech services like Cortana will be critical for control, along with gesture, according to the unique needs of each situation.
Expect new worker roles. What constitutes an “information worker” could change because mixed reality will allow everyone to be involved in the collection and use of information. Many more types of information will become available to any worker in a compelling, easy-to-understand way.
Immersive learning with VR experiences: Design learning scenarios that your learners can experience in Virtual Reality using VR headsets. Import 360° media assets and add hotspots, quizzes and other interactive elements to engage your learners with near real-life scenarios.
Interactive videos: Liven up demos and training videos by making them interactive with the new Adobe Captivate. Create your own or bring in existing YouTube videos, add questions at specific points and conduct knowledge checks to aid learner remediation.
Fluid Boxes 2.0: Explore the building blocks of Smart eLearning design with intelligent containers that use white space optimally. Objects placed in Fluid Boxes get aligned automatically, so learners always get a fully responsive experience regardless of their device or browser.
360° learning experiences: Augment the learning landscape with 360° images and videos and convert them into interactive eLearning material with customizable overlay items such as information blurbs, audio content & quizzes.
Blippar unveils indoor visual positioning system to anchor AR — from martechtoday.com by Barry Levine
Employing machine vision to recognize mapped objects, the company says it can determine which way a user is looking and can calculate positioning down to a centimeter.
To even scratch the surface of these questions, we need to better understand the audience’s experience in VR — not just their experience of the technology, but the way that they understand story and their role within it.
HoloLens technology is being paired with Microsoft’s Surface Hub, a kind of digital whiteboard. The idea is that the surgical team can gather around a Surface Hub to review patient information, discuss the details of a procedure, and select what information should be readily accessible during surgery. During the procedure, a surgeon wearing a HoloLens would be able to review a CT or MRI scan, access other data in the electronic medical records, and manipulate these to get a clear picture of what is being worked on and what needs to be done.
The VR solution allows emergency medical services (EMS) personnel to dive into a rich, detailed environment in which they can pinpoint portions of the body to dissect, see each part of the body in great detail, and view it from any angle. The goal is to let users gain the experience to diagnose injuries from a variety of vantage points, all while working within a virtual environment capable of displaying countless scenarios.
A team of Harvard University researchers recently achieved a major breakthrough in robotics, engineering a tiny spider robot using tech that could one day work inside your body to repair tissues or destroy tumors. Their work could not only change medicine–by eliminating invasive surgeries–but could also have an impact on everything from how industrial machines are maintained to how disaster victims are rescued.
Until now, most advanced, small-scale robots followed a certain model: They tend to be built at the centimeter scale and have only one degree of freedom, which means they can only perform one movement. Not so with this new ‘bot, developed by scientists at Harvard’s Wyss Institute for Biologically Inspired Engineering, the John A. Paulson School of Engineering and Applied Sciences, and Boston University. It’s built at the millimeter scale, and because it’s made of flexible materials–easily moved by pneumatic and hydraulic power–the critter has an unprecedented 18 degrees of freedom.
I liked that it gave a new perspective to the video clip I’d watched: It threw the actual game up on the wall alongside the kind of information a basketball fan would want, including 3-D renderings and stats. Today, you might turn to your phone for that information. With Magic Leap, you wouldn’t have to.
…
Abovitz also said that intelligent assistants will play a big role in Magic Leap’s future. I didn’t get to test one, but Abovitz says he’s working with a team in Los Angeles that’s developing high-definition people that will appear to Magic Leap users and assist with tasks. Think Siri, Alexa or Google Assistant, but instead of speaking to your phone, you’d be speaking to a realistic-looking human through Magic Leap. Or you might be speaking to an avatar of someone real.
“You might need a doctor who can come to you,” Abovitz said. “AI that appears in front of you can give you eye contact and empathy.”
And I loved the idea of being able to place a digital TV screen anywhere I wanted.
In December of last year, U.S. startup Magic Leap unveiled its long-awaited mixed reality headset, a secretive device five years and $2.44B USD in the making.
This morning that same headset, now referred to as the Magic Leap One Creator Edition, became available for purchase in the U.S. On sale to creators at a hefty starting price of $2,275, the spatial computing device utilizes synthetic lightfields to capture natural lightwaves and superimpose interactive, 3D content over the real world.
After spending about an hour with the headset, running through setup and poking around its UI and a couple of the launch-day apps, I thought it would be helpful to share a quick list of my first impressions as someone who’s spent a lot of time with a HoloLens over the past couple of years, and to start answering many of the burning questions I’ve had about the device.
UNIVERSITY PARK, Pa. — Penn State instructional designers are researching whether using virtual reality and 360-degree video can help students in online classes learn more effectively.
Designers worked with professors in the College of Nursing to incorporate 360-degree video into Nursing 352, a class on Advanced Health Assessment. Students in the class, offered online through Penn State World Campus, were offered free VR headsets to use with their smartphones to create a more immersive experience while watching the video, which shows safety and health hazards in a patient’s home.
Bill Egan, the lead designer for the Penn State World Campus RN to BSN nursing program, said students in the class were surveyed as part of a study approved by the Institutional Review Board and overwhelmingly said that they enjoyed the videos and thought they provided educational value. Eighty percent of the students said they would like to see more immersive content such as 360-degree videos in their online courses, he said.
In this post, we run through some practical stumbling blocks that prevent VR training from being feasible for most.
…
There are quite a number of practical considerations which prevent VR from totally overhauling the corporate training world. Some are obvious, whilst others only become apparent after using the technology a number of times. It’s important to be made aware of these limitations so that a large investment isn’t made in tech that isn’t really practical for corporate training.
Augmented reality – the next big thing for HR? — from hrdconnect.com
Augmented reality (AR) could have a huge impact on HR, transforming long-established processes into something engaging and exciting. What will this look like? How can we shape this into our everyday working lives?
Excerpt (emphasis DSC):
AR also has the potential to revolutionise our work lives, changing the way we think about office spaces and equipment forever.
Most of us still commute to an office every day, which can be a time-consuming and stressful experience. AR has the potential to turn any space into your own customisable workspace, complete with digital notes, folders and files – even a digital photo of your loved ones. This would give you access to all the information and tools that you would typically find in an office, but wherever and whenever you need them.
And instead of working on a flat, stationary, two-dimensional screen, your workspace would be a customisable three-dimensional space, where objects and information are manipulated with gestures rather than hardware. All you would need is an AR headset.
AR could also transform the way we advertise brands and share information. Imagine if your organisation had an AR stand at a conference – how engaging would that be for potential customers? How much more interesting and fun would meetings be if we used AR to present information instead of slides on a projector?
AR could transform the on-boarding experience into something fun and interactive – imagine taking an AR tour of your office, where information about key places, company history or your new colleagues pops into view as you go from place to place.
A new project aims to leverage augmented reality (AR) technology to help staff in airport control towers visualize information and make their jobs easier. The project, dubbed RETINA, is looking to modernise Europe’s air traffic management for safer, smarter and even smoother air travel.
What is on the five-year horizon for higher education institutions? Which trends and technology developments will drive educational change? What are the critical challenges and how can we strategize solutions? These questions regarding technology adoption and educational change steered the discussions of 71 experts to produce the NMC Horizon Report: 2018 Higher Education Edition brought to you by EDUCAUSE. This Horizon Report series charts the five-year impact of innovative practices and technologies for higher education across the globe. With more than 16 years of research and publications, the Horizon Project can be regarded as one of education’s longest-running explorations of emerging technology trends and uptake.
Six key trends, six significant challenges, and six developments in educational technology profiled in this higher education report are likely to impact teaching, learning, and creative inquiry in higher education. The three sections of this report constitute a reference and technology planning guide for educators, higher education leaders, administrators, policymakers, and technologists.
Overhyped by some, drastically underestimated by others, few emerging technologies have generated as much digital ink as virtual reality (VR), augmented reality (AR), and mixed reality (MR). Still lumbering through the novelty phase and roller-coaster-like hype cycles, the technologies are only just beginning to show signs of real-world usefulness, with a new generation of hardware and software applications aimed at the enterprise and at end users like you. On the line is what could grow to be a $108 billion AR/VR industry as soon as 2021. Here’s what you need to know.
The reason is that VR environments by nature demand a user’s full attention, which makes the technology poorly suited to real-life social interaction outside a digital world. AR, on the other hand, has the potential to act as an on-call co-pilot to everyday life, seamlessly integrating into daily real-world interactions. This will become increasingly true with the development of the AR Cloud.
The AR Cloud
Described by some as the world’s digital twin, the AR Cloud is essentially a digital copy of the real world that can be accessed by any user at any time.
For example, it won’t be long before whatever device I have on me at a given time (a smartphone or wearable, for example) will be equipped to tell me all I need to know about a building just by training a camera at it (GPS is operating as a poor-man’s AR Cloud at the moment).
What the internet is for textual information, the AR Cloud will be for the visible world. Whether it will be open source or controlled by a company like Google is a hotly contested issue.
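From DSC: To make the AR Cloud idea concrete, here's a purely hypothetical Swift sketch; every name in it (SpatialAnchor, ARCloudClient, nearbyAnchors) is invented for illustration, since no such public API exists today:

```swift
import CoreLocation

// A piece of digital content pinned to a real-world location.
struct SpatialAnchor {
    let id: UUID
    let coordinate: CLLocationCoordinate2D
    let payload: String   // e.g. a note, or a reference to a 3D asset
}

// Toy in-memory stand-in for a shared, persistent AR Cloud service.
final class ARCloudClient {
    private var anchors: [SpatialAnchor] = []

    func publish(_ anchor: SpatialAnchor) {
        anchors.append(anchor)   // a real service would persist this server-side
    }

    // The core query a "digital twin of the world" must answer:
    // what content is anchored near where this user is standing?
    func nearbyAnchors(around location: CLLocation, radiusMeters: Double) -> [SpatialAnchor] {
        anchors.filter { anchor in
            let anchorLocation = CLLocation(latitude: anchor.coordinate.latitude,
                                            longitude: anchor.coordinate.longitude)
            return anchorLocation.distance(from: location) <= radiusMeters
        }
    }
}
```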
Augmented reality will have a bigger impact on the market and our daily lives than virtual reality — and by a long shot. That’s the consensus of just about every informed commentator on the subject.
Despite all the hype in recent years about the potential for virtual reality in education, an emerging technology known as mixed reality has far greater promise in and beyond the classroom.
Unlike experiences in virtual reality, mixed reality interacts with the real world that surrounds us. Digital objects become part of the real world. They’re not just digital overlays, but interact with us and the surrounding environment.
If all that sounds like science fiction, a much-hyped device promises some of those features later this year. The device is by a company called Magic Leap, and it uses a pair of goggles to project what the company calls a “lightfield” in front of the user’s face to make it look like digital elements are part of the real world. The expectation is that Magic Leap will bring digital objects to life in a much more vivid, dynamic and fluid way than other mixed-reality devices such as Microsoft’s HoloLens.
Now think about all the other things you wished you had learned this way and imagine a dynamic digital display that transforms your environment and even your living room or classroom into an immersive learning lab. It is learning within a highly dynamic and visual context infused with spatial audio cues reacting to your gaze, gestures, gait, voice and even your heartbeat, all referenced with your geo-location in the world. Unlike what happens with VR, where our brain is tricked into believing the world and the objects in it are real, MR recognizes and builds a map of your actual environment.
Also see:
Exploring the Potential for the Vive Focus in Education — from virtualiteach.com
On the big screen it’s become commonplace to see a 3D rendering or holographic projection of an industrial floor plan or a mechanical schematic. Casual viewers might take for granted that the technology is science fiction and many years away from reality. But today we’re going to outline where these sophisticated virtual replicas – Digital Twins – are found in the real world, here and now. Essentially, we’re talking about a responsive simulated duplicate of a physical object or system. When we first wrote about Digital Twin technology, we mainly covered industrial applications and urban infrastructure like transit and sewers. However, the full scope of their presence is much broader, so now we’re going to break it up into categories.
Digital twin refers to a digital replica of physical assets (physical twin), processes and systems that can be used for various purposes.[1] The digital representation provides both the elements and the dynamics of how an Internet of Things device operates and lives throughout its life cycle.[2]
Digital twins integrate artificial intelligence, machine learning and software analytics with data to create living digital simulation models that update and change as their physical counterparts change. A digital twin continuously learns and updates itself from multiple sources to represent its near real-time status, working condition or position. This learning system learns from itself, using sensor data that conveys various aspects of its operating condition; from human experts, such as engineers with deep and relevant industry domain knowledge; from other similar machines; from other similar fleets of machines; and from the larger systems and environment of which it may be a part. A digital twin also integrates historical data from past machine usage to factor into its digital model.
In various industrial sectors, twins are being used to optimize the operation and maintenance of physical assets, systems and manufacturing processes.[3] They are a formative technology for the Industrial Internet of Things, where physical objects can live and interact with other machines and people virtually.[4]
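From DSC: A digital twin is easier to grasp in code. Below is a minimal, illustrative Swift sketch of a twin that updates its state from streamed sensor readings; the types and the wear heuristic are invented for the example, standing in for the ML models and domain knowledge the excerpt describes:

```swift
import Foundation

// One telemetry sample from the physical asset.
struct SensorReading {
    let timestamp: Date
    let temperatureC: Double
    let vibrationHz: Double
}

// A virtual model of a physical pump that mirrors its near real-time condition.
final class PumpDigitalTwin {
    private(set) var history: [SensorReading] = []
    private(set) var estimatedWearLevel = 0.0   // 0 = new, 1 = end of life

    // Ingest telemetry and update the twin's internal state.
    func ingest(_ reading: SensorReading) {
        history.append(reading)
        // Toy rule: wear accumulates faster when the pump runs hot and
        // vibrates hard. A production twin would combine ML models, expert
        // knowledge, and fleet-wide data, as described in the excerpt above.
        let stress = max(0, reading.temperatureC - 60) / 100
                   + max(0, reading.vibrationHz - 50) / 500
        estimatedWearLevel = min(1.0, estimatedWearLevel + stress * 0.001)
    }

    var maintenanceRecommended: Bool { estimatedWearLevel > 0.8 }
}
```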
Walt Disney Animation Studios is set to debut its first VR short film, Cycles, this August in Vancouver, the Association for Computing Machinery announced today. The plan is for it to be a headliner at the ACM’s computer graphics conference (SIGGRAPH), joining other forms of VR, AR and MR entertainment in the conference’s designated Immersive Pavilion.
This film is a first for both Disney and its director, Jeff Gipson, who joined the animation team in 2013 to work as a lighting artist on films like Frozen, Zootopia and Moana. The objective of this film, Gipson said in the statement released by ACM, is to inspire a deep emotional connection with the story.
“We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story,” said Gipson.
Guiding faculty into immersive environments — from campustechnology.com by David Raths
What’s the best way to get faculty to engage with emerging technologies and incorporate new learning spaces into their teaching? Five institutions share their experiences.
Excerpt:
One of the biggest hurdles for universities has been the high cost of VR-enabled computers and headsets, and some executives say prices must continue to drop before we’ll see more widespread usage. But John Bowditch, director of the Game Research and Immersive Design Lab at Ohio University’s Scripps College of Communication, is already seeing promising developments on that front as he prepares to open a new 20-seat VR classroom. “Probably the best thing about VR in 2018 is that it is a lot more affordable now and that democratizes it,” he said. “We purchased a VR helmet 13 years ago, and it was $12,000 just for the headset. The machine that ran it cost about $20,000. That would be a nonstarter beyond purchasing just one or two. Today, you can get a VR-enabled laptop and headset for under $2,000. That makes it much easier to think about integrating it into classes.”
Colleges and universities face several hurdles in getting faculty to incorporate virtual reality or immersive experiences in their courses. For one, instructional designers, instructional technologists and directors of teaching and learning centers may not have access to these tools yet, and the budgets aren’t always there to get the labs off the ground, noted Daniel Christian, instructional services director at Western Michigan University’s Cooley Law School. “Many faculty members’ job plates are already jam-packed — allowing little time to even look at emerging technologies,” he said. “Even if they wanted to experiment with such technologies and potential learning experiences, they don’t have the time to do so. Tight budgets are impacting this situation even further.”
Computing in the Camera — from blog.torch3d.com by Paul Reynolds
Mobile AR, with its ubiquitous camera, is set to transform what and how human experience designers create.
One of the points Allison [Woods, CEO, Camera IQ] made repeatedly on that call (and in this wonderful blog post of the same time period) was that the camera is going to be at the center of computing going forward, an indispensable element. Spatial computing could not exist without it. Simple, obvious, straightforward, but not earth shaking. We all heard what she had to say, but I don’t think any of us really understood just how profound or prophetic that statement turned out to be.
“[T]he camera will bring the internet and the real world into a single time and space.”
— Allison Woods, CEO, Camera IQ
The Camera As Platform — from shift.newco.co by Allison Wood
When the operating system moves to the viewfinder, the world will literally change.
“Every day two billion people carry around an optical data input device — the smartphone Camera — connected to supercomputers and informed by massive amounts of data that can have nearly limitless context, position, recognition and direction to accomplish tasks.”
The bigger story, however, is how fast the enterprise segment is growing as applications as straightforward as schematics on a head-mounted monocular microdisplay are transforming manufacturing, assembly, and warehousing. Use cases abounded.
…
After traveling the country and most recently to Europe, I’ve now experienced almost every major VR/AR/MR/XR related conference out there. AWE’s exhibit area was by far the largest display of VR and AR companies to date (with the exception of CES).
Specifically, we explored the potential for how virtual reality can help create a more empathetic nurse, which, we hypothesize, will lead to increased development of nursing students’ knowledge, skills, and attitudes. We aim to integrate these virtual experiences into early program coursework, with the intent of changing nursing behavior by providing a deeper understanding of the patient’s perspective during clinical interactions.
…
In addition to these compelling student reflections and the nearly immediate change in reporting practice, survey findings show that students unanimously felt that this type of patient-perspective VR experience should be integrated and become a staple of the nursing curriculum. Seeing, hearing, and feeling these moments results in significant and memorable learning experiences compared to traditional classroom learning alone. The potential that this type of immersive experience can have in the field of nursing and beyond is only limited by the imagination and creation of other virtual experiences to explore. We look forward to continued exploration of the impact of VR on student learning and to establishing ongoing partnerships with developers.
Lehi, UT, May 29, 2018 (GLOBE NEWSWIRE) — Today, fast-growing augmented reality startup Seek is launching Seek Studio, the world’s first mobile augmented reality studio, allowing anybody with a phone, and no coding expertise, to create their own AR experiences and publish them for the world to see. With mobile AR now more readily available, average consumers are beginning to discover the magic that AR can bring to the palm of their hand, and Seek Studio turns everyone into a creator.
To make the process incredibly easy, Seek provides templates for users to create their first AR experiences. As an example, a user can select a photo on their phone, outline the portion of the image they want turned into a 3D object and then publish it to Seek. They will then be able to share it with their friends through popular social networks or text. A brand could additionally upload a 3D model of their product and publish it to Seek, providing an experience for their customers to easily view that content in their own home. Seek Studio will launch with 6 templates and will release new ones every few days over the coming months to constantly improve the complexity and types of experiences possible to create within the platform.
Apple unveiled its new augmented reality file format, as well as ARKit 2.0, at its annual WWDC developer conference today. Both will be available to users later this year with iOS 12.
The tech company partnered with Pixar to develop the AR file format Universal Scene Description (USDZ) to streamline the process of sharing and accessing augmented reality files. USDZ will be compatible with tools like Adobe, Autodesk, Sketchfab, PTC, and Quixel. Adobe CTO Abhay Parasnis spoke briefly on stage about how the file format will have native Adobe Creative Cloud support, and described it as the first time “you’ll be able to have what you see is what you get (WYSIWYG) editing” for AR objects.
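From DSC: On iOS 12, any app can present a .usdz model in AR Quick Look with a few lines of Swift. Here's a minimal sketch; the asset name "toy_robot.usdz" is a placeholder:

```swift
import UIKit
import QuickLook

// Present a bundled .usdz file with the system AR Quick Look viewer.
final class ModelPreviewer: NSObject, QLPreviewControllerDataSource {
    private let modelURL = Bundle.main.url(forResource: "toy_robot",
                                           withExtension: "usdz")!

    func present(from viewController: UIViewController) {
        let preview = QLPreviewController()
        preview.dataSource = self
        viewController.present(preview, animated: true)
    }

    func numberOfPreviewItems(in controller: QLPreviewController) -> Int { 1 }

    func previewController(_ controller: QLPreviewController,
                           previewItemAt index: Int) -> QLPreviewItem {
        // NSURL conforms to QLPreviewItem, so the file URL can be returned directly.
        modelURL as QLPreviewItem
    }
}
```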
With a starting focus on University-level education and vocational schools in sectors such as mechanical engineering, VivEdu branched out to K-12 education in 2018, boasting a comprehensive VR approach to learning science, technology, engineering, mathematics, and art for kids.
That roadmap, of course, is just beginning. Which is where the developers—and those arm’s-length iPads—come in. “They’re pushing AR onto phones to make sure they’re a winner when the headsets come around,” Miesnieks says of Apple. “You can’t wait for headsets and then quickly do 10 years’ worth of R&D on the software.”
To fully realize the potential will require a broad ecosystem. Adobe is partnering with technology leaders to standardize interaction models and file formats in the rapidly growing AR ecosystem. We’re also working with leading platform vendors, open standards efforts like usdz and glTF as well as media companies and the creative community to deliver a comprehensive AR offering. usdz is now supported by Apple, Adobe, Pixar and many others while glTF is supported by Google, Facebook, Microsoft, Adobe and other industry leaders.
There are a number of professionals who would find the ability to quickly and easily create floor plans to be extremely useful. Estate agents, interior designers and event organisers would all no doubt find such a capability to be extremely valuable. For those users, the new feature added to iStaging’s VR Maker app might be of considerable interest.
The new VR Maker feature utilises Apple’s ARKit toolset to recognise spaces such as walls and floors, and can provide accurate measurements. By scanning each wall of a space, a floor plan can be produced quickly and easily.
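From DSC: Under the hood this rests on ARKit's plane detection. Here's a generic Swift sketch of detecting walls and floors and reading their measured extents; this is ordinary ARKit usage, not iStaging's actual code:

```swift
import ARKit

// Detect horizontal and vertical planes and log their measured sizes.
final class PlaneScanner: NSObject, ARSessionDelegate {
    private let session = ARSession()

    func start() {
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal, .vertical]   // floors and walls
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors {
            // extent is in meters: the width and length of the detected surface.
            print("Plane \(plane.identifier): \(plane.extent.x) m x \(plane.extent.z) m")
        }
    }
}
```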
I’ve interviewed nine investors who have provided their insights on where the VR industry has come, as well as the risks and opportunities that exist in 2018 and beyond. We’ve asked them what opportunities are available in the space — and what tips they have for startups.
Augmented reality (AR) hasn’t truly permeated the mainstream consciousness yet, but the technology is swiftly being adopted by global industries. It’ll soon be unsurprising to find a pair of AR glasses strapped to a helmet sitting on the heads of service workers, and RealWear, a company at the forefront of developing these headsets, thinks it’s on the edge of something big.
…
VOICE ACTIVATION
What’s most impressive about the RealWear HMT-1Z1 is how you control it. There are no touch-sensitive gestures you need to learn — it’s all managed with voice, and better yet, there’s no need for a hotword like “Hey Google.” The headset listens for certain commands. For example, from the home screen just say “show my files” to see files downloaded to the device, and you can go back to the home screen by saying “navigate home.” When you’re looking at documents — like schematics — you can say “zoom in” or “zoom out” to change focus. It worked almost flawlessly, even in a noisy environment like the AWE show floor.
David Scowsill’s experience in the aviation industry spans over 30 years. He has worked for British Airways, American Airlines, easyJet, Manchester Airport, and most recently the World Travel and Tourism Council, giving him a unique perspective on how Augmented and Virtual Reality (AVR) can impact the aviation industry.
These technologies have the power to transform the entire aviation industry, providing benefits to companies and consumers. From check-in, baggage drop, ramp operations and maintenance, to pilots and flight attendants, AVR can accelerate training, improve safety, and increase efficiency.
London-based design studio Marshmallow Laser Feast is using VR to let us reconnect with nature. With headsets, you can see a forest through the eyes of different animals and experience the sensations they feel. Creative Director Ersinhan Ersin took the stage at TNW Conference last week to show us how and why they created the project, titled In the Eyes of the Animal.
Have you already taken a side when it comes to XR wearables? Whether you prefer AR glasses or VR headsets likely depends on the application you need. But wouldn’t it be great to have a device that could perform as both? As XR tech advances, we think crossovers will start popping up around the world.
A Beijing startup called AntVR recently rocketed past its Kickstarter goal for an AR/VR visor. Their product, the Mix, uses tinted lenses to toggle between real world overlay and full immersion. It’s an exciting prospect. But rather than digging into the tech (or the controversy surrounding their name, their marketing, and a certain Marvel character) we’re looking at what this means for how XR devices are developed and sold.
Google is bringing AR tech to its Expeditions app with a new update going live today. Last year, the company introduced its Google Expeditions AR Pioneer Program, which brought the app into classrooms across the country; with this launch the functionality is available to all.
Expeditions will have more than 100 AR tours in addition to the 800 VR tours already available. Examples include experiences that let users explore Leonardo da Vinci’s inventions and interact with the human skeletal system.
At four recent VR conferences and events there was a palpable sense that despite new home VR devices getting the majority of marketing and media attention this year, the immediate promise and momentum is in the location-based VR (LBVR) attractions industry. The VR Arcade Conference (April 29th and 30th), VRLA (May 4th and 5th), the Digital Entertainment Group’s May meeting (May 1), and FoIL (Future of Immersive Leisure, May 16th and 17th) all highlighted a topic that suddenly no one can stop talking about: LBVR. With hungry landlords giving great deals for empty retail locations, VRcades, which are inexpensive to open (like Internet cafes), are popping up all over the country. As a result, VRcade royalties for developers are on the rise, so developers are shifting their attention to shorter experiences optimized for LBVR, which is much less expensive than building a VR app for the home.
Virtual reality (VR) has often been called “the empathy machine”; however, it has many use cases. When it comes to memory and retention, it looks like VR is useful not only for simulation but for education as well. Immersive VR Education teamed up with HTC Vive and Windsor Forest Colleges Group to create a memorable experience of virtual teaching.
On April 25th, ten students from Windsor Forest Colleges Group in the UK put on HTC Vive headsets and were guided by David Whelan, CEO and founder of Immersive VR Education, and Mike Armstrong, senior/lead developer at Immersive VR Education, through the free VR social education and presentation platform ENGAGE. ENGAGE allows users to hold meetings, classes, private lessons and presentations; users can record and create their own lessons and presentations, as well as interact with virtual objects.
The $200 Oculus Go is the most accessible VR headset today.
Up until now, one of the biggest barriers to entry for VR has been price. Headset adoption has taken a conservative growth path, mostly due to the high prices of PC-required systems or the requirement that consumers own a specific line of VR-compatible phones to pair with mobile headsets.
But now the Oculus Go is finally here and it’s a big deal, especially for the millions of iPhone users out there who up until today have had limited options to get into VR.
Starting today, the Oculus Go standalone VR headset is available for purchase for $199. Available for sale on Oculus.com in 23 countries, you can also pick up one online from Amazon or in Best Buy Stores in the U.S. The Oculus companion app used for initial setup is available for both iPhone or Android devices.
“You can see what you can’t imagine,” said Aaron Herridge, a graduate student in Creighton’s medical physics master’s program and a RaD Lab intern who is helping develop the lab’s virtual reality program. “It’s an otherworldly experience,” Herridge said. “But that’s the great plus of virtual reality. It can take you places that you couldn’t possibly go in real life. And in physics, we always say that if you can’t visualize it, you can’t do the math. It’s going to be a huge educational leap.”
“We’re always looking for ways to help students get the real feeling for astronomy,” Gabel said. “Visualizing space from another planet, like Mars, or from Earth’s moon, is a unique experience that goes beyond pencil and paper or a two-dimensional photograph in a textbook.”
BAE created a guided step-by-step training solution for HoloLens to teach workers how to assemble a green energy bus battery.
From DSC: How long before items that need some assembling come with such experiences/training-related resources?
VR and AR: The Ethical Challenges Ahead — from er.educause.edu by Emory Craig and Maya Georgieva
Immersive technologies will raise new ethical challenges, from issues of access, privacy, consent, and harassment to future scenarios we are only now beginning to imagine.
Excerpt:
As immersive technologies become ever more realistic with graphics, haptic feedback, and social interactions that closely align with our natural experience, we foresee the ethical debates intensifying. What happens when the boundaries between the virtual and physical world are blurred? Will VR be a tool for escapism, violence, and propaganda? Or will it be used for social good, to foster empathy, and as a powerful new medium for learning?
Augmented reality might not be able to cure cancer (yet), but when combined with a machine learning algorithm, it can help doctors diagnose the disease. Researchers at Google have developed an augmented reality microscope (ARM) that takes real-time data from a neural network trained to detect cancerous cells and displays it in the field of view of the pathologist viewing the images.
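From DSC: Google hasn't published the ARM's code, but the detect-then-overlay loop can be sketched with Apple's Vision framework as a rough analog; the TumorClassifier Core ML model named here is hypothetical:

```swift
import Vision
import CoreML

// Run a (hypothetical) detection model over a microscope frame and hand back
// the regions an AR layer could outline in the pathologist's field of view.
func detectSuspectRegions(in frame: CGImage,
                          completion: @escaping ([VNRecognizedObjectObservation]) -> Void) {
    // TumorClassifier is a placeholder for a Core ML object-detection model.
    guard let model = try? VNCoreMLModel(for: TumorClassifier().model) else { return }

    let request = VNCoreMLRequest(model: model) { request, _ in
        let regions = request.results as? [VNRecognizedObjectObservation] ?? []
        completion(regions)   // each observation carries a bounding box + confidence
    }
    let handler = VNImageRequestHandler(cgImage: frame, options: [:])
    try? handler.perform([request])
}
```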