The team at Google Spotlight Stories made history on Wednesday, as its short film Pearl became the first virtual reality project to be nominated for an Academy Award. But instead of serving as a capstone, the Oscar nod is just a nice moment at the beginning of the Spotlight team’s plan for the future of storytelling in the digital age.
…
Google Spotlight Stories are not exactly short films. Rather, they are interactive experiences created by the technical pioneers at Google’s Advanced Technologies and Projects (ATAP) division, and they defy expectations and conventions. Film production has in many ways been perfected, but for each Spotlight Story, the technical staff at Google uncovers new challenges to telling stories in a medium that blends together film, mobile phones, games, and virtual reality. Needless to say, it’s been an interesting road.
“The result is a really strong sense of presence,” said David Cole, who helped found NextVR as a 3D company in 2009. “A vivid sense.”
“In some ways, we could still be at a point in time where a lot of people don’t yet know that they want this in VR,” said David Cramer, NextVR’s chief operating officer. “The thing that we’ve seen is that when people do see it, it just blows away their expectations.”
From DSC: Hmm…the above piece from The Mercury News on #VR speaks of presence. A vivid sense of presence.
If they can do this with an NBA game, why can’t we do this with remote learners & bring them into face-to-face classrooms? How might VR be used in online learning and distance education? It could be an interesting new revenue stream for colleges and universities…and help serve more people who want to learn but might not be able to move to certain locations and/or attend face-to-face classrooms. Applications could exist within the corporate training/L&D world as well.
From DSC: Note this new type of Human Computer Interaction (HCI). I think that we’ll likely be seeing much more of this sort of thing.
Excerpt (emphasis DSC):
How is Hayo different?
AR that connects the magical and the functional:
Unlike most AR integrations, Hayo removes the screens from smarthome use and transforms the objects and spaces around you into a set of virtual remote controls. Hayo empowers you to create experiences that have previously been limited by the technology, but now are only limited by your imagination.
Screenless IoT:
The best interface is no interface at all. Aside from the one-time setup, Hayo does not use any screens. Your real-life surfaces become the interface and you, the user, become the controls. Virtual remote controls can be placed wherever you want for whatever you need by simply using your Hayo device to take a 3D scan of your space.
Smarter AR experience:
Hayo anticipates your unique context, passive motion and gestures to create useful and more unique controls for the connected home. The Hayo system learns your behaviors and uses its AI to help meet your needs.
Apple’s (AAPL) upcoming iPhone 8 smartphone will include a 3D-sensing module to enable augmented-reality applications, Rosenblatt Securities analyst Jun Zhang said Wednesday. Apple has included the 3D-sensing module in all three current prototypes of the iPhone 8, which have screen sizes of 4.7, 5.1 and 5.5 inches, he said. “We believe Apple’s 3D sensing might provide a better user experience with more applications,” Zhang said in a research report. “So far, we think 3D sensing aims to provide an improved smartphone experience with a VR/AR environment.”
Apple’s iPhone 8 is expected to have 3D-sensing tech like Lenovo’s Phab 2 Pro smartphone. (Lenovo)
As we look forward to 2017 then, we’ve reached out to a bunch of industry experts and insiders to get their views on where we’re headed over the next 12 months.
2016 provided hints of where Facebook, HTC, Sony, Google, and more will take their headsets in the near future, but where do the industry’s best and brightest think we’ll end up this time next year? With CES, the year’s first major event, now in the books, let’s hear from some of those who work with VR itself about what happens next.
We asked all of these developers the same four questions:
1) What do you think will happen to the VR/AR market in 2017?
2) What NEEDS to happen to the VR/AR market in 2017?
3) What will be the big breakthroughs and innovations of 2017?
4) Will 2017 finally be the “year of VR?”
The MEL app turned my iPhone 6 into a virtual microscope, letting me walk through 360 degree, 3-D representations of the molecules featured in the experiment kits.
Labster is exploring new platforms by which students can access its laboratory simulations and is pleased to announce the release of its first Google Daydream-compatible virtual reality (VR) simulation, ‘Labster: World of Science’. This new simulation, modeled on Labster’s original ‘Lab Safety’ virtual lab, continues to incorporate scientific learning alongside a specific context, enriched by storytelling elements. The use of the Google VR platform has enabled Labster to fully immerse the student, or science enthusiast, in a wet lab that can easily be navigated with intuitive use of Daydream’s handheld controller.
Jessica Brillhart, Google’s principal VR filmmaker, has taken to calling people “visitors” rather than “viewers,” as a way of reminding herself that in VR, people aren’t watching what you’ve created. They’re living it. Which changes things.
In November, we launched Daydream with the goal of bringing high quality, mobile VR to everyone. With the Daydream View headset and controller, and a Daydream-ready phone like the Pixel or Moto Z, you can explore new worlds, kick back in your personal VR cinema and play games that put you in the center of the action.
Daydream-ready phones are built for VR with high-resolution displays, ultra smooth graphics, and high-fidelity sensors for precise head tracking. To give you even more choices to enjoy Daydream, today we’re welcoming new devices that will soon join the Daydream-ready family.
Kessler Foundation, one of the largest public charities in the United States, is awarding a grant for a virtual reality training project to support high school students with disabilities. The foundation is providing a two-year, $485,000 Signature Employment Grant to the University of Michigan in Ann Arbor to launch the Virtual Reality Job Interview Training program. Kessler Foundation says the VR program will allow for highly personalized role-play, with precise feedback and coaching that may be repeated as often as desired without fear or embarrassment.
Deep-water safety training goes virtual — from shell.com by Soh Chin Ong How a visit to a shopping centre led to the use of virtual reality safety training for a new oil production project, Malikai, in the deep waters off Sabah in Malaysia.
ISNS students embrace learning in a world of virtual reality
Excerpt (emphasis DSC):
To give students the skills needed to thrive in an ever more tech-centred world, the International School of Nanshan Shenzhen (ISNS) is one of the world’s first educational facilities now making instruction in virtual reality (VR) and related tools a key part of the curriculum.
Building on a successful pilot programme last summer in Virtual Reality, 3D art and animation, the intention is to let students in various age groups experiment with the latest emerging technologies, while at the same time unleashing their creativity, curiosity and passion for learning.
To this end, the school has set up a special VR innovation lab, conceived as a space for exploration, design and interdisciplinary collaboration involving a number of different subject teachers.
Using relevant software and materials, students learn to create high-quality digital content and to design “experiences” for VR platforms. In this “VR Lab makerspace” – a place offering the necessary tools, resources and support – they get to apply concepts and theories learned in the classroom, develop practical skills, document their progress, and share what they have learned with classmates and other members of the tech education community.
As a next logical step, she is also looking to develop contacts with a number of the commercial makerspaces which have sprung up in Shenzhen. The hope is that students will then be able to meet engineers working on cutting-edge innovations and understand the latest developments in software, manufacturing, and areas such as laser cutting, 3D printing, and rapid prototyping.
AREA: How would you describe the opportunity for Augmented Reality in 2017? SAM MURLEY: I think it’s huge — almost unprecedented — and I believe the tipping point will happen sometime this year. This tipping point has been primed over the past 12 to 18 months with large investments in new startups, successful pilots in the enterprise, and increasing business opportunities for providers and integrators of Augmented Reality. During this time, we have witnessed examples of proven implementations – small-scale pilots, larger-scale pilots, and companies rolling out AR in production — and we should expect this to continue to increase in 2017. You can also expect to see continued growth of assisted reality devices, scalable for industrial use cases in manufacturing and services industries, as well as new adoption of mixed reality and augmented reality devices, spatially aware and consumer-focused, for automotive, consumer, retail, gaming, and education use cases. We’ll see new software providers emerge, existing companies take the lead, and key improvements in smart eyewear optics and usability; a few strategic partnerships will probably form as well.
AREA: Do you have visibility into all the different AR pilots or programs that are going on at GE? SAM MURLEY:
…
At the 2016 GE Minds + Machines conference, our Vice President of GE Software Research, Colin Parris, showed off how the Microsoft HoloLens could help the company “talk” to machines and service malfunctioning equipment. It was a perfect example of how Augmented Reality will change the future of work, giving our customers the ability to talk directly to a Digital Twin — a virtual model of that physical asset — and ask it questions about recent performance, anomalies, potential issues and receive answers back using natural language. We will see Digital Twins of many assets, from jet engines to compressors. Digital Twins are powerful – they allow tweaking and changing aspects of your asset in order to see how it will perform, prior to deploying in the field. GE’s Predix, the operating system for the industrial Internet, makes this cutting-edge methodology possible. “What you saw was an example of the human mind working with the mind of a machine,” said Parris. With Augmented Reality, we are able to empower the workforce with tools that increase productivity, reduce downtime, and tap into the Digital Thread and Predix. With Artificial Intelligence and Machine Learning, Augmented Reality quickly allows language to be the next interface between the Connected Workforce and the Internet of Things (IoT). No keyboard or screen needed.
From DSC: I also believe that the tipping point will happen sometime this year. I hadn’t heard of the concept of a Digital Twin — but I sense that we’ll be hearing that more often in the future.
From DSC:
I then saw the concept of the “Digital Twin” again out at:
Breaking through the screen — from medium.com by Evan Helda Excerpt (emphasis DSC):
Within the world of the enterprise, this concept of a simultaneous existence of “things” virtually and physically has been around for a while. It is known as the “digital twin”, or sometimes referred to as the “digital tapestry” (will cover this topic in a later post). Well, thanks to the internet and ubiquity of sensors today, almost every “thing” now has a “digital twin”, if you will. These “things” will embody this co-existence, existing in a sense virtually and physically, and all connected in a myriad of ways. The outcome at maturity is something we’ve yet to fully comprehend.
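To make the “digital twin” idea concrete, here is a minimal toy sketch in Python (all names and numbers are hypothetical, not GE’s Predix API): a virtual model that mirrors a physical asset’s sensor readings and can be asked about recent anomalies, much like the compressor example above.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class DigitalTwin:
    """Toy virtual model mirroring one physical asset (illustrative only)."""
    asset_id: str
    readings: list = field(default_factory=list)  # e.g., temperature samples

    def ingest(self, value: float) -> None:
        """Record a new sensor reading streamed from the physical asset."""
        self.readings.append(value)

    def recent_anomalies(self, limit: float) -> list:
        """Ask the twin which readings exceeded a safe threshold."""
        return [r for r in self.readings if r > limit]

# Simulate telemetry from a compressor and query its twin.
twin = DigitalTwin(asset_id="compressor-07")
for sample in [71.2, 70.8, 88.5, 71.0]:
    twin.ingest(sample)

print(twin.recent_anomalies(limit=80.0))  # [88.5]
print(mean(twin.readings))
```

A production digital twin would of course sit behind a platform like Predix and answer natural-language questions; the point here is only the core pattern of a virtual object kept in sync with, and queryable about, its physical counterpart.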
From DSC: The following article reminded me of a vision that I’ve had for the last few years…
How to Build a Production Studio for Online Courses — from campustechnology.com by Dian Schaffhauser At the College of Business at the University of Illinois, video operations don’t come in one size. Here’s how the institution is handling studio setup for MOOCs, online courses, guest speakers and more.
Though I’m a huge fan of online learning, why build a production studio meant to support online courses only? Let’s take it a step further and design a space that can address content development for online learning as well as for blended learning — which can include the flipped classroom type of approach.
To do so, colleges and universities need to build something akin to what the National University of Singapore has done. I would like to see institutions create large enough facilities in order to house multiple types of recording studios in each one of them. Each facility would feature:
One room that has a lightboard and a mobile whiteboard in it — let the faculty member choose which surface that they want to use
Another room that has a Microsoft Surface Hub or a similar interactive, multitouch device
A recording booth with a nice, powerful, large iMac that has ScreenFlow on it. The booth would also include a nice, professional microphone, a pop filter, sound absorbing acoustical panels, and more. Blackboard Collaborate could be used here as well…especially with the Application Sharing feature turned on and/or just showing one’s PowerPoint slides — with or without the video of the faculty member…whatever they prefer.
Another recording booth with a PC and Adobe Captivate, Camtasia Studio, Screencast-O-Matic, or similar tools — equipped the same way, with a professional microphone, a pop filter, sound-absorbing acoustical panels, and Blackboard Collaborate available as above
Another recording booth with an iPad tablet and apps loaded on it such as Explain Everything:
A large recording studio that is similar to what’s described in the article — a room that incorporates a full-width green screen, with video monitors, a tablet, a podium, several cameras, high-end mics and more. Or, if the budget allows for it, a really high-end broadcasting/recording studio like what Harvard Business School is using:
The Lounge, enabled by Samsung on November 8, 2016 in Sydney, Australia. (Photo by Anna Kucera)
Also see:
The Lounge enabled by Samsung Open day and night, The Lounge enabled by Samsung is a new place in the heart of the Opera House where people can sit and enjoy art and culture through the latest technology. The most recent in a series of future-facing projects enabled by Sydney Opera House’s Principal Partner, Samsung, the new visitor lounge features stylish, comfortable seating, as well as interactive displays and exclusive digital content, including:
The Sails – a virtual-reality experience of what it’s like to stand atop the sails of Australia’s most famous building, brought to you via Samsung Gear VR;
Digital artwork – a specially commissioned video exploration of the Opera House and its stories, produced by creative director Sam Doust. The artwork has been themed to match the time of day and is the first deployment of Samsung’s latest Smart LED Display panel technology in Australia; and
Google Cultural Institute – available to view on Samsung Galaxy View and Galaxy Tab S2 tablets, the digital collection features 50 online exhibits that tell the story of the Opera House’s past, present and future through rare archival photography, celebrated performances, early architectural drawings and other historical documents, little-known interviews and Street View imagery.
CES 2017: Intel’s VR visions — from jwtintelligence.com by Shepherd Laughlin The company showed off advances in volumetric capture, VR live streaming, and “merged reality.”
Excerpt (emphasis DSC):
Live-streaming 360-degree video was another area of focus for Intel. Guests were able to watch a live basketball game being broadcast from Indianapolis, Indiana, choosing from multiple points of view as the action moved up and down the court. Intel “will be among the first technology providers to enable the live sports experience on multiple VR devices,” the company stated.
After taking a 3D scan of the room, Project Alloy can substitute virtual objects where physical objects stand.
From DSC: If viewers of a live basketball game can choose from multiple points of view, why can’t remote learners do this as well with a face-to-face classroom that’s taking place at a university or college? Learning from the Living [Class] Room.
Data visualization, guided work instructions, remote expert — for use in a variety of industries: medical, aviation and aerospace, architecture and AEC, lean manufacturing, engineering, and construction.
The company said that it is teaming up with the likes of Dell, HP, Lenovo and Acer, which will release headsets based on the HoloLens technology. “These new head-mounted displays will be the first consumer offerings utilizing the Mixed Reality capabilities of Windows 10 Creators Update,” a Microsoft spokesperson said. Microsoft’s partner companies for taking the HoloLens technology forward include Dell, HP, Lenovo, Acer, and 3 Glasses. Headsets by these manufacturers will work the same way as the original HoloLens but carry the design and branding of their respective companies. While the HoloLens developer edition costs a whopping $2999 (approximately Rs 2,00,000), the third-party headsets will be priced starting $299 (approximately Rs 20,000).
Verto Studio 3D App Makes 3D Modeling on HoloLens Easy — from winbuzzer.com by Luke Jones The upcoming Verto Studio 3D application allows users to create 3D models and interact with them when wearing HoloLens. It is the first software of its kind for mixed reality.
Excerpt: How is The Immersive Experience Delivered?
Tethered Headset VR – The user can participate in a VR experience by using a computer with a tethered VR headset (also known as a Head Mounted Display – HMD) like Facebook’s Oculus Rift, PlayStation VR, or the HTC Vive. The user has the ability to move freely and interact in the VR environment while using a handheld controller to emulate VR hands. But, the user has a limited area in which to move about because they are tethered to a computer.
Non-Tethered Headset VR/AR – These devices are headsets and computers built into one system, so users are free of any cables limiting their movement. These devices use AR to deliver a 360° immersive experience. Much like with Oculus Rift and Vive, the user would be able to move around in the AR environment as well as interact and manipulate objects. A great example of this headset is Microsoft’s HoloLens, which delivers an AR experience to the user through just a headset.
Mobile Device Inserted into a Headgear – To experience VR, the user inserts their mobile device into a Google Cardboard, Samsung Gear 360°, or any other type of mobile device headgear, along with headphones if they choose. This form of VR doesn’t require the user to be tethered to a computer and most VR experiences can be 360° photos, videos, and interactive scenarios.
Mobile VR – The user can access VR without any type of headgear simply by using a mobile device and headphones (optional). They can still have many of the same experiences that they would through Google Cardboard or any other type of mobile device headgear. Although they don’t get the full immersion that they would with headgear, they would still be able to experience VR. Currently, this version of the VR experience seems to be the most popular because it only requires a mobile device. Apps like Pokémon Go and Snapchat’s animated selfie lens only require a mobile device and have a huge number of users.
Desktop VR – Using just a desktop computer, the user can access 360° photos and videos, as well as other VR and AR experiences, by using the trackpad or computer mouse to move their field of view and become immersed in the VR scenario.
New VR – Non-mobile and non-headset platforms like Leap Motion use depth sensors to create a VR image of one’s hands on a desktop computer; they emulate hand gestures in real time. This technology could be used for anything from teaching assembly in a manufacturing plant to learning a step-by-step process to medical training.
Goggles that are worn, while they are “Oh Myyy” awesome, will not be the final destination of VR/AR. We will want to engage and respond, without wearing a large device over our eyes. Pokémon Go was a good early predictor of how non-goggled experiences will soar.
Education will go virtual
Similar to VR for brand engagement, we’ve seen major potential for delivering hands-on training and distance education in a virtual environment. If VR can take a class on a tour of Mars, the current trickle of educational VR could turn into a flood in 2017.
Published on Dec 26, 2016
Top 10 Virtual Reality Predictions For 2017 In vTime. It’s been an amazing year for VR and AR. New VR and AR headsets, groundbreaking content and lots more. 2017 promises to be amazing as well. Here’s our top 10 virtual reality predictions for the coming year. Filmed in vTime with vCast. Sorry about the audio quality. We used mics on Rift and Vive, which are very good on other platforms. We’ve reported this to vTime.
Addendums
5 top Virtual Reality and Augmented Reality technology trends for 2017 — from marxentlabs.com by Joe Bardi Excerpt:
So what’s in store for Virtual Reality and Augmented Reality in 2017? We asked Marxent’s talented team of computer vision experts, 3D artists and engineers to help us suss out what the year ahead will hold. Here are their predictions for the top Virtual Reality and Augmented Reality technology trends for 2017.
A hybrid of both AR & VR, Mixed Reality (MR) is far more advanced than Virtual Reality because it combines the use of several types of technologies including sensors, advanced optics and next gen computing power. All of this technology bundled into a single device will provide the user with the capability to overlay augmented holographic digital content into your real-time space, creating scenarios that are unbelievably realistic and mind-blowing.
How does it work?
Mixed Reality works by scanning your physical environment and creating a 3D map of your surroundings, so the device knows exactly where and how to place digital content into that space – realistically – while allowing you to interact with it using gestures. Unlike Virtual Reality, where the user is immersed in a totally different world, Mixed Reality experiences invite digital content into your real-time surroundings and allow you to interact with it.
Mixed reality use cases mentioned in the article included:
Don’t discount the game-changing power of the morphing “TV” when coupled with artificial intelligence (AI), natural language processing (NLP), and blockchain-based technologies!
When I saw the article below, I couldn’t help but wonder what (we currently know of as) “TVs” will morph into and what functionalities they will be able to provide to us in the not-too-distant future…?
For example, the article mentions that Seiki, Westinghouse, and Element will be offering TVs that can not only access Alexa — a personal assistant from Amazon which uses artificial intelligence — but will also be able to provide access to over 7,000 apps and games via the Amazon Fire TV Store.
Some of the questions that come to my mind:
Why can’t there be more educationally-related games and apps available on this type of platform?
Why can’t the results of the assessments taken on these apps get fed into cloud-based learner profiles that capture one’s lifelong learning? (#blockchain)
When will potential employers start asking for access to such web-based learner profiles?
Will tvOS and similar operating systems expand to provide blockchain-based technologies as well as the types of functionality we get from our current set of CMSs/LMSs?
Will this type of setup become a major outlet for competency-based education as well as for corporate training-related programs?
Will augmented reality (AR), virtual reality (VR), and mixed reality (MR) capabilities come with our near future “TVs”?
Will virtual tutoring be one of the available apps/channels?
Will the microphone and the wide-angle HD camera on the “TV” be able to be disconnected from the Internet for security reasons? (i.e., to be sure no hacker is eavesdropping on people’s private lives)
The TVs will not only have access to Alexa via a microphone-equipped remote but, more importantly, will have access to the over 7,000 apps and games available on the Amazon Fire TV Store – a huge boon considering that most of these Smart TVs usually include, at max, a few dozen apps.
“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.
EdSurge profiles the growth of massive online open courses in 2016, which attracted more than 58 million students in over 700 colleges and universities last year.
The top three MOOC providers — Coursera, Udacity and EdX — collectively grossed more than $100 million last year, as much of the content provided on these platforms shifted from free to paywall-guarded materials.
Many MOOCs have moved to offering credentialing programs or nanodegree offerings to increase their value in industrial marketplaces.
Alexa, Tell Me Where You’re Going Next— from backchannel.com by Steven Levy Amazon’s VP of Alexa talks about machine learning, chatbots, and whether industry is strip-mining AI talent from academia.
Excerpt:
Today Prasad is giving an Alexa “State of the Union” address at the Amazon Web Services conference in Las Vegas, announcing an improved version of the Alexa Skills Kit, which helps developers create the equivalent of apps for the platform; a beefed-up Alexa Voice Service, which will make it easier to transform third-party devices like refrigerators and cars into Alexa bots; a partnership with Intel; and the Alexa Accelerator that, with the startup incubator Techstars, will run a 13-week program to help newcomers build Alexa skills. Prasad and Amazon haven’t revealed sales numbers, but industry experts have estimated that Amazon has sold over five million Echo devices so far.
Prasad, who joined Amazon in 2013, spent some time with Backchannel before his talk today to illuminate the direction of Alexa and discuss how he’s recruiting for Jeff Bezos’s arsenal without drying up the AI pipeline.
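For context on what the Alexa Skills Kit mentioned above asks of a developer: a custom skill is essentially a web service (often an AWS Lambda function) that receives an intent request and returns a small JSON response. Here is a minimal sketch of such a handler, using the documented response envelope; the intent name and phrasing are hypothetical, as real skills define their own interaction model.

```python
def handle_request(event: dict) -> dict:
    """Minimal Alexa custom-skill handler: map an intent to a spoken reply."""
    intent = (
        event.get("request", {})
             .get("intent", {})
             .get("name", "AMAZON.HelpIntent")
    )
    # "GetGreetingIntent" is a made-up intent for illustration.
    if intent == "GetGreetingIntent":
        speech = "Hello from a sample skill."
    else:
        speech = "Try asking for a greeting."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

# Simulated incoming IntentRequest, shaped like what the Alexa service sends.
event = {"request": {"type": "IntentRequest",
                     "intent": {"name": "GetGreetingIntent"}}}
print(handle_request(event)["response"]["outputSpeech"]["text"])
```

Alexa speaks the `outputSpeech` text back to the user, which is what makes voice, rather than a keyboard or screen, the interface to these third-party devices.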
What DeepMind brings to Alphabet — from economist.com The AI firm’s main value to Alphabet is as a new kind of algorithm factory
Excerpt:
DeepMind’s horizons stretch far beyond talent capture and public attention, however. Demis Hassabis, its CEO and one of its co-founders, describes the company as a new kind of research organisation, combining the long-term outlook of academia with “the energy and focus of a technology startup”—to say nothing of Alphabet’s cash.
…
Were he to succeed in creating a general-purpose AI, that would obviously be enormously valuable to Alphabet. It would in effect give the firm a digital employee that could be copied over and over again in service of multiple problems. Yet DeepMind’s research agenda is not—or not yet—the same thing as a business model. And its time frames are extremely long.
Silicon Valley needs its next big thing, a focus for the concentrated brain power and innovation infrastructure that have made this region the world leader in transformative technology. Just as the valley’s mobile era is peaking, the next frontier of growth and innovation has arrived: It’s Siri in an Apple iPhone, Alexa in an Amazon Echo, the software brain in Google’s self-driving cars, Amazon’s product recommendations and, someday, maybe the robot surgeon that saves your life.
It’s artificial intelligence, software that can “learn” and “think,” the latest revolution in tech.
“It’s going to be embedded in everything,” said startup guru Steve Blank, an adjunct professor at Stanford. “We’ve been talking about artificial intelligence for 30 years, maybe longer, in Silicon Valley. It’s only in the last five years, or maybe even the last two years, that this stuff has become useful.”
Artificial Intelligence (AI) and Machine Learning (ML) are two very hot buzzwords right now, and often seem to be used interchangeably. They are not quite the same thing, but the perception that they are can sometimes lead to some confusion. So I thought it would be worth writing a piece to explain the difference.
…
In short, the best answer is that:
Artificial Intelligence is the broader concept of machines being able to carry out tasks in a way that we would consider “smart”.
And,
Machine Learning is a current application of AI based around the idea that we should really just be able to give machines access to data and let them learn for themselves.
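That second definition can be made concrete in a few lines. A minimal sketch (pure Python, no ML library): a nearest-neighbour classifier that is never given an explicit rule, only labelled examples, and generalises to new inputs from the data alone.

```python
def nearest_neighbor(train, query):
    """Classify query by the label of the closest training example (1-NN)."""
    def dist(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # No hand-written rule: the "model" is just the labelled data itself.
    return min(train, key=lambda ex: dist(ex[0], query))[1]

# Labelled examples: (features, label). The (height, weight) numbers here
# are purely illustrative.
train = [((150, 50), "small"), ((155, 55), "small"),
         ((180, 90), "large"), ((185, 95), "large")]

print(nearest_neighbor(train, (152, 52)))  # small
print(nearest_neighbor(train, (183, 92)))  # large
```

Nothing in the code says what “small” or “large” means; the program behaves sensibly on unseen inputs only because it was given data to learn from, which is exactly the distinction Marr is drawing.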
Yet, the truth is, we are far from achieving true AI — something that is as reactive, dynamic, self-improving and powerful as human intelligence.
…
Full AI, or superintelligence, should possess the full range of human cognitive abilities. This includes self-awareness, sentience and consciousness, as these are all features of human cognition.
Udacity is positioned perfectly to benefit from the rush on talent in a number of growing areas of interest among tech companies and startups. The online education platform has added 14 new hiring partners across its Artificial Intelligence Engineer, Self-Driving Car Engineer and Virtual Reality Developer Nanodegree programs, as well as in its Predictive Analytics Nanodegree, including standouts like Bosch, Harman, Slack, Intel, Amazon Alexa and Samsung.
That brings the total number of hiring partners for Udacity to over 30, which means a lot of potential soft landings for graduates of its nanodegree programs. The nanodegree offered by Udacity is its own original form of accreditation, which is based on a truncated field of study that spans months, rather than years, and allows students to direct the pace of their own learning. It also all takes place online, so students can potentially learn from anywhere.
The Great A.I. Awakening — from nytimes.com by Gideon Lewis-Kraus How Google used artificial intelligence to transform Google Translate, one of its more popular services — and how machine learning is poised to reinvent computing itself.
Excerpt:
Google’s decision to reorganize itself around A.I. was the first major manifestation of what has become an industrywide machine-learning delirium. Over the past four years, six companies in particular — Google, Facebook, Apple, Amazon, Microsoft and the Chinese firm Baidu — have touched off an arms race for A.I. talent, particularly within universities. Corporate promises of resources and freedom have thinned out top academic departments. It has become widely known in Silicon Valley that Mark Zuckerberg, chief executive of Facebook, personally oversees, with phone calls and video-chat blandishments, his company’s overtures to the most desirable graduate students. Starting salaries of seven figures are not unheard-of. Attendance at the field’s most important academic conference has nearly quadrupled. What is at stake is not just one more piecemeal innovation but control over what very well could represent an entirely new computational platform: pervasive, ambient artificial intelligence.
On [December 12th, 2016], Microsoft announced a new Microsoft Ventures fund dedicated to artificial intelligence (AI) investments, according to TechCrunch. The fund, part of the company’s investment arm that launched in May, will back startups developing AI technology and includes Element AI, a Montreal-based incubator that helps other companies embrace AI. The fund further supports Microsoft’s focus on AI. The company has been steadily announcing major initiatives in support of the technology. For example, in September, it announced a major restructuring and formed a new group dedicated to AI products. And in mid-November, it partnered with OpenAI, an AI research nonprofit backed by Elon Musk, to further its AI research and development efforts.
Whether Artificial Intelligence (AI) is something you’ve just come across or it’s something you’ve been monitoring for a while, there’s no denying that it’s starting to influence many industries. And one place that it’s really starting to change things is e-commerce. Below you’ll find some interesting stats and facts about how AI is growing in e-commerce and how it’s changing the way we do things. From personalizing the shopping experience for customers to creating personal buying assistants, AI is something retailers can’t ignore. We’ll also take a look at some examples of how leading online stores have used AI to enrich the customer buying experience.
Only 26 percent of computer professionals were women in 2013, according to a recent review by the American Association of University Women. That figure has dropped 9 percentage points since 1990.
Explanations abound. Some say the industry is masculine by design. Others claim computer culture is unwelcoming — even hostile — to women. So, while STEM fields like biology, chemistry, and engineering see an increase in diversity, computing does not. Regardless, it’s a serious problem.
Artificial intelligence is still in its infancy, but it’s poised to become the most disruptive technology since the Internet. AI will be everywhere — in your phone, in your fridge, in your Ford. Intelligent algorithms already track your online activity, find your face in Facebook photos, and help you with your finances. Within the next few decades they’ll completely control your car and monitor your heart health. An AI may one day even be your favorite artist.
The programs written today will inform the systems built tomorrow. And if designers all have one worldview, we can expect equally narrow-minded machines.
From DSC: Recently, my neighbor graciously gave us his old Honda snowblower, as he was getting a new one. He wondered if we had a use for it. As I’m definitely not getting any younger and I’m not Howard Hughes, I said, “Sure thing! That would be great — it would save my back big time! Thank you!” (Though the image below is not mine, it might as well be…as both are quite old now.)
Anyway…when I recently ran out of gas, I would have loved to take out my iPhone, hold it up to the snowblower, and have an app tell me whether this particular Honda model takes a mixture of gas and oil or has a separate container for the oil. (It wasn’t immediately clear where the oil would go in, so I’m figuring it’s a mix.)
But what I would have liked to have happen was:
I launched an app on my iPhone that featured machine learning-based capabilities
The app would have scanned the snowblower and identified which make/model it was and proceeded to tell me whether it needed a gas/oil mix (or not)
If there was a separate place to pour in the oil, the app would have asked me if I wanted to learn how to put oil in the snowblower. Upon me saying yes, it would then have proceeded to display an augmented reality-based training video — showing me where the oil was to be put in and what type of oil to use (links to local providers would also come in handy…offering nice revenue streams for advertisers and suppliers alike).
So several technologies would have to be involved here…but those techs are already here. We just need to pull them together in order to provide this type of useful functionality!
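The scenario above really comes down to two pieces: a vision model that recognizes the machine’s make/model, and a lookup that maps that model to its fuel requirements. The second piece is trivially simple once the first exists — a minimal sketch, assuming a recognizer has already produced a model label. The model names and fuel specs below are illustrative placeholders, not verified Honda data:

```python
# Hypothetical sketch: map a recognized snowblower model to fueling guidance.
# The labels and specs here are made-up placeholders; in a real app the
# model_label would come from an on-device image-classification model.

FUEL_SPECS = {
    "HS520": {"fuel": "unleaded gasoline", "separate_oil_reservoir": True},
    "HS35":  {"fuel": "gas/oil mix", "separate_oil_reservoir": False},
}

def fuel_guidance(model_label: str) -> str:
    """Return a human-readable fueling instruction for a recognized model."""
    spec = FUEL_SPECS.get(model_label)
    if spec is None:
        return "Model not recognized -- please check the owner's manual."
    if spec["separate_oil_reservoir"]:
        return f"Use {spec['fuel']}; oil goes in a separate reservoir."
    return f"This model takes a {spec['fuel']}."

print(fuel_guidance("HS35"))
```

The hard part, of course, is the recognizer and the AR training overlay, but the point is that the plumbing between recognition and guidance is just data that manufacturers already publish in their manuals.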
“Every child is a genius in his or her own way. VR can be the key to awakening the genius inside.”
This is the closing line of a new research study currently making its way out of China. Conducted by Beijing Bluefocus E-Commerce Co., Ltd and Beijing iBokan Wisdom Mobile Internet Technology Training Institution, the study takes a detailed look at the different ways virtual reality can make public education more effective.
“Compared with traditional education, VR-based education is of obvious advantage in theoretical knowledge teaching as well as practical skills training. In theoretical knowledge teaching, it boasts the ability to make abstract problems concrete, and theoretical thinking well-supported. In practical skills training, it helps sharpen students’ operational skills, provides an immersive learning experience, and enhances students’ sense of involvement in class, making learning more fun, more secure, and more active,” the study states.
CALIFORNIA — Acer Starbreeze, Google, HTC VIVE, Facebook’s Oculus, Samsung, and Sony Interactive Entertainment [on 12/7/16] announced the creation of a non-profit organization of international headset manufacturers to promote the growth of the global virtual reality (VR) industry. The Global Virtual Reality Association (GVRA) will develop and share best practices for industry and foster dialogue between public and private stakeholders around the world.
The goal of the Global Virtual Reality Association is to promote responsible development and adoption of VR globally. The association’s members will develop and share best practices, conduct research, and bring the international VR community together as the technology progresses. The group will also serve as a resource for consumers, policymakers, and industry interested in VR.
VR has the potential to be the next great computing platform, improving sectors ranging from education to healthcare, and contribute significantly to the global economy. Through research, international engagement, and the development of best practices, the founding companies of the Global Virtual Reality Association will work to unlock and maximize VR’s potential and ensure those gains are shared as broadly around the world as possible.
Occipital announced today that it is launching a mixed reality platform built upon its depth-sensing technologies called Bridge. The headset is available for $399 and starts shipping in March; eager developers can get their hands on an Explorer Edition for $499, which starts shipping next week.
From DSC: While I hope that early innovators in the AR/VR/MR space thrive, I do wonder what will happen if and when Apple puts out their rendition/version of a new form of Human Computer Interaction (or forms) — such as integrating AR-capabilities directly into their next iPhone.
Enterprise augmented reality applications ready for prime time — from internetofthingsagenda.techtarget.com by Beth Stackpole
Pokémon Go may have put AR on the map, but the technology is now being leveraged for enterprise applications in areas like marketing, maintenance and field service.
Excerpt:
Unlike virtual reality, which creates an immersive, computer-generated environment, the less familiar augmented reality, or AR, technology superimposes computer-generated images and overlays information on a user’s real-world view. This computer-generated sensory data — which could include elements such as sound, graphics, GPS data, video or 3D models — bridges the digital and physical worlds. For an enterprise, the applications are boundless, arming workers walking the warehouse or selling on the shop floor, for example, with essential information that can improve productivity, streamline customer interactions and deliver optimized maintenance in the field.
2016 is fast drawing to a close. And while many will be glad to see the back of it, for those of us who work and play with Virtual Reality, it has been a most exciting year.
By the time the bells ring out signalling the start of a new year, the total number of VR users will exceed 43 million. This is a market on the move, projected to be worth $30bn by 2020. If it’s to meet that valuation, then we believe 2017 will be an incredibly important year in the lifecycle of VR hardware and software development.
VR will be enjoyed by an increasingly mainstream audience very soon, and here we take a quick look at some of the trends we expect to develop over the next 12 months for that to happen.
In an Australian first, education students will be able to hone their skills without setting foot in a classroom. Murdoch University has hosted a pilot trial of TeachLivE, a virtual reality environment for teachers in training.
The student avatars are able to disrupt the class in a range of ways that teachers may encounter such as pulling out mobile phones or losing their pen during class.
8 Cutting Edge Virtual Reality Job Opportunities — from appreal-vr.com by Yariv Levski
Today we’re highlighting the top 8 job opportunities in VR to give you a current scope of the Virtual Reality job market.
The Epson Moverio BT-300, to give the smart glasses their full name, are wearable technology – lightweight, comfortable see-through glasses – that allow you to see digital data, and have a first person view (FPV) experience: all while seeing the real world at the same time. The applications are almost endless.
Volkswagen’s pivot away from diesel cars to electric vehicles is still a work in progress, but some details about its coming I.D. electric car — unveiled in Paris earlier this year — are starting to come to light. Much of the news is about an innovative augmented reality heads-up display Volkswagen plans to offer in its electric vehicles. Klaus Bischoff, head of the VW brand, says the I.D. electric car will completely reinvent vehicle instrumentation systems when it is launched at the end of the decade.
For decades, numerous research centers and academics around the world have been working on the potential of virtual reality technology. The countless research projects undertaken in these centers are an important indicator that everything from health care to real estate could experience disruption within a few years.
…
Virtual Human Interaction Lab — Stanford University
Virtual Reality Applications Center — Iowa State University
Institute for Creative Technologies — USC
Medical Virtual Reality — USC
The Imaging Media Research Center — Korea Institute of Science and Technology
Virtual Reality & Immersive Visualization Group — RWTH Aachen University
Center For Simulations & Virtual Environments Research — UCIT
Duke immersive Virtual Environment — Duke University
Experimental Virtual Environments (EVENT) Lab for Neuroscience and Technology — Barcelona University
Immersive Media Technology Experiences (IMTE) — Norwegian University of Technology
Human Interface Technology Laboratory — University of Washington
Augmented Reality (AR) dwelled quietly in the shadow of VR until earlier this year, when a certain app propelled it into the mainstream. Now, AR is a household term and can hold its own with advanced virtual technologies. The AR industry is predicted to hit global revenues of $90 billion by 2020, not just matching VR but overtaking it by a large margin. Of course, a lot of this turnover will be generated by applications in the entertainment industry. VR was primarily created by gamers for gamers, but AR began as a visionary idea that would change the way humanity interacts with the world around it. The first applications of augmented reality were actually geared towards improving human performance in the workplace… But there’s far, far more to be explored.
I stood at the peak of Mount Rainier, the tallest mountain in Washington state. The sounds of wind whipped past my ears, and mountains and valleys filled a seemingly endless horizon in every direction. I’d never seen anything like it—until I grabbed the sun.
Using my HTC Vive virtual reality wand, I reached into the heavens in order to spin the Earth along its normal rotational axis, until I set the horizon on fire with a sunset. I breathed deeply at the sight, then spun our planet just a little more, until I filled the sky with a heaping helping of the Milky Way Galaxy.
Virtual reality has exposed me to some pretty incredible experiences, but I’ve grown ever so jaded in the past few years of testing consumer-grade headsets. Google Earth VR, however, has dropped my jaw anew. This, more than any other game or app for SteamVR’s “room scale” system, makes me want to call every friend and loved one I know and tell them to come over, put on a headset, and warp anywhere on Earth that they please.
In VR architecture, the difference between real and unreal is fluid and, to a large extent, unimportant. What is important, and potentially revolutionary, is VR’s ability to draw designers and their clients into a visceral world of dimension, scale, and feeling, removing the unfortunate schism between a built environment that exists in three dimensions and a visualization of it that has until now existed in two.
Many of the VR projects in architecture focus on the final stages of the design process, basically selling a house to a client. Thomas sees the real potential in the early stages, when the main decisions need to be made. VR is well suited to this, as it helps non-professionals understand and grasp the concepts of architecture very intuitively. And this is mostly what we talked about.
A proposed benefit of virtual reality is that it could one day eliminate the need to move our fleshy bodies around the world for business meetings and work engagements. Instead, we’ll be meeting up with colleagues and associates in virtual spaces. While this would be great news for the environment and business people sick of airports, it would be troubling news for airlines.
Imagine during one of your future trials that jurors in your courtroom are provided with virtual reality headsets, which allow them to view the accident site or crime scene digitally and walk around or be guided through a 3D world to examine vital details of the scene.
How can such an evidentiary presentation be accomplished? A system is being developed whereby investigators use a robot system inspired by NASA’s Curiosity Mars rover, using 3D imaging and panoramic videography equipment to record virtual reality video of the scene. The captured 360° immersive video and photographs of the scene would allow recreation of a VR experience with video and pictures of the original scene from every angle. Admissibility of this evidence would require a showing that the VR simulation fairly and accurately depicts what it represents. If a judge permits presentation of the evidence after its accuracy is established, jurors receiving the evidence could turn their heads and view various aspects of the scene by looking up, down, and around, and zooming in and out.
Unlike an animation or edited video initially created to demonstrate one party’s point of view, the purpose of this type of evidence would be to gather data and objectively preserve the scene without staging or tampering. Even further, this approach would allow investigators to revisit scenes as they existed during the initial forensic examination and give jurors a vivid rendition of the site as it existed when the events occurred.
The theme running throughout most of this year’s WinHEC keynote in Shenzhen, China was mixed reality. Microsoft’s Alex Kipman continues to be a great spokesperson and evangelist for the new medium, and it is apparent that Microsoft is going in deep, if not all in, on this version of the future. I, for one, as a mixed reality or bust developer, am very glad to see it.
As part of the presentation, Microsoft presented a video (see below) that shows the various forms of mixed reality. The video starts with a few virtual objects in the room with a person, transitions into the same room with a virtual person, then becomes a full virtual reality experience with Windows Holographic.