Don’t discount the game-changing power of the morphing “TV” when coupled with artificial intelligence (AI), natural language processing (NLP), and blockchain-based technologies!
When I saw the article below, I couldn’t help but wonder what (we currently know of as) “TVs” will morph into, and what functionalities they will be able to provide to us in the not-too-distant future.
For example, the article mentions that Seiki, Westinghouse, and Element will be offering TVs that can not only access Alexa — a personal assistant from Amazon which uses artificial intelligence — but will also be able to provide access to over 7,000 apps and games via the Amazon Fire TV Store.
Some of the questions that come to my mind:
Why can’t there be more education-related games and apps available on this type of platform?
Why can’t the results of the assessments taken on these apps get fed into cloud-based learner profiles that capture one’s lifelong learning? (#blockchain)
When will potential employers start asking for access to such web-based learner profiles?
Will tvOS and similar operating systems expand to provide blockchain-based technologies as well as the types of functionality we get from our current set of CMSs/LMSs?
Will this type of setup become a major outlet for competency-based education as well as for corporate training-related programs?
Will augmented reality (AR), virtual reality (VR), and mixed reality (MR) capabilities come with our near future “TVs”?
Will virtual tutoring be one of the available apps/channels?
Will the microphone and the wide-angle, HD camera on the “TV” be able to be disconnected from the Internet for security reasons (i.e., to make sure no hacker is eavesdropping on our private lives)?
The TVs will not only have access to Alexa via a microphone-equipped remote but, more importantly, will have access to the over 7,000 apps and games available on the Amazon Fire TV Store – a huge boon considering that most of these Smart TVs usually include, at max, a few dozen apps.
“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.
EdSurge profiles the growth of massive open online courses (MOOCs) in 2016, which attracted more than 58 million students at over 700 colleges and universities last year.
The top three MOOC providers — Coursera, Udacity and EdX — collectively grossed more than $100 million last year, as much of the content provided on these platforms shifted from free to paywall guarded materials.
Many MOOCs have moved to offering credentialing programs or nanodegree offerings to increase their value in industrial marketplaces.
Alexa, Tell Me Where You’re Going Next — from backchannel.com by Steven Levy
Amazon’s VP of Alexa talks about machine learning, chatbots, and whether industry is strip-mining AI talent from academia.
Excerpt:
Today Prasad is giving an Alexa “State of the Union” address at the Amazon Web Services conference in Las Vegas, announcing an improved version of the Alexa Skills Kit, which helps developers create the equivalent of apps for the platform; a beefed-up Alexa Voice Service, which will make it easier to transform third-party devices like refrigerators and cars into Alexa bots; a partnership with Intel; and the Alexa Accelerator that, with the startup incubator Techstars, will run a 13-week program to help newcomers build Alexa skills. Prasad and Amazon haven’t revealed sales numbers, but industry experts have estimated that Amazon has sold over five million Echo devices so far.
Prasad, who joined Amazon in 2013, spent some time with Backchannel before his talk today to illuminate the direction of Alexa and discuss how he’s recruiting for Jeff Bezos’s arsenal without drying up the AI pipeline.
What DeepMind brings to Alphabet — from economist.com
The AI firm’s main value to Alphabet is as a new kind of algorithm factory
Excerpt:
DeepMind’s horizons stretch far beyond talent capture and public attention, however. Demis Hassabis, its CEO and one of its co-founders, describes the company as a new kind of research organisation, combining the long-term outlook of academia with “the energy and focus of a technology startup”—to say nothing of Alphabet’s cash.
…
Were he to succeed in creating a general-purpose AI, that would obviously be enormously valuable to Alphabet. It would in effect give the firm a digital employee that could be copied over and over again in service of multiple problems. Yet DeepMind’s research agenda is not—or not yet—the same thing as a business model. And its time frames are extremely long.
Silicon Valley needs its next big thing, a focus for the concentrated brain power and innovation infrastructure that have made this region the world leader in transformative technology. Just as the valley’s mobile era is peaking, the next frontier of growth and innovation has arrived: It’s Siri in an Apple iPhone, Alexa in an Amazon Echo, the software brain in Google’s self-driving cars, Amazon’s product recommendations and, someday, maybe the robot surgeon that saves your life.
It’s artificial intelligence, software that can “learn” and “think,” the latest revolution in tech.
“It’s going to be embedded in everything,” said startup guru Steve Blank, an adjunct professor at Stanford. “We’ve been talking about artificial intelligence for 30 years, maybe longer, in Silicon Valley. It’s only in the last five years, or maybe even the last two years, that this stuff has become useful.”
Artificial Intelligence (AI) and Machine Learning (ML) are two very hot buzzwords right now, and often seem to be used interchangeably. They are not quite the same thing, but the perception that they are can sometimes lead to some confusion. So I thought it would be worth writing a piece to explain the difference.
…
In short, the best answer is that:
Artificial Intelligence is the broader concept of machines being able to carry out tasks in a way that we would consider “smart”.
And,
Machine Learning is a current application of AI based around the idea that we should really just be able to give machines access to data and let them learn for themselves.
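To make that definition concrete, here is a minimal sketch of my own (plain Python, no ML library, and deliberately simplified): rather than hard-coding the rule y = 2x, we give the program only example data and let it estimate the rule for itself.

```python
# A toy illustration of "give machines access to data and let them learn":
# the rule y = 2x is never written into the program -- it is estimated
# from (x, y) examples via a one-parameter least-squares fit.

def learn_slope(examples):
    """Estimate the slope w in y ~= w * x from (x, y) pairs."""
    num = sum(x * y for x, y in examples)
    den = sum(x * x for x, _ in examples)
    return num / den

# Training data sampled from the hidden rule y = 2x.
data = [(1, 2), (2, 4), (3, 6), (4, 8)]

w = learn_slope(data)
print(w)       # the learned rule: w = 2.0
print(w * 10)  # prediction for an unseen input, x = 10
```

Real machine learning systems fit millions of parameters to far messier data, but the principle is the same: the behavior comes from the data, not from a hand-written rule.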
Yet, the truth is, we are far from achieving true AI — something that is as reactive, dynamic, self-improving and powerful as human intelligence.
…
Full AI, or superintelligence, should possess the full range of human cognitive abilities. This includes self-awareness, sentience and consciousness, as these are all features of human cognition.
Udacity is positioned perfectly to benefit from the rush for talent in a number of growing areas of interest among tech companies and startups. The online education platform has added 14 new hiring partners across its Artificial Intelligence Engineer, Self-Driving Car Engineer and Virtual Reality Developer Nanodegree programs, as well as in its Predictive Analytics Nanodegree, including standouts like Bosch, Harman, Slack, Intel, Amazon Alexa and Samsung.
That brings the total number of hiring partners for Udacity to over 30, which means a lot of potential soft landings for graduates of its nanodegree programs. The nanodegree offered by Udacity is its own original form of accreditation, which is based on a truncated field of study that spans months, rather than years, and allows students to direct the pace of their own learning. It also all takes place online, so students can potentially learn from anywhere.
The Great A.I. Awakening — from nytimes.com by Gideon Lewis-Kraus
How Google used artificial intelligence to transform Google Translate, one of its more popular services — and how machine learning is poised to reinvent computing itself.
Excerpt:
Google’s decision to reorganize itself around A.I. was the first major manifestation of what has become an industrywide machine-learning delirium. Over the past four years, six companies in particular — Google, Facebook, Apple, Amazon, Microsoft and the Chinese firm Baidu — have touched off an arms race for A.I. talent, particularly within universities. Corporate promises of resources and freedom have thinned out top academic departments. It has become widely known in Silicon Valley that Mark Zuckerberg, chief executive of Facebook, personally oversees, with phone calls and video-chat blandishments, his company’s overtures to the most desirable graduate students. Starting salaries of seven figures are not unheard-of. Attendance at the field’s most important academic conference has nearly quadrupled. What is at stake is not just one more piecemeal innovation but control over what very well could represent an entirely new computational platform: pervasive, ambient artificial intelligence.
On [December 12th, 2016], Microsoft announced a new Microsoft Ventures fund dedicated to artificial intelligence (AI) investments, according to TechCrunch. The fund, part of the company’s investment arm that launched in May, will back startups developing AI technology and includes Element AI, a Montreal-based incubator that helps other companies embrace AI. The fund further supports Microsoft’s focus on AI. The company has been steadily announcing major initiatives in support of the technology. For example, in September, it announced a major restructuring and formed a new group dedicated to AI products. And in mid-November, it partnered with OpenAI, an AI research nonprofit backed by Elon Musk, to further its AI research and development efforts.
Whether Artificial Intelligence (AI) is something you’ve just come across or it’s something you’ve been monitoring for a while, there’s no denying that it’s starting to influence many industries. And one place that it’s really starting to change things is e-commerce. Below you’ll find some interesting stats and facts about how AI is growing in e-commerce and how it’s changing the way we do things. From personalizing the shopping experience for customers to creating personal buying assistants, AI is something retailers can’t ignore. We’ll also take a look at some examples of how leading online stores have used AI to enrich the customer buying experience.
Only 26 percent of computer professionals were women in 2013, according to a recent review by the American Association of University Women. That figure has dropped 9 percentage points since 1990.
Explanations abound. Some say the industry is masculine by design. Others claim computer culture is unwelcoming — even hostile — to women. So, while STEM fields like biology, chemistry, and engineering see an increase in diversity, computing does not. Regardless, it’s a serious problem.
Artificial intelligence is still in its infancy, but it’s poised to become the most disruptive technology since the Internet. AI will be everywhere — in your phone, in your fridge, in your Ford. Intelligent algorithms already track your online activity, find your face in Facebook photos, and help you with your finances. Within the next few decades they’ll completely control your car and monitor your heart health. An AI may one day even be your favorite artist.
The programs written today will inform the systems built tomorrow. And if designers all have one worldview, we can expect equally narrow-minded machines.
From DSC: Recently, my neighbor graciously gave us his old Honda snowblower, as he was getting a new one. He wondered if we had a use for it. As I’m definitely not getting any younger and I’m not Howard Hughes, I said, “Sure thing! That would be great — it would save my back big time! Thank you!” (Though the image below is not mine, it might as well be…as both are quite old now.)
Anyway…when I recently ran out of gas, I would have loved to be able to take out my iPhone, hold it up to this particular Honda snowblower, and have an app tell me whether it takes a mixture of gas and oil or has a separate container for the oil. (It wasn’t immediately clear where to put the oil in, so I’m figuring it’s a mix.)
But what I would have liked to have happen was:
I launched an app on my iPhone that featured machine learning-based capabilities
The app would have scanned the snowblower and identified which make/model it was and proceeded to tell me whether it needed a gas/oil mix (or not)
If there was a separate place to pour in the oil, the app would have asked me if I wanted to learn how to put oil in the snowblower. If I said yes, it would then have proceeded to display an augmented reality-based training video — showing me where the oil was to be put in and what type of oil to use (links to local providers would also come in handy…offering nice revenue streams for advertisers and suppliers alike).
So several technologies would have to be involved here…but those techs are already here. We just need to pull them together in order to provide this type of useful functionality!
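The steps above can be sketched in code. Everything here is hypothetical for illustration — the model names, the equipment database, and the recognize() stub are all assumptions; a real app would use an on-device image classifier and a manufacturer-maintained lookup service.

```python
# Hypothetical sketch of the snowblower-helper flow described above.
# The model ids, fuel info, and AR links below are invented examples.

EQUIPMENT_DB = {
    # model id -> fuel requirement and optional AR how-to link
    "honda-model-a": {"fuel": "straight gas", "oil_howto": "ar://model-a/oil-fill"},
    "honda-model-b": {"fuel": "gas/oil mix", "oil_howto": None},
}

def recognize(photo):
    """Stand-in for an ML image classifier mapping a photo to a model id."""
    return "honda-model-b"  # pretend the classifier recognized an older model

def advise(photo):
    model = recognize(photo)
    info = EQUIPMENT_DB[model]
    advice = f"{model}: takes {info['fuel']}"
    if info["oil_howto"]:
        advice += f"; AR walkthrough available at {info['oil_howto']}"
    return advice

print(advise("snowblower.jpg"))
```

The point is that each piece — classification, lookup, AR playback — already exists; the missing part is the glue between them.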
“Every child is a genius in his or her own way. VR can be the key to awakening the genius inside.”
This is the closing line of a new research study currently making its way out of China. Conducted by Beijing Bluefocus E-Commerce Co., Ltd and Beijing iBokan Wisdom Mobile Internet Technology Training Institution, the study takes a detailed look at the different ways virtual reality can make public education more effective.
“Compared with traditional education, VR-based education is of obvious advantage in theoretical knowledge teaching as well as practical skills training. In theoretical knowledge teaching, it boasts the ability to make abstract problems concrete, and theoretical thinking well-supported. In practical skills training, it helps sharpen students’ operational skills, provides an immersive learning experience, and enhances students’ sense of involvement in class, making learning more fun, more secure, and more active,” the study states.
CALIFORNIA — Acer Starbreeze, Google, HTC VIVE, Facebook’s Oculus, Samsung, and Sony Interactive Entertainment [on 12/7/16] announced the creation of a non-profit organization of international headset manufacturers to promote the growth of the global virtual reality (VR) industry. The Global Virtual Reality Association (GVRA) will develop and share best practices for industry and foster dialogue between public and private stakeholders around the world.
The goal of the Global Virtual Reality Association is to promote responsible development and adoption of VR globally. The association’s members will develop and share best practices, conduct research, and bring the international VR community together as the technology progresses. The group will also serve as a resource for consumers, policymakers, and industry interested in VR.
VR has the potential to be the next great computing platform, improving sectors ranging from education to healthcare, and contribute significantly to the global economy. Through research, international engagement, and the development of best practices, the founding companies of the Global Virtual Reality Association will work to unlock and maximize VR’s potential and ensure those gains are shared as broadly around the world as possible.
Occipital announced today that it is launching a mixed reality platform built upon its depth-sensing technologies called Bridge. The headset is available for $399 and starts shipping in March; eager developers can get their hands on an Explorer Edition for $499, which starts shipping next week.
From DSC: While I hope that early innovators in the AR/VR/MR space thrive, I do wonder what will happen if and when Apple puts out their rendition/version of a new form of Human Computer Interaction (or forms) — such as integrating AR-capabilities directly into their next iPhone.
Enterprise augmented reality applications ready for prime time — from internetofthingsagenda.techtarget.com by Beth Stackpole
Pokémon Go may have put AR on the map, but the technology is now being leveraged for enterprise applications in areas like marketing, maintenance and field service.
Excerpt:
Unlike virtual reality, which creates an immersive, computer-generated environment, the less familiar augmented reality, or AR, technology superimposes computer-generated images and overlays information on a user’s real-world view. This computer-generated sensory data — which could include elements such as sound, graphics, GPS data, video or 3D models — bridges the digital and physical worlds. For an enterprise, the applications are boundless, arming workers walking the warehouse or selling on the shop floor, for example, with essential information that can improve productivity, streamline customer interactions and deliver optimized maintenance in the field.
2016 is fast drawing to a close. And while many will be glad to see the back of it, for those of us who work and play with Virtual Reality, it has been a most exciting year.
By the time the bells ring out signalling the start of a new year, the total number of VR users will exceed 43 million. This is a market on the move, projected to be worth $30bn by 2020. If it’s to meet that valuation, then we believe 2017 will be an incredibly important year in the lifecycle of VR hardware and software development.
VR will be enjoyed by an increasingly mainstream audience very soon, and here we take a quick look at some of the trends we expect to develop over the next 12 months for that to happen.
In an Australian first, education students will be able to hone their skills without setting foot in a classroom. Murdoch University has hosted a pilot trial of TeachLivE, a virtual reality environment for teachers in training.
The student avatars are able to disrupt the class in a range of ways that teachers may encounter such as pulling out mobile phones or losing their pen during class.
8 Cutting Edge Virtual Reality Job Opportunities — from appreal-vr.com by Yariv Levski
Today we’re highlighting the top 8 job opportunities in VR to give you a current scope of the Virtual Reality job market.
The Epson Moverio BT-300, to give the smart glasses their full name, are wearable technology – lightweight, comfortable see-through glasses – that allow you to see digital data, and have a first person view (FPV) experience: all while seeing the real world at the same time. The applications are almost endless.
Volkswagen’s pivot away from diesel cars to electric vehicles is still a work in progress, but some details about its coming I.D. electric car — unveiled in Paris earlier this year — are starting to come to light. Much of the news is about an innovative augmented reality heads-up display Volkswagen plans to offer in its electric vehicles. Klaus Bischoff, head of the VW brand, says the I.D. electric car will completely reinvent vehicle instrumentation systems when it is launched at the end of the decade.
For decades, numerous research centers and academics around the world have been working on the potential of virtual reality technology. The countless research projects undertaken in these centers are an important indicator that everything from health care to real estate could experience disruption within a few years.
…
Virtual Human Interaction Lab — Stanford University
Virtual Reality Applications Center — Iowa State University
Institute for Creative Technologies — USC
Medical Virtual Reality — USC
The Imaging Media Research Center — Korea Institute of Science and Technology
Virtual Reality & Immersive Visualization Group — RWTH Aachen University
Center For Simulations & Virtual Environments Research — UCIT
Duke immersive Virtual Environment — Duke University
Experimental Virtual Environments (EVENT) Lab for Neuroscience and Technology — Barcelona University
Immersive Media Technology Experiences (IMTE) — Norwegian University of Technology
Human Interface Technology Laboratory — University of Washington
Augmented Reality (AR) dwelled quietly in the shadow of VR until earlier this year, when a certain app propelled it into the mainstream. Now, AR is a household term and can hold its own with advanced virtual technologies. The AR industry is predicted to hit global revenues of $90 billion by 2020, not just matching VR but overtaking it by a large margin. Of course, a lot of this turnover will be generated by applications in the entertainment industry. VR was primarily created by gamers for gamers, but AR began as a visionary idea that would change the way that humanity interacted with the world around them. The first applications of augmented reality were actually geared towards improving human performance in the workplace… But there’s far, far more to be explored.
I stood at the peak of Mount Rainier, the tallest mountain in Washington state. The sounds of wind whipped past my ears, and mountains and valleys filled a seemingly endless horizon in every direction. I’d never seen anything like it—until I grabbed the sun.
Using my HTC Vive virtual reality wand, I reached into the heavens in order to spin the Earth along its normal rotational axis, until I set the horizon on fire with a sunset. I breathed deeply at the sight, then spun our planet just a little more, until I filled the sky with a heaping helping of the Milky Way Galaxy.
Virtual reality has exposed me to some pretty incredible experiences, but I’ve grown ever so jaded in the past few years of testing consumer-grade headsets. Google Earth VR, however, has dropped my jaw anew. This, more than any other game or app for SteamVR’s “room scale” system, makes me want to call every friend and loved one I know and tell them to come over, put on a headset, and warp anywhere on Earth that they please.
In VR architecture, the difference between real and unreal is fluid and, to a large extent, unimportant. What is important, and potentially revolutionary, is VR’s ability to draw designers and their clients into a visceral world of dimension, scale, and feeling, removing the unfortunate schism between a built environment that exists in three dimensions and a visualization of it that has until now existed in two.
Many of the VR projects in architecture are focused on the final stages of the design process — basically, selling a house to a client. Thomas sees the real potential in the early stages, when the main decisions need to be made. VR is so good for this because it helps non-professionals understand and grasp the concepts of architecture very intuitively. And this is what we talked about mostly.
A proposed benefit of virtual reality is that it could one day eliminate the need to move our fleshy bodies around the world for business meetings and work engagements. Instead, we’ll be meeting up with colleagues and associates in virtual spaces. While this would be great news for the environment and business people sick of airports, it would be troubling news for airlines.
Imagine during one of your future trials that jurors in your courtroom are provided with virtual reality headsets, which allow them to view the accident site or crime scene digitally and walk around or be guided through a 3D world to examine vital details of the scene.
How can such an evidentiary presentation be accomplished? A system is being developed whereby investigators use a robot system inspired by NASA’s Curiosity Mars rover, equipped with 3D imaging and panoramic videography equipment, to record virtual reality video of the scene. The captured 360° immersive video and photographs of the scene would allow recreation of a VR experience with video and pictures of the original scene from every angle. Admissibility of this evidence would require a showing that the VR simulation fairly and accurately depicts what it represents. If a judge permits presentation of the evidence after its accuracy is established, jurors receiving the evidence could turn their heads and view various aspects of the scene by looking up, down, and around, and zooming in and out.
Unlike an animation or edited video initially created to demonstrate one party’s point of view, the purpose of this type of evidence would be to gather data and objectively preserve the scene without staging or tampering. Even further, this approach would allow investigators to revisit scenes as they existed during the initial forensic examination and give jurors a vivid rendition of the site as it existed when the events occurred.
The theme running throughout most of this year’s WinHEC keynote in Shenzhen, China was mixed reality. Microsoft’s Alex Kipman continues to be a great spokesperson and evangelist for the new medium, and it is apparent that Microsoft is going in deep, if not all in, on this version of the future. I, for one, as a mixed reality or bust developer, am very glad to see it.
As part of the presentation, Microsoft presented a video (see below) that shows the various forms of mixed reality. The video starts with a few virtual objects in the room with a person, transitions into the same room with a virtual person, then becomes a full virtual reality experience with Windows Holographic.
From DSC: In the future, I’d like to see holograms provide stunning visual centerpieces for the entrance ways into libraries, or in our classrooms, or in our art galleries, recital halls, and more. The object(s), person(s), scene(s) could change into something else, providing a visually engaging experience that sets a tone for that space, time, and/or event.
Eventually, perhaps these types of technologies/setups will even be a way to display artwork within our homes and apartments.
From DSC: When I saw the article below, I couldn’t help but wonder…what are the teaching & learning-related ramifications when new “skills” are constantly being added to devices like Amazon’s Alexa?
What does it mean for:
Students / learners
Faculty members
Teachers
Trainers
Instructional Designers
Interaction Designers
User Experience Designers
Curriculum Developers
…and others?
Will the capabilities found in Alexa simply come bundled as a part of the “connected/smart TVs” of the future? Hmm….
Amazon’s Alexa has gained many skills over the past year, such as being able to read tweets or deliver election results and fantasy football scores. Starting on Wednesday, you’ll be able to ask Alexa about Mars.
The new skill for the voice-controlled speaker comes courtesy of NASA’s Jet Propulsion Laboratory. It’s the first Alexa app from the space agency.
Tom Soderstrom, the chief technology officer at NASA’s Jet Propulsion Laboratory was on hand at the AWS re:invent conference in Las Vegas tonight to make the announcement.
Amazon today announced three new artificial intelligence-related toolkits for developers building apps on Amazon Web Services.
At the company’s AWS re:invent conference in Las Vegas, Amazon showed how developers can use three new services — Amazon Lex, Amazon Polly, Amazon Rekognition — to build artificial intelligence features into apps for platforms like Slack, Facebook Messenger, ZenDesk, and others.
The idea is to let developers utilize the machine learning algorithms and technology that Amazon has already created for its own processes and services like Alexa. Instead of developing their own AI software, AWS customers can simply use an API call or the AWS Management Console to incorporate AI features into their own apps.
AWS Announces Three New Amazon AI Services
Amazon Lex, the technology that powers Amazon Alexa, enables any developer to build rich, conversational user experiences for web, mobile, and connected device apps; preview starts today
Amazon Polly transforms text into lifelike speech, enabling apps to talk with 47 lifelike voices in 24 languages
Amazon Rekognition makes it easy to add image analysis to applications, using powerful deep learning-based image and face recognition
Capital One, Motorola Solutions, SmugMug, American Heart Association, NASA, HubSpot, Redfin, Ohio Health, DuoLingo, Royal National Institute of Blind People, LingApps, GoAnimate, and Coursera are among the many customers using these Amazon AI Services
Excerpt:
SEATTLE–(BUSINESS WIRE)–Nov. 30, 2016– Today at AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), announced three Artificial Intelligence (AI) services that make it easy for any developer to build apps that can understand natural language, turn text into lifelike speech, have conversations using voice or text, analyze images, and recognize faces, objects, and scenes. Amazon Lex, Amazon Polly, and Amazon Rekognition are based on the same proven, highly scalable Amazon technology built by the thousands of deep learning and machine learning experts across the company. Amazon AI services all provide high-quality, high-accuracy AI capabilities that are scalable and cost-effective. Amazon AI services are fully managed services so there are no deep learning algorithms to build, no machine learning models to train, and no up-front commitments or infrastructure investments required. This frees developers to focus on defining and building an entirely new generation of apps that can see, hear, speak, understand, and interact with the world around them.
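For developers curious what “simply use an API call” looks like in practice, here is a minimal sketch using boto3, the AWS SDK for Python. The helper function and sample text are my own; it only builds the parameters for Polly’s SynthesizeSpeech call, with the actual (credential-requiring) service call shown in comments.

```python
# Sketch of calling Amazon Polly from Python. The polly_request helper
# and the sample text are illustrative assumptions; the parameter names
# (Text, VoiceId, OutputFormat) match Polly's SynthesizeSpeech API.
import json

def polly_request(text, voice="Joanna", fmt="mp3"):
    """Build the parameter dict for Polly's SynthesizeSpeech call."""
    return {"Text": text, "VoiceId": voice, "OutputFormat": fmt}

# With boto3 installed and AWS credentials configured, the call would be:
#   import boto3
#   polly = boto3.client("polly")
#   resp = polly.synthesize_speech(**polly_request("Hello from Amazon Polly"))
#   with open("hello.mp3", "wb") as f:
#       f.write(resp["AudioStream"].read())

print(json.dumps(polly_request("Hello from Amazon Polly")))
```

Rekognition and Lex follow the same pattern — a managed service behind a single SDK call, with no model training on the developer’s side.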
From DSC:
(With thanks to Woontack Woo for his posting this via his paper.li entitled “#AR #CAMAR for Ubiquitous VR”)
Check this out!
On December 3rd, the Legend of Sword opera comes to Australia — but this is no ordinary opera! It is a “holographic sensational experience!” Set designers and those involved with drama will need to check this out. This could easily be the future of set design!
But not only that, let’s move this same concept over to the world of learning. What might augmented reality do for how our learning spaces look and act like in the future? What new affordances and experiences could they provide for us? This needs to be on our radars.
Legend of Sword 1 is a holographic sensational experience that has finished its 2nd tour in China. A Chinese legend of the ages to amaze and ignite your imagination. First time ever such a visual spectacular stage in Australia on Sat 3rd Dec only. Performed in Chinese with English subtitles.
Legend of Sword and Fairy 1 is based on a hit video game in China. Through the hard work of the renowned production team, the performance brings the game’s beautiful fantasy to the stage and allows the audience to feel as if they have been transported into an Eastern fairy world. With special effects, olfactory experiences, and actors performing and interacting with the audience at close range, the Eastern fairy world is realised on stage. It is not only a play with beautiful scenes but also one full of elements of Oriental-style adventure. The theatre experience will offer much more than a show: the excitement of love and adventure.
Legend of Sword and Fairy 1 premiered in April 2015 at Shanghai Cultural Plaza, setting off a frenzy in Shanghai on the strength of its striking visuals and 5D all-round sensory experience. Because its fantasy theme was matched with a top-tier visual presentation, Legend of Sword and Fairy 1 immediately became a hot topic in Shanghai. With just 10 performances at the time, its Weibo topic had already exceeded the 100 million hit mark midway through the run.
So far, Legend of Sword and Fairy 1 has finished its second tour in a number of cities in China, including Beijing, Chongqing, Chengdu, Nanjing, Xiamen, Qingdao, Shenyang, Dalian, Wuxi, Ningbo, Wenzhou, Xi’an, Shenzhen, Dongguan, Huizhou, Zhengzhou, Lishui, Ma’anshan, Kunshan, Changzhou etc.
The headlines for Pokémon GO were initially shocking, but by now they’re familiar: as many as 21 million active daily users, 700,000 downloads per day, $5.7 million in-app purchases per day, $200 million earned as of August. Analysts anticipate the game will garner several billion dollars in ad revenue over the next year. By almost any measure, Pokémon GO is huge.
The technologies behind the game, augmented and virtual reality (AVR), are huge too. Many financial analysts expect the technology to generate $150 billion over the next three years, outpacing even smartphones with unprecedented growth, much of it in entertainment. But AVR is not only about entertainment. In August 2015, Teegan Lexcen was born in Florida with only half a heart and needed surgery. With current cardiac imaging software insufficient to assist with such a delicate operation on an infant, surgeons at Nicklaus Children’s Hospital in Miami turned to 3D imaging software and a $20 Google Cardboard VR set. They used a cellphone to peer into the baby’s heart, saw exactly how to improve her situation and performed the successful surgery in December 2015.
“I could see the whole heart. I could see the chest wall,” Dr. Redmond Burke told Today. “I could see all the things I was worried about in creating an operation.”
Texas Tech University Health Sciences Center in Lubbock and San Diego State University are both part of a Pearson mixed reality pilot aimed at leveraging mixed reality to solve challenges in nursing education.
…
At Bryn Mawr College, a women’s liberal arts college in Pennsylvania, faculty, students, and staff are exploring various educational applications for the HoloLens mixed reality devices. They are testing Skype for HoloLens to connect students with tutors in Pearson’s 24/7 online tutoring service, Smarthinking.
…
At Canberra Grammar School in Australia, Pearson is working with teachers in a variety of disciplines to develop holograms for use in their classrooms. The University of Canberra is partnering with Pearson to provide support for the project and evaluate the impact these holograms have on teaching and learning.
As fantastic as technologies like augmented and mixed reality may be, experiencing them, much less creating them, requires a sizable financial investment, putting them just beyond the reach of consumers as well as your garage-type indie developer. AR and VR startup Zappar, however, wants to smash that perception. With ZapBox, you can grab a kit for less than the price of a triple-A video game and start your journey towards mixed reality fun and fame. It’s Magic Leap meets Google Cardboard. Or, as Zappar itself says, making Magic Leap magic cheap!
From DSC: If we had more beacons on our campus (a Christian liberal arts college), I could see where we could offer a variety of things in new ways:
For example, we might use beacons around the main walkways of our campus where, when we approach these beacons, pieces of advice or teaching could appear on an app on our mobile devices. Examples could include:
Micro-tips on prayer from John Calvin, Martin Luther, or Augustine (i.e., 1 or 2 small tips at a time; could change every day or every week)
Or, for a current, campus-wide Bible study, the app could show a question for that week’s study; you could reflect on that question as you’re walking around
Or, for musical events…when one walks by the Covenant Fine Arts Center, one could get that week’s schedule of performances or what’s currently showing in the Art Gallery
Pieces of scripture, with links to Biblegateway.com or other sites
Further information re: what’s being displayed on posters within the hallways — works that might be done by faculty members and/or by students
Etc.
A person could turn the app’s notifications on or off at any time. The app would encourage greater exercise; i.e., the more you walk around, the more tips you get.
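The beacon scenario above could be sketched, in a deliberately minimal way, as a lookup from beacon IDs to that week's content, with the user's notification toggle respected. All beacon IDs and tips here are invented for illustration; a real app would pull this content from a campus server.

```python
from typing import Optional

# Hypothetical content keyed by beacon ID; in a real app this would be
# fetched from a server and updated weekly.
BEACON_TIPS = {
    "fine-arts-center": "This week: Chamber Orchestra, Fri 7:30pm; student prints in the Art Gallery.",
    "commons-lawn": "Micro-tip on prayer from Augustine: our hearts are restless until they rest in You.",
    "library-entrance": "Bible study question of the week: What does hospitality look like on campus?",
}

def on_beacon_sighted(beacon_id: str, notifications_enabled: bool) -> Optional[str]:
    """Return the tip to display when the phone ranges a beacon, or None."""
    if not notifications_enabled:
        return None  # the user can turn notifications off at any time
    return BEACON_TIPS.get(beacon_id)  # unknown beacons produce nothing
```

The toggle check comes first so that walking past a beacon with notifications off costs nothing and shows nothing.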
From DSC: How long before recommendation engines like this can be filtered/focused down to just display apps, channels, etc. that are educational and/or training related (i.e., a recommendation engine to suggest personalized/customized playlists for learning)?
That is, in the future, will we have personalized/customized playlists for learning on our Apple TVs — as well as on our mobile devices — with the assessment results of our taking the module(s) or course(s) being sent in to:
A credentials database on LinkedIn (via blockchain) and/or
A credentials database at the college(s) or university(ies) that we’re signed up with for lifelong learning (via blockchain)
and/or
To update our cloud-based learning profiles — which can then feed a variety of HR-related systems used to find talent? (via blockchain)
Will participants in MOOCs, virtual K-12 schools, homeschoolers, and more take advantage of learning from home?
Will solid ROIs from having thousands of participants each paying a smaller amount (to take your course virtually) enable higher production values?
Will bots and/or human tutors be instantly accessible from our couches?
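One minimal way to sketch the chained-credential idea above (this is not a real blockchain, and all field names are hypothetical): each assessment result becomes a record whose hash folds in the previous record's hash, so any later edit to a cloud-based learner profile is detectable.

```python
import hashlib
import json

def add_credential(chain: list, record: dict) -> list:
    """Append a credential record, linking it to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain: list) -> bool:
    """Recompute every hash; an edited or reordered record breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        if entry["prev_hash"] != prev:
            return False
        body = {"record": entry["record"], "prev_hash": entry["prev_hash"]}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

A credentials database at LinkedIn or a university could run `verify` over a learner's chain before trusting it; HR systems would only need the hashes, not the grading systems themselves.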
From DSC: Consider the affordances that we will soon be experiencing when we combine machine learning — whereby computers “learn” about a variety of things — with new forms of Human Computer Interaction (HCI) — such as Augment Reality (AR)!
The educational benefits — as well as the business/profit-related benefits will certainly be significant!
For example, let’s create a new mobile app called “Horticultural App (ML)” * — where ML stands for machine learning. This app would be made available on iOS and Android-based devices. (Though this is strictly hypothetical, I hope and pray that some entrepreneurial individuals and/or organizations out there will take this idea and run with it!)
Some use cases for such an app:
Students, environmentalists, and lifelong learners will be able to take some serious educationally-related nature walks once they launch the Horticultural App (ML) on their smartphones and tablets!
They simply hold up their device, and the app — in conjunction with the device’s camera — will essentially take a picture of whatever the student is focusing in on. Via machine learning, the app will “recognize” the plant, tree, type of grass, flower, etc. — and will then present information about that plant, tree, type of grass, flower, etc.
In the production version of this app, a textual layer could overlay the actual image of the tree/plant/flower/grass/etc. in the background — and this is where augmented reality comes into play. Also, perhaps there would be an opacity setting that would be user controlled — allowing the learner to fade in or fade out the information about the flower, tree, plant, etc.
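The user-controlled opacity setting just described is, in effect, a standard alpha blend of the informational text layer over the camera frame. A minimal per-pixel sketch, using illustrative RGB tuples:

```python
def blend_pixel(camera_px, overlay_px, opacity: float):
    """Standard alpha blend: opacity=0 shows only the camera frame,
    opacity=1 shows only the informational overlay."""
    return tuple(
        round(opacity * o + (1.0 - opacity) * c)
        for c, o in zip(camera_px, overlay_px)
    )
```

Dragging the slider just changes `opacity`, letting the learner fade the plant information in or out over the live image.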
Or let’s look at the potential uses of this type of app from some different angles.
Let’s say you live in Michigan and you want to be sure an area of the park that you are in doesn’t have any Eastern Poison Ivy in it — so you launch the app and review any suspicious-looking plants. As it turns out, the app identifies some Eastern Poison Ivy for you (and it could do this regardless of which season we’re talking about, as the app would be able to ascertain the current date and the current GPS coordinates of the person’s location as well, taking those criteria into account).
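The date/GPS idea in this scenario could be sketched as a post-classification filter: after the model proposes candidate species, keep only those plausible for the user's region and time of year. The range and season data below are invented purely for illustration.

```python
from datetime import date

# Hypothetical knowledge base: species -> (states where it occurs, months in leaf)
SPECIES_INFO = {
    "Eastern Poison Ivy": ({"MI", "OH", "IN"}, set(range(4, 11))),   # roughly Apr-Oct
    "Western Poison Oak": ({"CA", "OR", "WA"}, set(range(3, 10))),
}

def plausible_matches(candidates, state: str, when: date):
    """Keep only candidate species that could occur here at this time of year."""
    keep = []
    for name in candidates:
        states, months = SPECIES_INFO.get(name, (set(), set()))
        if state in states and when.month in months:
            keep.append(name)
    return keep
```

So a Michigan hiker in July would see Eastern Poison Ivy kept as a candidate while look-alikes that only grow on the West Coast are filtered out before anything is shown on screen.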
Or consider another use of such an app:
A homeowner wants to get rid of a certain kind of weed. She goes out into her yard and “scans” the weed, and up pop some products at the local Lowe’s or Home Depot that get rid of that kind of weed.
Assuming you allowed the app to do so, it could launch a relevant chatbot that could be used to answer any questions about the application of the weed-killing product that you might have.
Or consider another use of such an app:
A homeowner has a diseased tree and wants to know what to do about it. The machine learning portion of the app could identify the disease and bring up information on how to eradicate it.
Again, if permitted to do so, a relevant chatbot could be launched to address any questions that you might have about the available treatment options for that particular tree/disease.
Or consider other/similar apps along these lines:
Skin ML (for detecting any issues re: acne, skin cancers, etc.)
Minerals and Stones ML (for identifying which mineral or stone you’re looking at)
Fish ML
Etc.
So there will be many new possibilities that will be coming soon to education, businesses, homeowners, and many others to be sure! The combination of machine learning with AR will open many new doors.
Recently, I visited Bellevue Arts Museum http://www.bellevuearts.org/ and conceived of a ‘Holographic art sculpture’ for installation in the museum’s beautiful atrium. Using my app Typography Insight for HoloLens, http://typeinsight.org/hololens.html, I created a ‘Holographic Type Sculpture’ and placed it in Bellevue Arts Museum’s atrium and rooftop sculpture garden (coincidentally, its name is ‘Court of Light’). You can experience the Mixed Reality Capture below.
Today at its October hardware/software/everything event, the company showed off its latest VR initiatives including a Daydream headset. The $79 Daydream View VR headset looks quite a bit different than other headsets on the market with its fabric exterior.
Clay Bavor, head of VR, said the design is meant to be more comfortable and friendly. It’s unclear whether the cloth aesthetic is a recommendation for the headset reference design as Xiaomi’s Daydream headset is similarly soft and decidedly design-centric.
The headset and the Google Daydream platform will launch in November.
While the event is positioned as hardware first, this is Google we’re talking about here, and as such, the real focus is software. The company led the event with talk about its forthcoming Google Assistant AI, and as such, the Pixel will be the first handset to ship with the friendly voice helper. As the company puts it, “we’re building hardware with the Google Assistant at its core.”
Google Home, the company’s answer to Amazon’s Echo, made its official debut at the Google I/O developer conference earlier this year. Since then, we’ve heard very little about Google’s voice-activated personal assistant. Today, at Google’s annual hardware event, the company finally provided us with more details.
Google Home will cost $129 (with a free six-month trial of YouTube red) and go on sale on Google’s online store today. It will ship on November 4.
Google’s Mario Queiroz today argued that our homes are different from other environments. So like the Echo, Google Home combines a wireless speaker with a set of microphones that listen for your voice commands. There is a mute button on the Home and four LEDs on top of the device so you know when it’s listening to you; otherwise, you won’t find any other physical buttons on it.
Google’s #madebygoogle press conference today revealed some significant details about the company’s forthcoming plans for virtual reality (VR). Daydream is set to launch later this year, and along with the reveal of the first ‘Daydream Ready’ smartphone handset, Pixel, and Google’s own version of the head-mounted display (HMD), Daydream View, the company revealed some of the partners that will be bringing content to the device.
You can add to the seemingly never-ending list of things that Google is deeply involved in: hardware production.
On Tuesday, Google made clear that hardware is more than just a side business, aggressively expanding its offerings across a number of different categories. Headlined by the much-anticipated Google Home and a lineup of smartphones, dubbed Pixel, the announcements mark a major shift in Google’s approach to supplementing its massively profitable advertising sales business and extensive history in software development.
…
Aimed squarely at Amazon’s Echo, Home is powered by more than 70 billion facts collected by Google’s knowledge graph, the company says. By saying, “OK, Google,” Home quickly pulls information from other websites, such as Wikipedia, and gives contextualized answers akin to searching Google manually and clicking on a couple of links. Of course, Home is integrated with Google’s other devices, so items added to your shopping list, for example, are easily pulled up via Pixel. Home can also be programmed to read back information in your calendar, traffic updates, and the weather. “If the president can get a daily briefing, why shouldn’t you?” Google’s Rishi Chandra asked when he introduced Home on Tuesday.
A comment from DSC: More and more, people are speaking to a device and expect that device to do something for them. How much longer, especially with the advent of chatbots, before people expect this of learning-related applications?
Natural language processing, cognitive computing, and artificial intelligence continue their march forward.
2016 has been promoted as the year of virtual reality. In the space of a few months, we have seen brands like Facebook, Samsung, and Sony all come out with VR products of their own. But another closely related industry has been making a growing presence in the tech industry. Augmented reality, or simply AR, is gaining ground among tech companies and even consumers. Google was the first contender for coolest AR product with its Google Glass. Too bad that did not work out; it felt like a product too far ahead of its time. Companies like Microsoft, Magic Leap, and even Apple are hoping to pick up where Google left off. They are creating their own smart glasses that will, hopefully, do better than Google Glass. In our article, we look at some of the coolest augmented reality smart glasses around.
Some of them are already out while others are in development.
It’s no secret that we here at Labster are pretty excited about VR. However, if we are to successfully introduce VR into education and training we need to know how to create VR simulations that unlock these new great ways of learning.
Christian Jacob and Markus Santoso are trying to re-create the experience of the aforementioned agents in Fantastic Voyage. Working with 3D modelling company Zygote, they and recent MSc graduate Douglas Yuen have created HoloCell, an educational software. Using Microsoft’s revolutionary HoloLens AR glasses, HoloCell provides a mixed reality experience allowing users to explore a 3D simulation of the inner workings, organelles, and molecules of a healthy human cell.
Upload is teaming up with Udacity, Google and HTC to build an industry-recognized VR certification program.
…
According to Udacity representatives, the organization will now be adding a VR track to its “nanodegree” program. Udacity’s nanodegrees are certification routes that can be completed entirely online at a student’s own pace. These courses typically take 6-12 months and cost $199 per month. Students will also receive half of their tuition back if they complete a course within six months. The new VR course will follow this pattern as well.
The VR nanodegree program was curated by Udacity after the organization interviewed dozens of VR savvy companies about the type of skills they look for in a potential new hire. This information was then built into a curriculum through a joint effort between Google, HTC and Upload.
Virtual reality helps Germany catch last Nazi war criminals — from theguardian.com by Agence France-Presse Lack of knowledge no longer an excuse as precise 3D model of Auschwitz, showing gas chambers and crematoria, helps address atrocities
Excerpt:
German prosecutors and police have developed 3D technology to help them catch the last living Nazi war criminals with a highly precise model of Auschwitz.
German prosecutors and police have begun using virtual reality headsets in their quest to bring the last remaining Auschwitz war criminals to justice, AFP reported Sunday.
Using the blueprints of the death camp in Nazi-occupied Poland, Bavarian state crime office digital imaging expert Ralf Breker has created a virtual reality model of Auschwitz which allows judges and prosecutors to mimic moving around the camp as it stood during the Holocaust.
Technology is hoping to turn empathy into action. Or at least, the United Nations is hoping to do so. The intergovernmental organization is more than seven decades old at this point, but it’s constantly finding new ways to better the world’s citizenry. And the latest tool in its arsenal? Virtual reality.
Last year, the UN debuted its United Nations Virtual Reality, which uses the technology to advocate for communities the world over. And more recently, the organization launched an app made specifically for virtual reality films. First debuted at the Toronto International Film Festival, this app encourages folks to not only watch the UN’s VR films, but to then take action by way of donations or volunteer work.
If you’re an Apple user and want an untethered virtual reality system, you’re currently stuck with Google Cardboard, which doesn’t hold a candle to the room scale VR provided by the HTC Vive (a headset not compatible with Macs, by the way). But spatial computing company Occipital just figured out how to use their Structure Core 3D Sensor to provide room scale VR to any smartphone headset—whether it’s for an iPhone or Android.
The Body VR is a great example of how the Oculus Rift and Gear VR can be used to educate as well as entertain. Starting today, it’s also a great example of how the HTC Vive can do the same.
The developers previously released this VR biology lesson for free back at the launch of the Gear VR and, in turn, the Oculus Rift. Now an upgraded version is available on Valve and HTC’s Steam VR headset. You’ll still get the original experience in which you explore the human body, travelling through the bloodstream to learn about blood cells and looking at how organelles work. The piece is narrated as you go.
For a moment, students were taken into another world without leaving the great halls of Harvard. Some students had a great time exploring the ocean floor and saw unique underwater animals, others tried their hand in hockey, while others screamed as they got into a racecar and sped on a virtual speedway. All of them, getting a taste of what virtual and augmented reality looks like.
All of this, of course, was not just about fun but about how augmented and virtual reality especially can transform every kind of industry. This will be discussed and demonstrated at the i-lab in the coming weeks with Rony Abovitz, CEO of Magic Leap Inc., as the keynote speaker.
Abovitz was responsible for developing the “Mixed Reality Lightfield,” a technology that combines augmented and virtual reality. According to Abovitz, it will help those who are struggling to transfer two-dimensional information or text into “spatial learning.”
“I think it will make life easier for a lot of people and open doors for a lot of people because we are making technology fit how our brains evolved into the physics of the universe rather than forcing our brains to adapt to a more limited technology,” he added.