From DSC: In the future, I’d like to see holograms provide stunning visual centerpieces for the entranceways of our libraries, classrooms, art galleries, recital halls, and more. The object(s), person(s), or scene(s) could change into something else, providing a visually engaging experience that sets a tone for that space, time, and/or event.
Eventually, perhaps these types of technologies/setups will even be a way to display artwork within our homes and apartments.
From DSC: When I saw the article below, I couldn’t help but wonder…what are the teaching & learning-related ramifications when new “skills” are constantly being added to devices like Amazon’s Alexa?
What does it mean for:
Students / learners
Faculty members
Teachers
Trainers
Instructional Designers
Interaction Designers
User Experience Designers
Curriculum Developers
…and others?
Will the capabilities found in Alexa simply come bundled as a part of the “connected/smart TVs” of the future? Hmm….
Amazon’s Alexa has gained many skills over the past year, such as being able to read tweets or deliver election results and fantasy football scores. Starting on Wednesday, you’ll be able to ask Alexa about Mars.
The new skill for the voice-controlled speaker comes courtesy of NASA’s Jet Propulsion Laboratory. It’s the first Alexa app from the space agency.
Tom Soderstrom, the chief technology officer at NASA’s Jet Propulsion Laboratory, was on hand at the AWS re:invent conference in Las Vegas tonight to make the announcement.
Amazon today announced three new artificial intelligence-related toolkits for developers building apps on Amazon Web Services
At the company’s AWS re:invent conference in Las Vegas, Amazon showed how developers can use three new services — Amazon Lex, Amazon Polly, Amazon Rekognition — to build artificial intelligence features into apps for platforms like Slack, Facebook Messenger, ZenDesk, and others.
The idea is to let developers utilize the machine learning algorithms and technology that Amazon has already created for its own processes and services like Alexa. Instead of developing their own AI software, AWS customers can simply use an API call or the AWS Management Console to incorporate AI features into their own apps.
AWS Announces Three New Amazon AI Services
Amazon Lex, the technology that powers Amazon Alexa, enables any developer to build rich, conversational user experiences for web, mobile, and connected device apps; preview starts today
Amazon Polly transforms text into lifelike speech, enabling apps to talk with 47 lifelike voices in 24 languages
Amazon Rekognition makes it easy to add image analysis to applications, using powerful deep learning-based image and face recognition
Capital One, Motorola Solutions, SmugMug, American Heart Association, NASA, HubSpot, Redfin, Ohio Health, DuoLingo, Royal National Institute of Blind People, LingApps, GoAnimate, and Coursera are among the many customers using these Amazon AI Services
Excerpt:
SEATTLE–(BUSINESS WIRE)–Nov. 30, 2016– Today at AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), announced three Artificial Intelligence (AI) services that make it easy for any developer to build apps that can understand natural language, turn text into lifelike speech, have conversations using voice or text, analyze images, and recognize faces, objects, and scenes. Amazon Lex, Amazon Polly, and Amazon Rekognition are based on the same proven, highly scalable Amazon technology built by the thousands of deep learning and machine learning experts across the company. Amazon AI services all provide high-quality, high-accuracy AI capabilities that are scalable and cost-effective. Amazon AI services are fully managed services so there are no deep learning algorithms to build, no machine learning models to train, and no up-front commitments or infrastructure investments required. This frees developers to focus on defining and building an entirely new generation of apps that can see, hear, speak, understand, and interact with the world around them.
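As the press release notes, developers reach these services through a simple API call rather than building their own models. Here is a minimal sketch of what that might look like for Amazon Polly, assuming Python and the boto3 SDK; the `build_polly_request` helper is my own illustration, though the `Text`, `VoiceId`, and `OutputFormat` parameters are Polly’s actual ones:

```python
# Hedged sketch: calling Amazon Polly's text-to-speech from Python via boto3.
# build_polly_request() is a local helper invented for illustration.

def build_polly_request(text, voice_id="Joanna", output_format="mp3"):
    """Assemble the keyword arguments for Polly's synthesize_speech call."""
    return {"Text": text, "VoiceId": voice_id, "OutputFormat": output_format}

params = build_polly_request("Welcome to the recital hall.")

# The actual network call (requires AWS credentials), shown for context:
# import boto3
# polly = boto3.client("polly")
# response = polly.synthesize_speech(**params)
# with open("welcome.mp3", "wb") as f:
#     f.write(response["AudioStream"].read())
```

The point of the excerpt holds even in this toy form: the developer supplies text and a voice, and the “no deep learning algorithms to build” promise is just a managed service behind one call.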
From DSC:
(With thanks to Woontack Woo for his posting this via his paper.li entitled “#AR #CAMAR for Ubiquitous VR”)
Check this out!
On December 3rd, the Legend of Sword opera comes to Australia — but this is no ordinary opera! It is a “holographic sensational experience!” Set designers and those involved with drama will need to check this out. This could easily be the future of set design!
But not only that, let’s move this same concept over to the world of learning. What might augmented reality do for how our learning spaces look and act like in the future? What new affordances and experiences could they provide for us? This needs to be on our radars.
Legend of Sword 1 is a holographic sensational experience that has finished its 2nd tour in China: a Chinese legend of the ages to amaze and ignite your imagination. This visual spectacle comes to Australia for the first time, for one night only, on Saturday 3rd December. Performed in Chinese with English subtitles.
Legend of Sword and Fairy 1 is based on a hit video game in China. Through the hard work of the renowned production team, the performance brings the game’s beautiful fantasy to the stage and allows the audience to feel as if they have stepped into an eastern fairy world. With special effects, an olfactory experience, and actors performing and interacting with the audience at close range, the eastern fairy world is realised on stage. It is not only a play with beautiful scenes but one full of elements of oriental-style adventure. The theatre experience offers much more than a show: the excitement of love and adventure.
Legend of Sword and Fairy 1 premiered in April 2015 at Shanghai Cultural Plaza, where it set off a frenzy in Shanghai on the strength of its stunning visuals and all-around 5D sensory experience. Because its fantasy theme was matched with a top-tier visual presentation, Legend of Sword and Fairy 1 immediately became a hot topic in Shanghai. Only halfway through its initial run of just 10 performances, its Weibo topic had already exceeded the 100 million hit mark.
So far, Legend of Sword and Fairy 1 has finished its second tour in a number of cities in China, including Beijing, Chongqing, Chengdu, Nanjing, Xiamen, Qingdao, Shenyang, Dalian, Wuxi, Ningbo, Wenzhou, Xi’an, Shenzhen, Dongguan, Huizhou, Zhengzhou, Lishui, Ma’anshan, Kunshan, Changzhou etc.
The headlines for Pokémon GO were initially shocking, but by now they’re familiar: as many as 21 million active daily users, 700,000 downloads per day, $5.7 million in in-app purchases per day, $200 million earned as of August. Analysts anticipate the game will garner several billion dollars in ad revenue over the next year. By almost any measure, Pokémon GO is huge.
The technologies behind the game, augmented and virtual reality (AVR), are huge too. Many financial analysts expect the technology to generate $150 billion over the next three years, outpacing even smartphones with unprecedented growth, much of it in entertainment. But AVR is not only about entertainment. In August 2015, Teegan Lexcen was born in Florida with only half a heart and needed surgery. With current cardiac imaging software insufficient to assist with such a delicate operation on an infant, surgeons at Nicklaus Children’s Hospital in Miami turned to 3D imaging software and a $20 Google Cardboard VR set. They used a cellphone to peer into the baby’s heart, saw exactly how to improve her situation and performed the successful surgery in December 2015.
“I could see the whole heart. I could see the chest wall,” Dr. Redmond Burke told Today. “I could see all the things I was worried about in creating an operation.”
Texas Tech University Health Sciences Center in Lubbock and San Diego State University are both part of a Pearson mixed reality pilot aimed at leveraging mixed reality to solve challenges in nursing education.
…
At Bryn Mawr College, a women’s liberal arts college in Pennsylvania, faculty, students, and staff are exploring various educational applications for the HoloLens mixed reality devices. They are testing Skype for HoloLens to connect students with tutors in Pearson’s 24/7 online tutoring service, Smarthinking.
…
At Canberra Grammar School in Australia, Pearson is working with teachers in a variety of disciplines to develop holograms for use in their classrooms. The University of Canberra is partnering with Pearson to provide support for the project and evaluate the impact these holograms have on teaching and learning.
As fantastic as technologies like augmented and mixed reality may be, experiencing them, much less creating them, requires a sizable investment, financially speaking. It is just beyond the reach of consumers as well as your garage-type indie developer. AR and VR startup Zappar, however, wants to smash that perception. With ZapBox, you can grab a kit for less than a triple-A video game to start your journey towards mixed reality fun and fame. It’s Magic Leap meets Google Cardboard. Or as Zappar itself says, making Magic Leap, magic cheap!
From DSC: If we had more beacons on our campus (a Christian liberal arts college), I could see where we could offer a variety of things in new ways:
For example, we might use beacons around the main walkways of our campus where, when we approach these beacons, pieces of advice or teaching could appear on an app on our mobile devices. Examples could include:
Micro-tips on prayer from John Calvin, Martin Luther, or Augustine (i.e., 1 or 2 small tips at a time; could change every day or every week)
Or, for a current, campus-wide Bible study, the app could show a question for that week’s study; you could reflect on that question as you’re walking around
Or, for musical events…when one walks by the Covenant Fine Arts Center, one could get that week’s schedule of performances or what’s currently showing in the Art Gallery
Pieces of Scripture, with links to Biblegateway.com or other sites
Further information re: what’s being displayed on posters within the hallways — works that might be done by faculty members and/or by students
Etc.
A person could turn the app’s notifications on or off at any time. The app would encourage greater exercise; i.e., the more you walk around, the more tips you get.
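To make the idea concrete, here is a rough sketch of the beacon-to-content mapping such a campus app might use. Everything here (the beacon IDs, the tip text, and the `tip_for_beacon` helper) is hypothetical, invented purely for illustration:

```python
# Hypothetical sketch: mapping detected campus beacons to micro-tips.
# Beacon IDs and tip text are invented; a real app would fetch these
# from a server so content could rotate weekly.

BEACON_TIPS = {
    "chapel-walkway": ["Micro-tip on prayer from Augustine (this week's tip)"],
    "fine-arts-center": ["This week: Art Gallery exhibit; recitals Friday 7pm"],
}

def tip_for_beacon(beacon_id, notifications_on=True):
    """Return the current tip for a nearby beacon, honoring the user's opt-out."""
    if not notifications_on:
        return None  # the user has turned notifications off
    tips = BEACON_TIPS.get(beacon_id)
    return tips[0] if tips else None
```

The opt-out check models the on/off toggle described above; the walking-based incentive would simply be a counter of how many distinct beacons a user has visited.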
From DSC: How long before recommendation engines like this can be filtered/focused down to just display apps, channels, etc. that are educational and/or training related (i.e., a recommendation engine to suggest personalized/customized playlists for learning)?
That is, in the future, will we have personalized/customized playlists for learning on our Apple TVs — as well as on our mobile devices — with the assessment results of our taking the module(s) or course(s) being sent in to:
A credentials database on LinkedIn (via blockchain) and/or
A credentials database at the college(s) or university(ies) that we’re signed up with for lifelong learning (via blockchain)
and/or
To update our cloud-based learning profiles — which can then feed a variety of HR-related systems used to find talent? (via blockchain)
Will participants in MOOCs, virtual K-12 schools, homeschoolers, and more take advantage of learning from home?
Will solid ROIs from having thousands of participants each paying a smaller amount (to take your course virtually) enable higher production values?
Will bots and/or human tutors be instantly accessible from our couches?
From DSC: Consider the affordances that we will soon be experiencing when we combine machine learning — whereby computers “learn” about a variety of things — with new forms of Human Computer Interaction (HCI) — such as Augmented Reality (AR)!
The educational benefits — as well as the business/profit-related benefits — will certainly be significant!
For example, let’s create a new mobile app called “Horticultural App (ML)” — where ML stands for machine learning. This app would be made available on iOS and Android-based devices. (Though this is strictly hypothetical, I hope and pray that some entrepreneurial individuals and/or organizations out there will take this idea and run with it!)
Some use cases for such an app:
Students, environmentalists, and lifelong learners will be able to take some serious educationally-related nature walks once they launch the Horticultural App (ML) on their smartphones and tablets!
They simply hold up their device, and the app — in conjunction with the device’s camera — will essentially take a picture of whatever the student is focusing in on. Via machine learning, the app will “recognize” the plant, tree, type of grass, flower, etc. — and will then present information about that plant, tree, type of grass, flower, etc.
In the production version of this app, a textual layer could overlay the actual image of the tree/plant/flower/grass/etc. in the background — and this is where augmented reality comes into play. Also, perhaps there would be an opacity setting that would be user controlled — allowing the learner to fade in or fade out the information about the flower, tree, plant, etc.
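The recognize-then-present step described above could be sketched roughly as follows. The classifier itself would be a trained image-recognition model; here the `PLANT_INFO` table, the labels, and the `lookup_plant` helper are all assumptions made up for illustration:

```python
# Hypothetical sketch of the Horticultural App (ML) lookup step: once the
# model returns a label and confidence for the photographed plant, the app
# maps it to an info card (or admits it doesn't know).

PLANT_INFO = {
    "eastern_poison_ivy": "Toxicodendron radicans: avoid contact; leaves of three.",
    "sugar_maple": "Acer saccharum: source of maple syrup; vivid fall color.",
}

def lookup_plant(predicted_label, confidence, threshold=0.6):
    """Return an info card for a recognized plant, or a fallback message."""
    if confidence < threshold or predicted_label not in PLANT_INFO:
        return "Plant not recognized. Try another angle or better lighting."
    return PLANT_INFO[predicted_label]
```

In a production app, the returned text would be the AR overlay whose opacity the learner controls, and season, date, and GPS coordinates could be added as extra inputs to sharpen the identification.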
Or let’s look at the potential uses of this type of app from some different angles.
Let’s say you live in Michigan and you want to be sure an area of the park that you are in doesn’t have any Eastern Poison Ivy in it — so you launch the app and review any suspicious looking plants. As it turns out, the app identifies some Eastern Poison Ivy for you (and it could do this regardless of which season we’re talking about, as the app would be able to ascertain the current date and the current GPS coordinates of the person’s location as well, taking those criteria into account).
Or consider another use of such an app:
A homeowner wants to get rid of a certain kind of weed. She goes out into her yard and “scans” the weed, and up pop some products at the local Lowe’s or Home Depot that get rid of that kind of weed.
Assuming you allowed the app to do so, it could launch a relevant chatbot to answer any questions you might have about applying the weed-killing product.
Or consider another use of such an app:
A homeowner has a diseased tree and wants to know what to do about it. The machine learning portion of the app could identify the disease and bring up information on how to eradicate it.
Again, if permitted to do so, a relevant chatbot could be launched to address any questions that you might have about the available treatment options for that particular tree/disease.
Or consider other/similar apps along these lines:
Skin ML (for detecting any issues re: acne, skin cancers, etc.)
Minerals and Stones ML (for identifying which mineral or stone you’re looking at)
Fish ML
Etc.
So there will be many new possibilities that will be coming soon to education, businesses, homeowners, and many others to be sure! The combination of machine learning with AR will open many new doors.
Recently, I visited Bellevue Arts Museum http://www.bellevuearts.org/ and conceived of a ‘Holographic art sculpture’ for installation in the museum’s beautiful atrium. Using my app Typography Insight for HoloLens, http://typeinsight.org/hololens.html, I created a ‘Holographic Type Sculpture’ and placed it in Bellevue Arts Museum’s atrium and rooftop sculpture garden (coincidentally, its name is ‘Court of Light’). You can experience the Mixed Reality Capture below.
Today at its October hardware/software/everything event, the company showed off its latest VR initiatives including a Daydream headset. The $79 Daydream View VR headset looks quite a bit different than other headsets on the market with its fabric exterior.
Clay Bavor, head of VR, said the design is meant to be more comfortable and friendly. It’s unclear whether the cloth aesthetic is a recommendation for the headset reference design as Xiaomi’s Daydream headset is similarly soft and decidedly design-centric.
The headset and the Google Daydream platform will launch in November.
While the event is positioned as hardware first, this is Google we’re talking about here, and as such, the real focus is software. The company led the event with talk about its forthcoming Google Assistant AI, and as such, the Pixel will be the first handset to ship with the friendly voice helper. As the company puts it, “we’re building hardware with the Google Assistant at its core.”
Google Home, the company’s answer to Amazon’s Echo, made its official debut at the Google I/O developer conference earlier this year. Since then, we’ve heard very little about Google’s voice-activated personal assistant. Today, at Google’s annual hardware event, the company finally provided us with more details.
Google Home will cost $129 (with a free six-month trial of YouTube red) and go on sale on Google’s online store today. It will ship on November 4.
Google’s Mario Queiroz today argued that our homes are different from other environments. So like the Echo, Google Home combines a wireless speaker with a set of microphones that listen for your voice commands. There is a mute button on the Home and four LEDs on top of the device so you know when it’s listening to you; otherwise, you won’t find any other physical buttons on it.
Google’s #madebygoogle press conference today revealed some significant details about the company’s forthcoming plans for virtual reality (VR). Daydream is set to launch later this year, and along with the reveal of the first ‘Daydream Ready’ smartphone handset, Pixel, and Google’s own version of the head-mounted display (HMD), Daydream View, the company revealed some of the partners that will be bringing content to the device.
You can add to the seemingly never-ending list of things that Google is deeply involved in: hardware production.
On Tuesday, Google made clear that hardware is more than just a side business, aggressively expanding its offerings across a number of different categories. Headlined by the much-anticipated Google Home and a lineup of smartphones, dubbed Pixel, the announcements mark a major shift in Google’s approach to supplementing its massively profitable advertising sales business and extensive history in software development.
…
Aimed squarely at Amazon’s Echo, Home is powered by more than 70 billion facts collected by Google’s knowledge graph, the company says. By saying “OK, Google,” Home quickly pulls information from other websites, such as Wikipedia, and gives contextualized answers akin to searching Google manually and clicking on a couple links. Of course, Home is integrated with Google’s other devices, so items added to your shopping list, for example, are easily pulled up via Pixel. Home can also be programmed to read back information in your calendar, traffic updates and the weather. “If the president can get a daily briefing, why shouldn’t you?” Google’s Rishi Chandra asked when he introduced Home on Tuesday.
A comment from DSC: More and more, people are speaking to a device and expect that device to do something for them. How much longer, especially with the advent of chatbots, before people expect this of learning-related applications?
Natural language processing, cognitive computing, and artificial intelligence continue their march forward.
2016 has been promoted as the year of virtual reality. In the space of a few months, we have seen brands like Facebook, Samsung, and Sony all come out with VR products of their own. But another closely related technology has been making a growing presence in the tech industry. Augmented reality, or simply AR, is gaining ground among tech companies and even consumers. Google was the first contender for coolest AR product with its Google Glass. Too bad that did not work out; it felt like a product ahead of its time. Companies like Microsoft, Magic Leap and even Apple are hoping to pick up where Google left off. They are creating their own smart glasses that will, hopefully, do better than Google Glass. In our article, we look at some of the coolest augmented reality smart glasses around.
Some of them are already out while others are in development.
It’s no secret that we here at Labster are pretty excited about VR. However, if we are to successfully introduce VR into education and training we need to know how to create VR simulations that unlock these new great ways of learning.
Christian Jacob and Markus Santoso are trying to re-create the experience of the aforementioned agents in Fantastic Voyage. Working with 3D modelling company Zygote, they and recent MSc graduate Douglas Yuen have created HoloCell, an educational software. Using Microsoft’s revolutionary HoloLens AR glasses, HoloCell provides a mixed reality experience allowing users to explore a 3D simulation of the inner workings, organelles, and molecules of a healthy human cell.
Upload is teaming up with Udacity, Google and HTC to build an industry-recognized VR certification program.
…
According to Udacity representatives, the organization will now be adding a VR track to its “nanodegree” program. Udacity’s nanodegrees are certification routes that can be completed entirely online at a student’s own pace. These courses typically take between 6 and 12 months and cost $199 per month. Students will also receive half of their tuition back if they complete a course within six months. The new VR course will follow this pattern as well.
The VR nanodegree program was curated by Udacity after the organization interviewed dozens of VR savvy companies about the type of skills they look for in a potential new hire. This information was then built into a curriculum through a joint effort between Google, HTC and Upload.
Virtual reality helps Germany catch last Nazi war criminals — from theguardian.com by Agence France-Presse
Lack of knowledge no longer an excuse as precise 3D model of Auschwitz, showing gas chambers and crematoria, helps address atrocities
Excerpt:
German prosecutors and police have developed 3D technology to help them catch the last living Nazi war criminals with a highly precise model of Auschwitz.
German prosecutors and police have begun using virtual reality headsets in their quest to bring the last remaining Auschwitz war criminals to justice, AFP reported Sunday.
Using the blueprints of the death camp in Nazi-occupied Poland, Bavarian state crime office digital imaging expert Ralf Breker has created a virtual reality model of Auschwitz which allows judges and prosecutors to mimic moving around the camp as it stood during the Holocaust.
Technology is hoping to turn empathy into action. Or at least, the United Nations is hoping to do so. The intergovernmental organization is more than seven decades old at this point, but it’s constantly finding new ways to better the world’s citizenry. And the latest tool in its arsenal? Virtual reality.
Last year, the UN debuted its United Nations Virtual Reality, which uses the technology to advocate for communities the world over. And more recently, the organization launched an app made specifically for virtual reality films. First debuted at the Toronto International Film Festival, this app encourages folks to not only watch the UN’s VR films, but to then take action by way of donations or volunteer work.
If you’re an Apple user and want an untethered virtual reality system, you’re currently stuck with Google Cardboard, which doesn’t hold a candle to the room scale VR provided by the HTC Vive (a headset not compatible with Macs, by the way). But spatial computing company Occipital just figured out how to use their Structure Core 3D Sensor to provide room scale VR to any smartphone headset—whether it’s for an iPhone or Android.
The Body VR is a great example of how the Oculus Rift and Gear VR can be used to educate as well as entertain. Starting today, it’s also a great example of how the HTC Vive can do the same.
The developers previously released this VR biology lesson for free back at the launch of the Gear VR and, in turn, the Oculus Rift. Now an upgraded version is available on Valve and HTC’s Steam VR headset. You’ll still get the original experience in which you explore the human body, travelling through the bloodstream to learn about blood cells and looking at how organelles work. The piece is narrated as you go.
For a moment, students were taken into another world without leaving the great halls of Harvard. Some students had a great time exploring the ocean floor and saw unique underwater animals, others tried their hand in hockey, while others screamed as they got into a racecar and sped on a virtual speedway. All of them, getting a taste of what virtual and augmented reality looks like.
All of these, of course, were not just about fun but on how especially augmented and virtual reality can transform every kind of industry. This will be discussed and demonstrated at the i-lab in the coming weeks with Rony Abovitz, CEO of Magic Leap Inc., as the keynote speaker.
Abovitz was responsible for developing the “Mixed Reality Lightfield,” a technology that combines augmented and virtual reality. According to Abovitz, it will help those who are struggling to transfer two-dimensional information or text into “spatial learning.”
“I think it will make life easier for a lot of people and open doors for a lot of people because we are making technology fit how our brains evolved into the physics of the universe rather than forcing our brains to adapt to a more limited technology,” he added.
From DSC: I have attended the Next Generation Learning Spaces Conference for the past two years. Both conferences were very solid and they made a significant impact on our campus, as they provided the knowledge, research, data, ideas, contacts, and the catalyst for us to move forward with building a Sandbox Classroom on campus. This new, collaborative space allows us to experiment with different pedagogies as well as technologies. As such, we’ve been able to experiment much more with active learning-based methods of teaching and learning. We’re still in Phase I of this new space, and we’re learning new things all of the time.
For the upcoming conference in February, I will be moderating a New Directions in Learning panel on the use of augmented reality (AR), virtual reality (VR), and mixed reality (MR). Time permitting, I hope that we can also address other promising, emerging technologies that are heading our way such as chatbots, personal assistants, artificial intelligence, the Internet of Things, tvOS, blockchain and more.
The goal of this quickly-moving, engaging session will be to provide a smorgasbord of ideas to generate creative, innovative, and big thinking. We need to think about how these topics, trends, and technologies relate to what our next generation learning environments might look like in the near future — and put these things on our radars if they aren’t already there.
Key takeaways for the panel discussion:
Reflections regarding the affordances that new developments in Human Computer Interaction (HCI) — such as AR, VR, and MR — might offer for our learning and our learning spaces (or is our concept of what constitutes a learning space about to significantly expand?)
An update on the state of the approaching ed tech landscape
Creative, new thinking: What might our next generation learning environments look like in 5-10 years?
I’m looking forward to catching up with friends, meeting new people, and to the solid learning that I know will happen at this conference. I encourage you to check out the conference and register soon to take advantage of the early bird discounts.
Virtual reality technology holds enormous potential to change the future for a number of fields, from medicine and business to architecture and manufacturing.
Psychologists and other medical professionals are using VR to heighten traditional therapy methods and find effective solutions for treatments of PTSD, anxiety and social disorders. Doctors are employing VR to train medical students in surgery, treat patients’ pains and even help paraplegics regain body functions.
In business, a variety of industries are benefiting from VR. Carmakers are creating safer vehicles, architects are constructing stronger buildings and even travel agencies are using it to simplify vacation planning.
Google has unveiled a new interactive online exhibit that takes users on a tour of 10 Downing Street in London — home of the U.K. Prime Minister.
The building has served as home to countless British political leaders, from Winston Churchill and Margaret Thatcher through to Tony Blair and — as of a few months ago — Theresa May. But, as you’d expect in today’s security-conscious age, gaining access to the residence isn’t easy; the street itself is gated off from the public. This is why the 10 Downing Street exhibit may capture the imagination of politics aficionados and history buffs from around the world.
The tour features 360-degree views of the various rooms, punctuated by photos and audio and video clips.
In a slightly more grounded environment, the HoloLens is being used to assist technicians in elevator repairs.
Traversal via elevator is such a regular part of our lives that its importance is rarely recognized…until elevators stop working as they should. ThyssenKrupp AG, one of the largest elevator suppliers, recognizes how essential they are, as well as how the simplest malfunctions can disrupt the lives of millions. As announced on its blog, Microsoft is partnering with ThyssenKrupp to equip 24,000 of its technicians with HoloLens.
Insert from DSC re: the above piece re: HoloLens:
Will technical communicators need to augment their skillsets? It appears so.
But in a world where no moment is too small to record with a mobile sensor, and one in which time spent in virtual reality keeps going up, interesting parallels start to emerge with our smartphones and headsets.
Let’s look at how the future could play out in the real world by observing three key drivers: VR video adoption, mobile-video user needs and the smartphone camera rising tide.
“Individuals with autism may become overwhelmed and anxious in social situations,” research clinician Dr Nyaz Didehbani said.
“The virtual reality training platform creates a safe place for participants to practice social situations without the intense fear of consequence,” said Didehbani.
The participants who completed the training demonstrated improved social cognition skills and reported better relationships, researchers said.
AI chatbot apps to infiltrate businesses sooner than you think — from searchbusinessanalytics.techtarget.com by Bridget Botelho
Artificial intelligence chatbots aren’t the norm yet, but within the next five years, there’s a good chance the sales person emailing you won’t be a person at all.
Excerpt:
In fact, artificial intelligence has come so far so fast in recent years, Gartner predicts it will be pervasive in all new products by 2020, with technologies including natural language capabilities, deep neural networks and conversational capabilities.
Other analysts share that expectation. Technologies that encompass the umbrella term artificial intelligence — including image recognition, machine learning, AI chatbots and speech recognition — will soon be ubiquitous in business applications as developers gain access to it through platforms such as the IBM Watson Conversation API and the Google Cloud Natural Language API.
Facebook introduced chatbots on Messenger three months ago, and the company has shared today that over 11,000 bots are active on the messaging service. The Messenger Platform has picked up an update that adds a slew of new features to bots, such as a persistent menu that lists a bot’s commands, quick replies, the ability to respond with GIFs, audio, video, and other files, and a rating system to provide feedback to bot developers.
In another example, many businesses use interactive voice response (IVR) telephony systems, which have limited functionalities and often provide a poor user experience. Chatbots can replace these applications in future where the user will interact naturally to get relevant information without following certain steps or waiting for a logical sequence to occur.
…
Chatbots are a good starting point, but the future lies in more advanced versions of audio and video bots. Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana, Google with its voice assistance, are working in the same direction to achieve it. Bot ecosystems will become even more relevant in the phase of IoT mass adoption and improvement of input/output (I/O) technology.
With big players investing heavily in AI, chatbots are likely to be an increasing feature of social media and other communications platforms.
Chatbots are software programs that use messaging platforms as the interface to perform a wide variety of tasks—everything from scheduling a meeting to reporting the weather, to helping a customer buy a sweater.
Because texting is the heart of the mobile experience for smartphone users, chatbots are a natural way to turn something users are very familiar with into a rewarding service or marketing opportunity.
And when you consider that the top 4 messaging apps reach over 3 billion global users (MORE than the top 4 social networks), you can see that the opportunity is huge.
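At their simplest, the “wide variety of tasks” described above rests on intent matching: deciding which task a user’s message is asking for. Here is a deliberately minimal keyword-overlap sketch; the `INTENTS` table and `match_intent` helper are illustrative assumptions, not any vendor’s API (production systems use trained language models instead):

```python
# Minimal sketch of keyword-based intent matching for a simple chatbot.
# Each intent is a set of trigger words; the intent with the largest
# overlap with the user's message wins.

INTENTS = {
    "schedule_meeting": {"schedule", "meeting", "calendar"},
    "weather": {"weather", "forecast", "rain"},
    "buy_sweater": {"buy", "sweater", "order"},
}

def match_intent(message):
    """Return the intent whose keywords best overlap the message, or None."""
    words = set(message.lower().split())
    best, best_score = None, 0
    for intent, keywords in INTENTS.items():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = intent, score
    return best
```

Once an intent is matched, the bot hands off to the corresponding service (a calendar API, a weather feed, a storefront), which is where the marketing opportunity the article describes comes in.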
The Xiaoice chat bot — pronounced “Shao-ice” and translated as “little Bing” — born as an experiment by Microsoft Research in 2014, reaches 40 million followers in China, who often literally talk with her for hours.
At her most active, Xiaoice is holding down as many as 23 conversations a session, says Microsoft Research NExT leader Dr. Peter Lee. It’s even evolved into a nice little sideline business for Microsoft, thanks to a partnership with Chinese e-retailer JD.com that lets users buy products by talking to Xiaoice.
The reason Xiaoice is so successful is she was born of a different kind of philosophical experiment: Instead of building a chat bot that was useful, Microsoft simply tried to make it fun to talk to.
Regarding the new Mirror product from Estimote — i.e., the world’s first video-enabled beacon — what might the applications look like for active learning classrooms (ALCs)?
That is, could students pre-load their content, then come into an active learning classroom and, upon request, launch an app which would then present their content to the nearest display?
Today we want to move contextual computing to a completely new level. We are happy to announce our newest product: Estimote Mirror. It’s the world’s first video-enabled beacon. Estimote Mirror can not only communicate with nearby phones and their corresponding apps, but also take content from these apps and display it on any digital screen around you.
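The “nearest display” step in the classroom scenario above could work by comparing beacon signal strengths: RSSI is measured in dBm, and values closer to zero mean a stronger (nearer) signal. This sketch is an assumption about how such an app might decide; the scan format and `nearest_display` helper are invented, not part of Estimote’s SDK:

```python
# Hypothetical sketch: pick the display whose beacon has the strongest
# signal (highest RSSI), i.e. the one physically nearest the student.

def nearest_display(beacons):
    """Choose a display from a scan.

    beacons: list of (display_id, rssi_dbm) tuples; returns the display_id
    with the strongest signal, or None if nothing was detected.
    """
    if not beacons:
        return None
    return max(beacons, key=lambda b: b[1])[0]

scan = [("front-wall", -70), ("east-pod", -48), ("west-pod", -63)]
# The student's app would then push their pre-loaded content to this display:
print(nearest_display(scan))  # → east-pod
```

Real deployments would smooth RSSI over several readings before deciding, since raw signal strength fluctuates, but the proximity-driven handoff is the core idea.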