“The result is a really strong sense of presence,” said David Cole, who helped found NextVR as a 3D company in 2009. “A vivid sense.”
“In some ways, we could still be at a point in time where a lot of people don’t yet know that they want this in VR,” said David Cramer, NextVR’s chief operating officer. “The thing that we’ve seen is that when people do see it, it just blows away their expectations.”
From DSC: Hmm…the above piece from The Mercury News on #VR speaks of presence. A vivid sense of presence.
If they can do this with an NBA game, why can't we do this with remote learners & bring them into face-to-face classrooms? How might VR be used in online learning and distance education? It could be an interesting new revenue stream for colleges and universities…and help serve more people who want to learn but might not be able to move to certain locations and/or attend face-to-face classrooms. Applications could exist within the corporate training/L&D world as well.
The authoritative CB Insights lists imminent Future Tech Trends: customized babies; personalized foods; robotic companions; 3D printed housing; solar roads; ephemeral retail; enhanced workers; lab-engineered luxury; botroots movements; microbe-made chemicals; neuro-prosthetics; instant expertise; AI ghosts. You can download the whole outstanding report here (125 pgs).
From DSC: Though I’m generally pro-technology, there are several items in here which support the need for all members of society to be informed and have some input into if and how these technologies should be used. Prime example: Customized babies. The report discusses the genetic modification of babies: “In the future, we will choose the traits for our babies.” Veeeeery slippery ground here.
Amazon’s Alexa has gained many skills over the past year, such as being able to read tweets or deliver election results and fantasy football scores. Starting on Wednesday, you’ll be able to ask Alexa about Mars.
The new skill for the voice-controlled speaker comes courtesy of NASA’s Jet Propulsion Laboratory. It’s the first Alexa app from the space agency.
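Skills like JPL's are typically implemented as a cloud function that receives a JSON request and returns speech in the Alexa Skills Kit response format. Here is a rough, hypothetical sketch of such a handler (not NASA's actual code; the intent name and the Mars fact are invented for illustration):

```python
# Hypothetical sketch of an Alexa skill handler (NOT NASA's actual code)
# using the Alexa Skills Kit JSON response envelope.

def build_response(speech_text):
    """Wrap plain text in the Alexa Skills Kit response format."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": True,
        },
    }

def lambda_handler(event, context=None):
    """Entry point a cloud function would expose for each voice request."""
    intent = event.get("request", {}).get("intent", {}).get("name", "")
    if intent == "GetMarsFactIntent":  # hypothetical intent name
        return build_response("Mars is the fourth planet from the sun.")
    return build_response("Sorry, I don't know that yet.")

print(lambda_handler({"request": {"intent": {"name": "GetMarsFactIntent"}}}))
```

The heavy lifting (wake word, speech recognition, intent parsing) happens on Amazon's side; the skill itself only maps an intent name to a spoken reply.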
Tom Soderstrom, the chief technology officer at NASA's Jet Propulsion Laboratory, was on hand at the AWS re:invent conference in Las Vegas tonight to make the announcement.
Amazon today announced three new artificial intelligence-related toolkits for developers building apps on Amazon Web Services.
At the company’s AWS re:invent conference in Las Vegas, Amazon showed how developers can use three new services — Amazon Lex, Amazon Polly, Amazon Rekognition — to build artificial intelligence features into apps for platforms like Slack, Facebook Messenger, ZenDesk, and others.
The idea is to let developers utilize the machine learning algorithms and technology that Amazon has already created for its own processes and services like Alexa. Instead of developing their own AI software, AWS customers can simply use an API call or the AWS Management Console to incorporate AI features into their own apps.
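To make the "simply use an API call" point concrete, here is a sketch of what a Polly text-to-speech request looks like from Python's boto3 SDK. The helper function is mine, not part of the SDK, and the credentialed network call is left commented out so the sketch stays self-contained:

```python
import json

# Sketch of calling Amazon Polly via the AWS SDK (boto3). The keyword
# arguments follow Polly's synthesize_speech API; the actual network call
# requires AWS credentials, so it is shown but commented out.

def polly_request(text, voice="Joanna", fmt="mp3"):
    """Build the keyword arguments for polly.synthesize_speech()."""
    return {"Text": text, "VoiceId": voice, "OutputFormat": fmt}

params = polly_request("Welcome to class. Today we cover photosynthesis.")
print(json.dumps(params, indent=2))

# With credentials configured, the real call would be roughly:
#   import boto3
#   polly = boto3.client("polly")
#   audio = polly.synthesize_speech(**params)["AudioStream"].read()
```

That is the whole integration surface: no model training, no infrastructure, just a request and a returned audio stream.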
AWS Announces Three New Amazon AI Services
Amazon Lex, the technology that powers Amazon Alexa, enables any developer to build rich, conversational user experiences for web, mobile, and connected device apps; preview starts today
Amazon Polly transforms text into lifelike speech, enabling apps to talk with 47 lifelike voices in 24 languages
Amazon Rekognition makes it easy to add image analysis to applications, using powerful deep learning-based image and face recognition
Capital One, Motorola Solutions, SmugMug, American Heart Association, NASA, HubSpot, Redfin, Ohio Health, DuoLingo, Royal National Institute of Blind People, LingApps, GoAnimate, and Coursera are among the many customers using these Amazon AI Services
SEATTLE–(BUSINESS WIRE)–Nov. 30, 2016– Today at AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), announced three Artificial Intelligence (AI) services that make it easy for any developer to build apps that can understand natural language, turn text into lifelike speech, have conversations using voice or text, analyze images, and recognize faces, objects, and scenes. Amazon Lex, Amazon Polly, and Amazon Rekognition are based on the same proven, highly scalable Amazon technology built by the thousands of deep learning and machine learning experts across the company. Amazon AI services all provide high-quality, high-accuracy AI capabilities that are scalable and cost-effective. Amazon AI services are fully managed services so there are no deep learning algorithms to build, no machine learning models to train, and no up-front commitments or infrastructure investments required. This frees developers to focus on defining and building an entirely new generation of apps that can see, hear, speak, understand, and interact with the world around them.
Teachers at Coppell Independent School District have become the first to use a new IBM and Apple technology platform built to aid personalized learning.
IBM Watson Element for Educators pairs IBM analytics and data tools such as cognitive computing with Apple design. It integrates student grades, interests, participation, and trends to help educators determine how a student learns best, the company says.
It also recommends learning content personalized to each student. The platform might suggest a reading assignment on astronomy for a young student who has shown an interest in space.
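As a toy illustration of that kind of interest matching (not IBM's actual algorithm; the readings and tags below are invented), content can be scored by its overlap with a student's recorded interests:

```python
# Toy illustration of interest-based content recommendation (NOT IBM's
# algorithm): score each reading by overlap with a student's interests.

def recommend(student_interests, readings):
    """Return readings sorted by how many of the student's interests they tag."""
    scored = [(len(set(tags) & set(student_interests)), title)
              for title, tags in readings.items()]
    return [title for score, title in sorted(scored, reverse=True) if score > 0]

# Hypothetical catalog of tagged readings.
readings = {
    "Intro to the Solar System": ["space", "astronomy"],
    "Ocean Food Webs": ["biology", "oceans"],
    "Rockets and Orbits": ["space", "physics"],
}
print(recommend(["space"], readings))
```

Real systems weigh far more signals (grades, participation, trends), but the core move is the same: turn profile data into a ranking over content.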
From DSC: Technologies involved with systems like IBM’s Watson will likely bring some serious impact to the worlds of education and training & development. Such systems — and the affordances that they should be able to offer us — should not be underestimated. The potential for powerful, customized, personalized learning could easily become a reality in K-20 as well as in the corporate training space. This is an area to keep an eye on for sure, especially with the growing influence of cognitive computing and artificial intelligence.
These kinds of technology should prove helpful in suggesting modules and courses (i.e., digital learning playlists), but I think the more powerful systems will be able to drill down far more minutely than that. I think these types of systems will be able to assist with all kinds of math problems and equations as well as analyze writing examples, correct language mispronunciations, and more (perhaps this is already here…apologies if so). In other words, the systems will “learn” where students can go wrong doing a certain kind of math equation…and then suggest steps to correct things when the system spots a mistake (or provide hints at how to correct mistakes).
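A bare-bones version of that idea, recognizing which wrong answer a student gave and hinting at the likely slip, might look like this (illustrative only; real tutoring systems model far richer error patterns):

```python
# Minimal rule-based sketch of spotting common mistakes when solving
# a*x + b = c (illustrative only, not a production tutoring system).

def check_step(a, b, c, student_x):
    """Check a student's answer to a*x + b = c and hint at a likely slip."""
    correct = (c - b) / a
    if student_x == correct:
        return "correct"
    if student_x == (c + b) / a:
        return "hint: subtract b from both sides, don't add it"
    if student_x == (c - b) * a:
        return "hint: divide by a, don't multiply"
    return "hint: re-check each step"

print(check_step(2, 3, 11, 4))   # 2x + 3 = 11 -> x = 4
print(check_step(2, 3, 11, 7))   # likely sign error: (11 + 3) / 2 = 7
```

The system "learns where students go wrong" by accumulating such error patterns, then matching a wrong answer against them to pick the most useful hint.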
This road takes us down to places where we have:
Web-based learner profiles — including learner’s preferences, passions, interests, skills
Microlearning/badging/credentialing — likely using blockchain
Learning agents/bots to “contact” for assistance
Guidance for lifelong learning
More choice, more control
First IBM Watson Education App for iPad Delivers Personalized Learning for K-12 Teachers and Students — from prnewswire.com
Educators at Coppell Independent School District in Texas first to use new iPad app to tailor learning experiences to students' interests and aptitudes
Excerpts: With increasing demands on educators, teachers need tools that will enable them to better identify the individual needs of all students while designing learning experiences that engage and hold the students' interest as they master the content. This is especially critical given that approximately one third of American students require remedial education when they enter college today, and current college attainment rates are not keeping pace with the country's projected workforce needs. A view of academic and day-to-day updates in real time can help teachers provide personalized support when students need it.
IBM Watson Element provides teachers with a holistic view of each student through a fun, easy-to-use and intuitive mobile experience that is a natural extension of their work. Teachers can get to know their students beyond their academic performance, including information about personal interests and important milestones students choose to share. For example, teachers can input notes when a student's highly anticipated soccer match is scheduled, when another has just been named president for the school's World Affairs club, and when another has recently excelled following a science project that sparked a renewed interest in chemistry. The unique "spotlight" feature in Watson Element provides advanced analytics that enables deeper levels of communication between teachers about their students' accomplishments and progress. For example, if a student is excelling academically, teachers can spotlight that student, praising their accomplishments across the school district. Or, if a student received a top award in the district art show, a teacher can spotlight the student so their other teachers know about it.
If you enjoyed this article, please consider sharing it!
Free app Duolingo is a great way to learn the basics of a new language, with small daily lessons that gradually increase your skills, with rewards for progressing. Now the service has added a new feature that’s a little different from the back-and-forth translation — text-based chatbots.
These are aimed at helping you improve your conversational skills and skills you might use in real life, such as ordering food, visiting a tourist attraction, shopping for clothing or catching a cab. A variety of scenarios will see you learning how to follow a set of directions, or talk with a doctor. According to the Duolingo chatbot Web page, these bots are programmed to react to thousands of different responses.
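A scripted, keyword-matching bot captures the basic mechanics of such scenario practice (this is an illustrative sketch, not Duolingo's implementation; the scenario and phrases are invented):

```python
# Illustrative sketch (NOT Duolingo's implementation) of a scenario-scripted
# chatbot: match keywords in the learner's reply to pick the next prompt.

RESTAURANT_SCRIPT = {
    "greeting": "Bonjour! What would you like to order?",
    "responses": [
        (["pizza", "pasta", "salad"], "Good choice! Anything to drink?"),
        (["water", "coffee", "wine"], "Coming right up!"),
    ],
    "fallback": "Sorry, could you say that another way?",
}

def reply(script, learner_text):
    """Return the bot's reply by keyword-matching the learner's message."""
    words = learner_text.lower().split()
    for keywords, answer in script["responses"]:
        if any(k in words for k in keywords):
            return answer
    return script["fallback"]

print(reply(RESTAURANT_SCRIPT, "I would like a pizza please"))
```

Duolingo's bots reportedly handle thousands of response variations; the scripted structure above just shows why a conversational drill can stay on track even when the learner's wording varies.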
Which jobs/positions are being impacted by new forms of Human Computer Interaction (HCI)?
What new jobs/positions will be created by these new forms of HCI?
Will it be necessary for instructional technologists, instructional designers, teachers, professors, trainers, coaches, learning space designers, and others to pulse check this landscape? Will that be enough?
Or will such individuals need to dive much deeper than that in order to build the necessary skillsets, understandings, and knowledgebases to meet the new/changing expectations for their job positions?
How many will say, “No thanks, that’s not for me” — causing organizations to create new positions that do dive deeply in this area?
Will colleges and universities build and offer more courses involving HCI?
Will Career Services Departments get up to speed in order to help students carve out careers involving new forms of HCI?
How will languages and language translation be impacted by voice recognition software?
Will new devices be introduced to our classrooms in the future?
In the corporate space, how will training departments handle these new needs and opportunities? How will learning & development groups be impacted? How will they respond in order to help the workforce get/be prepared to take advantage of these sorts of technologies? What does it mean for these staff members personally? Do they need to invest in learning more about these advancements?
As an example of what I’m trying to get at here, who all might be involved with an effort like Echo Dot? What types of positions created it? Who all could benefit from it? What other platforms could these technologies be integrated into? Besides the home, where else might we find these types of devices?
Echo Dot is a hands-free, voice-controlled device that uses the same far-field voice recognition as Amazon Echo. Dot has a small built-in speaker—it can also connect to your speakers over Bluetooth or with the included audio cable. Dot connects to the Alexa Voice Service to play music, provide information, news, sports scores, weather, and more—instantly.
Echo Dot can hear you from across the room, even while music is playing. When you want to use Echo Dot, just say the wake word “Alexa” and Dot responds instantly. If you have more than one Echo or Echo Dot, you can set a different wake word for each—you can pick “Amazon”, “Alexa” or “Echo” as the wake word.
Or how might students learn about the myriad of technologies involved with IBM's Watson? What courses are out there today that address these technologies? Are more in the works? In which areas (Computer Science, User Experience Design, Interaction Design, other)?
Lots of questions…but few answers at this point. Still, given the increasing pace of technological change, it’s important that we think about this type of thing and become more responsive, nimble, and adaptive in our organizations and in our careers.
If you’ve ever wanted to try out the Amazon Echo before shelling out for one, you can now do just that right from your browser. Amazon has launched a dedicated website where you can try out an Echo simulation and put Alexa’s myriad of skills to the test.
From DSC: The use of the voice and gesture to communicate to some type of computing device or software program represent growing types of Human Computer Interaction (HCI). With the growth of artificial intelligence (AI), personal assistants, and bots, we should expect to see more voice recognition services/capabilities baked into an increasing amount of products and solutions in the future.
Given these trends, personnel working within K-12 and higher ed need to start building their knowledgebases now so that we can begin offering more courses in the near future to help students build their skillsets. Current user experience designers, interface designers, programmers, graphic designers, and others will also need to augment their skillsets.
Which is why we’re pleased to introduce…the Google assistant. The assistant is conversational—an ongoing two-way dialogue between you and Google that understands your world and helps you get things done. It makes it easy to buy movie tickets while on the go, to find that perfect restaurant for your family to grab a quick bite before the movie starts, and then help you navigate to the theater. It’s a Google for you, by you.
Google Home is a voice-activated product that brings the Google assistant to any room in your house. It lets you enjoy entertainment, manage everyday tasks, and get answers from Google—all using conversational speech. With a simple voice command, you can ask Google Home to play a song, set a timer for the oven, check your flight, or turn on your lights. It’s designed to fit your home with customizable bases in different colors and materials. Google Home will be released later this year.
Mobile apps often provide a better user experience than browser-based web apps, but you first have to find them, download them, and then try not to forget you installed them. Now, Google wants us to rethink what mobile apps are and how we interact with them.
Instant Apps, a new Android feature Google announced at its I/O developer conference today but plans to roll out very slowly, wants to bridge this gap between mobile apps and web apps by allowing you to use native apps almost instantly — even when you haven’t previously installed them — simply by tapping on a URL.
To the disappointment of many, Google Vice President of Virtual Reality Clay Bavor did not announce the much-rumoured (and now discredited) standalone VR HMD at today’s Google I/O keynote.
Instead, the company announced a new platform for VR on the upcoming Android N to live on called Daydream. Much like Google’s pre-existing philosophy of creating specs and then pushing the job of building hardware to other manufacturers, the group is providing the boundaries for the initial public push of VR on Android, and letting third-parties build the phones for it.
Speaking at the opening keynote for this week’s Google I/O developer conference, the company’s head of VR Clay Bavor announced that the latest version of Android, the unnamed Android N, would be getting a VR mode. Google calls the initiative to get the Android ecosystem ready for VR ‘Daydream’, and it sounds like a massive extension of the groundwork laid by Google Cardboard.
Google has announced a new smart messaging app, Allo. The app is based on your phone number, and it will continue to learn from you over time, making it smarter each day. In addition to this, you can add more emotion to your messages, in ways that you couldn’t before. You will be able to “whisper” or “shout” your message, and the font size will change depending on which you select. This is accomplished by pressing the send button and dragging up or down to change the level of emotion.
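The drag-to-emotion mechanic can be sketched as a simple mapping from drag distance to font size (illustrative only; Google has not published how Allo actually computes this):

```python
# Illustrative sketch (NOT Google's code) of Allo's "whisper/shout" idea:
# map how far the send button is dragged to a message font size.

def font_size(drag_px, base=16, min_size=10, max_size=40):
    """Scale font size with drag distance; dragging up shouts, down whispers."""
    return max(min_size, min(max_size, base + drag_px // 4))

print(font_size(0))    # no drag: neutral size
print(font_size(60))   # dragged up: shout
print(font_size(-20))  # dragged down: whisper
```

The clamping to a minimum and maximum keeps an extreme drag from producing unreadable text, which is presumably why the real gesture feels bounded.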
Like Facebook’s bots, the Google assistant is designed to be conversational. It will play on the company’s investment in natural language processing, talking to users in a dialogue format that feels like normal conversation, and helping users buy movie tickets, make dinner reservations and get directions. The announcement comes one month after Facebook CEO Mark Zuckerberg introduced Messenger with chatbots, which serves basically the same function.
Key point from DSC: Digitally-based means of learning are going to skyrocket!!! Far more than what we’ve seen so far! There are several trends that are occurring to make this so.
As background here, some of the keywords and phrases that are relevant to this posting include:
Wireless content sharing
Wireless collaboration solutions
Active learning based classrooms
Bring Your Own Device (BYOD)
Enterprise wireless display solutions
Enterprise collaboration solutions
Cross platform support: iOS, Android, Windows
Some of the relevant products in this area include:
Mezzanine from Oblong Industries
Montage from DisplayNote Technologies
ThinkHub and ViewHub from T1V
Haworth Workware Wireless
NovoConnect from Vivitek
First of all, consider the following products and the functionalities they offer.
People who are in the same physical space can collaborate with people from all over the world — no matter if they are at home, in another office, on the road, etc.
For several of these products, remote employees/consultants/trainers/learners can contribute content to the discussions, just like someone in the same physical location can.
Many of these sorts of systems & software are aimed at helping people collaborate — again, regardless of where they are located. Remote learners/content contributors are working in tandem with a group of people in the same physical location. If this is true in business, why can't it be true in the world of education?
So keep that in mind, as I’m now going to add on a few other thoughts and trends that build upon these sorts of digitally-based means of collaborating.
Q: Towards that end…ask yourself, what do the following trends and items have in common?
The desire to capture and analyze learner data to maximize learning
Colleges’ and universities’ need to increase productivity (which is also true in the corporate & K-12 worlds)
The trend towards implementing more active learning-based environments
The increasing use of students' own devices for their learning (i.e., the BYOD phenomenon)
The continued growth and increasing sophistication of algorithms
A: All of these things may cause digitally-based means of learning to skyrocket!!!
To wrap up this line of thought, below are some excerpts from recent articles that illustrate what I’m trying to get at here.
Embrace the Power of Data A continuous improvement mindset is important. Back-end learning analytics, for example, can reveal where large numbers of students are struggling, and may provide insights into questions that require new feedback or content areas that need more development. Data can also highlight how students are interacting with the content and illuminate things that are working well—students’ lightbulb moments.
Mitchell gave the example of flight simulators, which not only provide students with a way to engage in the activity that they want to learn, but also have data systems that monitor students’ learning over time, providing them with structured feedback at just the right moment. This sort of data-centric assessment of learning is happening in more and more disciplines — and that opens the door to more innovation, he argued.
A promising example, said Thille, is the use of educational technology to create personalized and adaptive instruction. As students interact with adaptive technology, the system collects large amounts of data, models those data, and then makes predictions about each student based on their interactions, she explained. Those predictions are then used for pedagogical decision-making — either feeding information back into the system to give the student a personalized learning path, or providing insights to faculty to help them give students individualized support.
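One widely used technique behind such predictions is Bayesian knowledge tracing, which updates an estimate of a student's mastery of a skill after every observed answer. A minimal sketch (the parameter values here are illustrative, not fitted to real data):

```python
# Minimal Bayesian knowledge tracing (BKT) sketch: update the probability
# that a student has mastered a skill after each observed answer.
# Slip/guess/learn parameter values are illustrative, not fitted.

def bkt_update(p_know, correct, slip=0.1, guess=0.2, learn=0.15):
    """One BKT step: Bayesian posterior after an answer, then a learning transition."""
    if correct:
        evidence = p_know * (1 - slip)
        posterior = evidence / (evidence + (1 - p_know) * guess)
    else:
        evidence = p_know * slip
        posterior = evidence / (evidence + (1 - p_know) * (1 - guess))
    return posterior + (1 - posterior) * learn

p = 0.3  # prior probability of mastery
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
print(round(p, 3))
```

The running estimate `p` is exactly the kind of prediction that can either steer the student's personalized path or surface to faculty as a flag for individualized support.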
“We need the models and the data to be open, transparent, peer-reviewable and subject to academic scrutiny.”
“We began to actually examine what we could do differently — based not upon hunches and traditions, but upon what the data told us the problems were for the students we enroll,” said Renick. “We made a commitment not to raise our graduation rate through getting better students, but through getting better — and that gain meant looking in the mirror and making some significant changes.”
A 21st-century learning culture starts with digital content. In 2010, Jackson State University was looking for ways that technology could better address the needs of today’s learner. “We put together what we call our cyberlearning ecosystem,” said Robert Blaine, dean of undergraduate studies and cyberlearning. “What that means is that we’re building a 21st-century learning culture for all of our students, writ large across campus.” At the core of that ecosystem is digital content, delivered via university-supplied iPads.
On Bennett’s wish list right now is an application that allows students to give feedback at specific points of the videos that they’re watching at home. This would help him pinpoint and fix any “problem” areas (e.g. insufficient instructions for difficult topics/tasks) and easily see where students are experiencing the most difficulties.
TechSmith’s now-retired “Ask3” video platform, for example, would have done the trick. It allowed users to watch a video and ask text-based questions at the point where playback was stopped. “I’d like to be able to look at my content and say, ‘Here’s a spot where there are a lot of questions and confusion,'” said Bennett, who also sees potential in an “I get it” button that would allow students to hit the button when everything clicks. “That would indicate the minimum viable video that I’d need to produce.” Learning Catalytics offers a similar product at a fee, Bennett said, “but I can’t charge my students $20 a year to use it.”
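The "confusion hotspot" idea itself is simple to prototype: bin the timestamps at which students asked questions and surface the busiest stretches of the video (the data below is hypothetical):

```python
# Sketch of the "confusion hotspot" idea: bin the timestamps at which
# students paused to ask questions, and surface the busiest spots.
from collections import Counter

def hotspots(timestamps_sec, bin_size=30, top=3):
    """Group question timestamps into fixed bins and return the busiest ones."""
    bins = Counter((t // bin_size) * bin_size for t in timestamps_sec)
    return bins.most_common(top)

# Hypothetical data: seconds into the video at which questions were asked.
questions = [12, 15, 95, 100, 102, 110, 250, 255, 400]
for start, count in hotspots(questions):
    print(f"{start}-{start + 30}s: {count} questions")
```

An "I get it" button would feed the same pipeline with positive signals, letting an instructor see both where explanations fail and where they land.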
Everyone needs an expert sometimes; a helping hand to point you in the right direction so you can get the job done.
Maybe you’re a mechanic working on an exotic car. You know everything there is to know about cars in general — but only a handful of people really know this car. Alas, they’re all on the other side of the planet.
Maybe you’re working on an oil rig, and one of the panels is throwing out errors. “REPLACE VALVE 6B”, reads the screen. You know how to replace a valve! You… just don’t know where said valve is. Your company has experts for this — but they’ve all been called off to other rigs.
Maybe you’re just at home trying to figure out which of the zillion poorly labeled ports on that shiny new A/V receiver is the one that can support a 4k resolution. All you need is someone to point out the right one, and you’re set.
ScopeAR, a company from YC’s Summer 2015 class, wants to help experts be anywhere they need to be via the magic of augmented reality.