MEDFORD, Mass. — Amory Kahan, 7, wanted to know when it would be snack time. Harvey Borisy, 5, complained about a scrape on his elbow. And Declan Lewis, 8, was wondering why the two-wheeled wooden robot he was programming to do the Hokey Pokey wasn’t working. He sighed, “Forward, backward, and it stops.”
Declan tried it again, and this time the robot shook back and forth on the gray rug. “It did it!” he cried. Amanda Sullivan, a camp coordinator and a postdoctoral researcher in early childhood technology, smiled. “They’ve been debugging their Hokey Pokeys,” she said.
The children, at a summer camp last month run by the Developmental Technologies Research Group at Tufts University, were learning typical kid skills: building with blocks, taking turns, persevering through frustration. They were also, researchers say, learning the skills necessary to succeed in an automated economy.
Technological advances have rendered an increasing number of jobs obsolete in the last decade, and researchers say parts of most jobs will eventually be automated. What the labor market will look like when today’s young children are old enough to work is perhaps harder to predict than at any time in recent history. Jobs are likely to be very different, but we don’t know which will still exist, which will be done by machines and which new ones will be created.
Amazon’s Alexa voice platform has now passed 15,000 skills — the voice-powered apps that run on devices like the Echo speaker, Echo Dot, newer Echo Show and others. The figure is up from the 10,000 skills Amazon officially announced back in February, which had then represented a 3x increase from September.
The new 15,000 figure was first reported via third-party analysis from Voicebot, and Amazon has now confirmed to TechCrunch that the number is accurate.
According to Voicebot, which only analyzed skills in the U.S., the milestone was reached for the first time on June 30, 2017. During the month of June, new skill introductions increased by 23 percent, up from the less than 10 percent growth that was seen in each of the prior three months.
The milestone also represents a more than doubling of the number of skills that were available at the beginning of the year, when Voicebot reported there were then 7,000 skills. That number was officially confirmed by Amazon at CES.
From DSC: Again, I wonder…what are the implications for learning from this new, developing platform?
For years, students have turned to CliffsNotes for speedy reads of books, SparkNotes to whip up talking points for class discussions, and Wikipedia to pad their papers with historical tidbits. But today’s students have smarter tools at their disposal—namely, Wolfram|Alpha, a program that uses artificial intelligence to perfectly and untraceably solve equations. Wolfram|Alpha uses natural language processing technology, part of the AI family, to provide students with an academic shortcut that is faster than a tutor, more reliable than copying off of friends, and much easier than figuring out a solution yourself.
Use of Wolfram|Alpha is difficult to trace, and in the hands of ambitious students, its perfect solutions are having unexpected consequences.
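To make concrete the kind of symbolic shortcut such tools automate, here is a toy quadratic solver. This is not Wolfram|Alpha’s method or API, just a hand-rolled illustration of automated equation solving:

```python
# Toy illustration of the kind of step-by-step equation solving that
# tools like Wolfram|Alpha automate. This is NOT Wolfram|Alpha's API --
# just a hand-rolled solver for ax^2 + bx + c = 0.
import math

def solve_quadratic(a: float, b: float, c: float) -> list[float]:
    """Return the real roots of ax^2 + bx + c = 0, sorted ascending."""
    if a == 0:
        # Degenerate linear case: bx + c = 0
        return [] if b == 0 else [-c / b]
    disc = b * b - 4 * a * c
    if disc < 0:
        return []  # no real roots
    root = math.sqrt(disc)
    return sorted({(-b - root) / (2 * a), (-b + root) / (2 * a)})

# x^2 - 4 = 0  ->  x = -2 or x = 2
print(solve_quadratic(1, 0, -4))
```

The difference, of course, is that Wolfram|Alpha accepts the problem in natural language and handles far more than quadratics, which is what makes it so hard to trace.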
How is IBM using Watson’s intelligent tutoring system? We are attempting to mimic the best practices of human tutoring; the gold standard will always remain one-on-one, human-to-human tutoring. The whole idea of an intelligent tutoring system is a computing system that works autonomously with learners, with no human intervention. It is essentially playing the role of the teacher itself as it works with the learner. We are building conversational systems — systems that understand human conversation and dialogue — so that the system interacts with people in a very natural way through conversation. The system has the ability to ask questions, to answer questions, to know who you are and where you are in your learning journey, what you’re struggling with and what you’re strong on, and it will personalize its pedagogy to you.
…
There’s a natural language understanding system and a machine learning system that’s trying to figure out where you are in your learning journey and what the appropriate intervention is for you. The natural language system enables this interaction that’s very rich and conversation-based, where you can basically have a human-like conversation with it and, to a large extent, it will try to understand and to retrieve the right things for you. Again the most important thing is that we will set the expectations appropriately and we have appropriate exit criteria for when the system doesn’t actually understand what you’re trying to do.
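The architecture described above — an NLU component feeding a learner model that picks a pedagogical intervention — can be sketched in miniature. Everything below is a hypothetical toy, not IBM’s actual Watson Tutor system; the keyword matcher merely stands in for the real natural language understanding component:

```python
# Minimal sketch of the tutoring-loop architecture described above:
# a toy intent matcher stands in for the NLU component, and a small
# learner-model dict drives personalization. None of this is IBM's
# actual Watson API; every name here is hypothetical.

def detect_intent(utterance: str) -> str:
    """Toy stand-in for natural-language understanding: keyword matching."""
    text = utterance.lower()
    if "?" in text or text.startswith(("what", "how", "why")):
        return "ask_question"
    if any(w in text for w in ("stuck", "confused", "don't get")):
        return "needs_help"
    return "statement"

def tutor_reply(utterance: str, learner: dict) -> str:
    """Pick a pedagogical move based on intent and the learner model."""
    intent = detect_intent(utterance)
    if intent == "needs_help":
        topic = learner.get("struggling_with", "this topic")
        return f"Let's slow down and review {topic} step by step."
    if intent == "ask_question":
        return "Good question. What do you already know about it?"
    return "Tell me more about your thinking."

learner = {"name": "Jo", "struggling_with": "fractions"}
print(tutor_reply("I'm stuck on this problem", learner))
```

The "exit criteria" mentioned in the interview would live in the `statement` fallback branch: when the system cannot classify the utterance, it should hand off rather than guess.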
Grush: Then what are some of the implications you could draw from metrics like that one?
Christian: As we consider all the investment in those emerging technologies, the question many are beginning to ask is, “How will these technologies impact jobs and the makeup of our workforce in the future?”
While there are many thoughts and questions regarding the cumulative impact these technologies will have on our future workforce (e.g., “How many jobs will be displaced?”), the consensus seems to be that there will be massive change.
Whether our jobs are completely displaced or we end up working alongside robots, chatbots, workbots, or some other forms of AI-backed personal assistants, all of us will need to become lifelong learners — to be constantly reinventing ourselves. This assertion is also made in the aforementioned study from McKinsey: “AI promises benefits, but also poses urgent challenges that cut across firms, developers, government, and workers. The workforce needs to be re-skilled to exploit AI rather than compete with it…”
A side note from DSC: I began working on this vision prior to 2010…but I didn’t officially document it until 2012.
Learning from the Living [Class] Room:
A global, powerful, next generation learning platform
What does the vision entail?
A new, global, collaborative learning platform that offers more choice, more control to learners of all ages – 24×7 – and could become the organization that futurist Thomas Frey discusses here with Business Insider:
“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.
A learner-centered platform that is enabled by – and reliant upon – human beings but is backed up by a powerful suite of technologies that work together in order to help people reinvent themselves quickly, conveniently, and extremely cost-effectively
A customizable learning environment that will offer up-to-date streams of regularly curated content (i.e., microlearning) as well as engaging learning experiences
Along these lines, a lifelong learner can opt to receive an RSS feed on a particular topic until they master that concept; periodic quizzes (i.e., spaced repetition) determine that mastery. Once the concept is mastered, the system will ask the learner whether they still want to receive that particular stream of content.
A Netflix-like interface to peruse and select plugins to extend the functionality of the core product
An AI-backed system that analyzes employment trends and opportunities and highlights the courses and streams of content that will help someone obtain the most in-demand skills
A system that tracks learning and, via Blockchain-based technologies, feeds all completed learning modules/courses into learners’ web-based learner profiles
A learning platform that provides customized, personalized recommendation lists – based upon the learner’s goals
A platform that delivers customized, personalized learning within a self-directed course (meant for those content creators who want to deliver more sophisticated courses/modules while moving people through the relevant Zones of Proximal Development)
Notifications and/or inspirational quotes will be available upon request to help provide motivation, encouragement, and accountability – helping learners establish habits of continual, lifelong-based learning
(Potentially) An online-based marketplace, matching learners with teachers, professors, and other such Subject Matter Experts (SMEs)
(Potentially) Direct access to popular job search sites
(Potentially) Direct access to resources that describe what other companies do/provide and descriptions of any particular company’s culture (as described by current and former employees and freelancers)
(Potentially) Integration with one-on-one tutoring services
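The mastery logic in the vision above — keep a content stream flowing until spaced quizzes show mastery — can be sketched as follows. The three-in-a-row threshold and all names are assumptions for illustration, not part of any existing platform:

```python
# A minimal sketch of the mastery logic described above: keep a content
# stream flowing until spaced quizzes show mastery (here, three correct
# answers in a row). The threshold and the design are assumptions.

class MasteryTracker:
    def __init__(self, topic: str, required_streak: int = 3):
        self.topic = topic
        self.required_streak = required_streak
        self.streak = 0  # consecutive correct quiz answers

    def record_quiz(self, correct: bool) -> None:
        """A wrong answer resets the streak, as in Leitner-style review."""
        self.streak = self.streak + 1 if correct else 0

    @property
    def mastered(self) -> bool:
        return self.streak >= self.required_streak

    def keep_streaming(self) -> bool:
        """The platform would keep the RSS stream active until mastery."""
        return not self.mastered

tracker = MasteryTracker("machine learning basics")
for result in (True, False, True, True, True):
    tracker.record_quiz(result)
print(tracker.mastered)  # True: the last three quizzes were correct
```

A real implementation would also schedule the quizzes at increasing intervals, which is the "spaced" part of spaced repetition.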
Addendum from DSC (regarding the resource mentioned below): Note the voice recognition/control mechanisms on Westinghouse’s new product — also note the integration of Amazon’s Alexa into a “TV.”
The key selling point, of course, is the built-in Amazon Fire TV, which is controlled with the bundled Voice Remote and features Amazon’s Alexa assistant.
Holographic storytelling — from jwtintelligence.com
The stories of Holocaust survivors are brought to life with the help of interactive 3D technologies. New Dimensions in Testimony is a new way of preserving history for future generations. The project brings to life the stories of Holocaust survivors with 3D video, revealing raw first-hand accounts that are more interactive than learning through a history book. Holocaust survivor Pinchas Gutter, the first subject of the project, was filmed answering over 1,000 questions, generating approximately 25 hours of footage. By incorporating natural language processing from the USC Institute for Creative Technologies (ICT), people are able to ask Gutter’s projected image questions that trigger relevant responses.
Winner takes all — by Michael Moe, Luben Pampoulov, Li Jiang, Nick Franco, & Suzee Han
We did a lot of things that seemed crazy at the time. Many of those crazy things now have over a billion users, like Google Maps, YouTube, Chrome, and Android.
— Larry Page, CEO, Alphabet
Excerpt:
An alphabet is a collection of letters that represent language. Alphabet, accordingly, is a collection of companies that represent the many bets Larry Page is making to ensure his platform is built to not only survive, but to thrive in a future defined by accelerating digital disruption. It’s an “Alpha” bet on a diversified platform of assets.
If you look closely, the world’s top technology companies are making similar bets.
Technology in general, and the Internet in particular, is all about disproportionate gains to the leader in a category. Accordingly, as technology leaders like Facebook, Alphabet, and Amazon survey the competitive landscape, they have increasingly aimed to develop and acquire emerging technology capabilities across a broad range of complementary categories.
Here’s a short list of general tasks that deep learning can perform in real situations:
Identify faces (or more generally image categorization)
Read handwritten digits and texts
Recognize speech (no more transcribing interviews yourself)
Translate languages
Play computer games
Control self-driving cars (and other types of robots)
And there’s more. Just pause for a second and imagine all the things that deep learning could achieve. It’s amazing and perhaps a bit scary!
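All of the tasks above reduce to learning a function from examples. As a toy stand-in for "image categorization," here is a one-feature logistic classifier trained by gradient descent — the core mechanism that deep networks stack many layers of. This is purely illustrative, not an actual deep net:

```python
# A toy classifier trained by gradient descent: the single building
# block that deep learning stacks many layers of. Illustrative only.
import math

def train(points, labels, lr=0.5, epochs=500):
    """Fit w, b so that sigmoid(w*x + b) predicts the 0/1 label."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(points, labels):
            p = 1 / (1 + math.exp(-(w * x + b)))
            w -= lr * (p - y) * x   # gradient of log-loss w.r.t. w
            b -= lr * (p - y)       # gradient of log-loss w.r.t. b
    return w, b

def predict(w, b, x):
    return 1 if 1 / (1 + math.exp(-(w * x + b))) >= 0.5 else 0

# Separable toy data: negatives below 0, positives above.
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train(xs, ys)
print([predict(w, b, x) for x in xs])
```

Recognizing a face or transcribing speech uses the same loop, only with millions of parameters instead of two and much richer inputs than a single number.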
…
Currently there are already many great courses, tutorials, and books on the internet covering this topic, including (in no particular order, and not exhaustively):
Learning from the Living [Class] Room: A vision for a global, powerful, next generation learning platform
By Daniel Christian
NOTE: Having recently lost my Senior Instructional Designer position due to a staff reduction program, I am looking to help build such a platform as this. So if you are working on such a platform or know of someone who is, please let me know: danielchristian55@gmail.com.
I want to help people reinvent themselves quickly, efficiently, and cost-effectively — while providing more choice, more control to lifelong learners. This will become critically important as artificial intelligence, robotics, algorithms, and automation continue to impact the workplace.
Further details:
While basic courses will be accessible via mobile devices, the optimal learning experience will leverage two or more displays/devices. So while smaller smartphones, laptops, and/or desktop workstations will be used to communicate synchronously or asynchronously with other learners, the larger displays will deliver an excellent learning environment for times when there is:
A Subject Matter Expert (SME) giving a talk or making a presentation on any given topic
A need to display multiple things going on at once, such as:
The SME(s)
An application or multiple applications that the SME(s) are using
Content/resources that learners are submitting in real-time (think Bluescape, T1V, Prysm, other)
The ability to annotate on top of the application(s) and point to things w/in the app(s)
Media being used to support the presentation such as pictures, graphics, graphs, videos, simulations, animations, audio, links to other resources, GPS coordinates for an app such as Google Earth, other
Other attendees (think Google Hangouts, Skype, Polycom, or other videoconferencing tools)
An (optional) representation of the Personal Assistant (such as today’s Alexa, Siri, M, Google Assistant, etc.) that’s being employed via the use of Artificial Intelligence (AI)
This new learning platform will also feature:
Voice-based commands to drive the system (via Natural Language Processing (NLP))
Language translation (using techs similar to what’s being used in Translate One2One, an earpiece powered by IBM Watson)
Speech-to-text capabilities for use w/ chatbots, messaging, inserting discussion board postings
Text-to-speech capabilities as an assistive technology and also for everyone to be able to be mobile while listening to what’s been typed
Chatbots
For learning how to use the system
For asking questions of – and addressing any issues with – the organization owning the system (credentials, payments, obtaining technical support, etc.)
For asking questions within a course
As many profiles as needed per household
(Optional) Machine-to-machine-based communications to automatically launch the correct profile when the system is initiated (from one’s smartphone, laptop, workstation, and/or tablet to a receiver for the system)
(Optional) Voice recognition to efficiently launch the desired profile
(Optional) Facial recognition to efficiently launch the desired profile
(Optional) Upon system launch, to immediately return to where the learner previously left off
The capability of the webcam to recognize objects and bring up relevant resources for that object
A built in RSS feed aggregator – or a similar technology – to enable learners to tap into the relevant “streams of content” that are constantly flowing by them
Social media dashboards/portals – providing quick access to multiple sources of content and whereby learners can contribute their own “streams of content”
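The "built-in RSS feed aggregator" in the list above relies on an established, simple format. This sketch pulls item titles out of an RSS 2.0 feed using only the Python standard library; the feed content itself is made up for illustration:

```python
# Parse item titles from an RSS 2.0 feed -- the mechanism behind the
# "streams of content" idea above. The sample feed is invented.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Learning Stream</title>
  <item><title>Intro to NLP</title><link>http://example.com/1</link></item>
  <item><title>Spaced repetition 101</title><link>http://example.com/2</link></item>
</channel></rss>"""

def item_titles(feed_xml: str) -> list[str]:
    """Return the title of every <item> in an RSS 2.0 document."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]

print(item_titles(SAMPLE_FEED))  # ['Intro to NLP', 'Spaced repetition 101']
```

A full aggregator would fetch many such feeds on a schedule and merge them into the learner's personalized stream.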
In the future, new forms of Human Computer Interaction (HCI) such as Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) will be integrated into this new learning environment – providing entirely new means of collaborating with one another.
Likely players:
Amazon – personal assistance via Alexa
Apple – personal assistance via Siri
Google – personal assistance via Google Assistant; language translation
Facebook — personal assistance via M
Microsoft – personal assistance via Cortana; language translation
IBM Watson – cognitive computing; language translation
SYDNEY, June 12, 2017 /PRNewswire/ — Lingmo International, an Australian technology start-up, has today launched Translate One2One, an earpiece powered by IBM Watson that can translate spoken conversations within seconds. The first of its kind, it will hit global markets next month.
…
Unveiled at last week’s United Nations Artificial Intelligence (AI) for Good Summit in Geneva, Switzerland, the Translate One2One earpiece supports translations across English, Japanese, French, Italian, Spanish, Brazilian Portuguese, German and Chinese. Available to purchase today for delivery in July, the earpiece carries a price tag of $179 USD, and is the first independent translation device that doesn’t rely on Bluetooth or Wi-Fi connectivity.
Lingmo International, an Australian technology start-up, has today launched Translate One2One, an earpiece powered by IBM Watson that can efficiently translate spoken conversations within seconds.
From DSC: How much longer before this sort of technology gets integrated into videoconferencing and transcription tools that are used in online-based courses — enabling global learning at a scale never seen before? (Or perhaps NLP-based tools are already being integrated into global MOOCs and the like…not sure.) It would surely allow us to learn from each other across a variety of societies throughout the globe.
From DSC: In reviewing the item below, I wondered:
How should students — as well as Career Services Groups/Departments within institutions of higher education — respond to the growing use of artificial intelligence (AI) in peoples’ job searches?
My take on it? Each student needs to have a solid online-based footprint — such as offering one’s own streams of content via a WordPress-based blog, one’s Twitter account, and one’s LinkedIn account. That is, each student has to be out there digitally, not just physically. (Though I suspect having face-to-face conversations and interactions will always be an incredibly powerful means of obtaining jobs as well. But if this trend picks up steam, one’s online-based footprint becomes all the more important to finding work.)
The solution appeared in the form of artificial intelligence software from a young company called Interviewed. It speeds the vetting process by providing online simulations of what applicants might do on their first day as an employee. The software does much more than grade multiple-choice questions. It can capture not only so-called book knowledge but also more intangible human qualities. It uses natural-language processing and machine learning to construct a psychological profile that predicts whether a person will fit a company’s culture. That includes assessing which words he or she favors—a penchant for using “please” and “thank you,” for example, shows empathy and a possible disposition for working with customers—and measuring how well the applicant can juggle conversations and still pay attention to detail. “We can look at 4,000 candidates and within a few days whittle it down to the top 2% to 3%,” claims Freedman, whose company now employs 45 people. “Forty-eight hours later, we’ve hired someone.” It’s not perfect, he says, but it’s faster and better than the human way.
It isn’t just startups using such software; corporate behemoths are implementing it too. Artificial intelligence has come to hiring.
Predictive algorithms and machine learning are fast emerging as tools to identify the best candidates.
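A toy version of the word-choice signal described above: count courtesy words as a crude "empathy" feature. Real systems like the one described use full NLP pipelines and trained models; this only illustrates the idea of turning word choices into a numeric feature:

```python
# Toy "empathy" feature: the fraction of words in a transcript that
# signal courtesy. Purely illustrative; the word list is an assumption.
COURTESY_WORDS = {"please", "thank", "thanks", "appreciate", "sorry"}

def courtesy_score(transcript: str) -> float:
    """Fraction of words that signal courtesy, in [0, 1]."""
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in COURTESY_WORDS)
    return hits / len(words)

polite = "Thanks for your patience. Could you please check again?"
blunt = "Check it again now."
print(courtesy_score(polite) > courtesy_score(blunt))  # True
```

In a production system this would be one of many features feeding a model, rather than a score used on its own.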
Addendum on 6/7/17:
Career site Workey raises $8M 2 replace headhunters w/ #AI https://t.co/Efi9nvNGGu
DC: A foreshadowing or a continuing trend? Either way…
…
Want a job? It may be time to have a chat with a bot — from sfchronicle.com by Nicholas Cheng
Excerpt:
“The future is AI-based recruitment,” Mya CEO Eyal Grayevsky said. Candidates who were being interviewed through a chat couldn’t tell that they were talking to a bot, he added — even though the company isn’t trying to pass its bot off as human.
A 2015 study by the National Bureau of Economic Research surveyed 300,000 people and found that those who were hired by a machine, using algorithms to match them to a job, stayed in their jobs 15 percent longer than those who were hired by human recruiters.
A report by the McKinsey Global Institute estimates that more than half of human resources jobs may be lost to automation, though it did not give a time period for that shift.
“Recruiting jobs will definitely go away,” said John Sullivan, who teaches management at San Francisco State University.
From DSC: There are now more than 12,000 skills on Amazon’s new platform — Alexa. I continue to wonder…what will this new platform mean for, and deliver to, societies throughout the globe?
What Is an Alexa Skill?
Alexa is Amazon’s voice service and the brain behind millions of devices including Amazon Echo. Alexa provides capabilities, or skills, that enable customers to create a more personalized experience. There are now more than 12,000 skills from companies like Starbucks, Uber, and Capital One as well as innovative designers and developers.
What Is the Alexa Skills Kit?
With the Alexa Skills Kit (ASK), designers, developers, and brands can build engaging skills and reach millions of customers. ASK is a collection of self-service APIs, tools, documentation, and code samples that makes it fast and easy for you to add skills to Alexa. With ASK, you can leverage Amazon’s knowledge and pioneering work in the field of voice design.
You can build and host most skills for free using Amazon Web Services (AWS).
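To make the skill concept concrete: a skill receives a JSON request from Alexa and answers with a JSON response whose `outputSpeech` Alexa reads aloud. The response shape below follows the Alexa Skills Kit format; the greeting logic itself is invented for illustration and is not any real skill:

```python
# Minimal sketch of an Alexa skill handler: JSON request in, JSON
# response out. The response shape follows the Alexa Skills Kit format;
# the content is invented for illustration.
import json

def handle_request(request: dict) -> dict:
    """Build a plain-text Alexa response for a launch or intent request."""
    req_type = request.get("request", {}).get("type", "LaunchRequest")
    if req_type == "LaunchRequest":
        text = "Welcome! Ask me for a learning tip."
    else:
        text = "Here is your tip: review a little every day."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }

reply = handle_request({"request": {"type": "LaunchRequest"}})
print(json.dumps(reply, indent=2))
```

In practice this handler would run on AWS Lambda (the free hosting path Amazon mentions) and branch on the specific intent name rather than just the request type.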
Echo Show brings you everything you love about Alexa, and now she can show you things. Watch video flash briefings and YouTube, see music lyrics, security cameras, photos, weather forecasts, to-do and shopping lists, and more. All hands-free—just ask.
Introducing a new way to be together. Make hands-free video calls to friends and family who have an Echo Show or the Alexa App, and make voice calls to anyone who has an Echo or Echo Dot.
See lyrics on-screen with Amazon Music. Just ask to play a song, artist or genre, and stream over Wi-Fi. Also, stream music on Pandora, Spotify, TuneIn, iHeartRadio, and more.
Powerful, room-filling speakers with Dolby processing for crisp vocals and extended bass response
Ask Alexa to show you the front door or monitor the baby’s room with compatible cameras from Ring and Arlo. Turn on lights, control thermostats and more with WeMo, Philips Hue, ecobee, and other compatible smart home devices.
With eight microphones, beam-forming technology, and noise cancellation, Echo Show hears you from any direction—even while music is playing
Always getting smarter and adding new features, plus thousands of skills like Uber, Jeopardy!, Allrecipes, CNN, and more
From DSC:
Now we’re seeing a major competition between the heavy-hitters to own one’s living room, kitchen, and more. Voice controlled artificial intelligence. But now, add the ability to show videos, text, graphics, and more. Play music. Control the lights and the thermostat. Communicate with others via hands-free video calls.
Hmmm….very interesting times indeed.
Developers and corporates released 4,000 new skills for the voice assistant in just the last quarter. (source)
…with the company adding about 100 skills per day. (source)
Microsoft Corp. is hoping to challenge Amazon.com Inc.’s Echo smart speaker for a spot on the kitchen counter with a device from Samsung Electronics Co. that can make phone calls. The Invoke, which will debut this fall, comes more than two years after the release of the Echo, which has sold more than 11 million units through late last year, according to estimates by Morgan Stanley. It also will compete with Alphabet Inc.’s Google Home, which was released last fall. The voice-controlled Invoke, made by Samsung’s Harman Kardon unit, will use Microsoft’s Cortana digital assistant to take commands.
With Microsoft’s Build developer conference just two days away, the company has revealed one of the most anticipated announcements from the event: A new Cortana-powered speaker made by German audio giant Harman Kardon.
Now, it’s fair to see this speaker for what it is: an answer to the Google Home and Amazon Echo. Both assistant-powered speakers are already in homes across our great nation, listening to your noises, noting your habits, and in general invading your lives under the guise of smart home helpfulness. The new Microsoft speaker, dubbed “Invoke,” will presumably do the good stuff, like giving you updates on the weather and letting you turn on some soothing jazz for your dog with just a spoken command. Microsoft is also hoping that partnering with Harman Kardon means its speaker can avoid one of the bigger problems with these devices—their tendency to sound cheap and tinny.
As teased earlier, the Invoke speaker will offer 360-degree sound, Skype calling, and smart home control, all through voice commands. Design-wise, the Invoke strongly resembles the Amazon Echo it’s meant to compete with: both offer a similar cylindrical aluminum shape, a light ring, and a seven-microphone array. That said, Harman Kardon seems to be taking the “speaker” portion of its functionality more seriously than Amazon does, with the Invoke offering three woofers and three tweeters (compared to the Echo’s single woofer and single tweeter). Microsoft is also highlighting the Invoke’s ability to make and receive Skype calls to other Skype devices as well as cellphones and landlines, which is an interesting addition to a home assistant.
From DSC: Here we see yet another example of the increasing use of voice as a means of communicating with our computing-related devices. AI-based applications continue to develop.
SAN FRANCISCO (AP) — Google’s voice-activated assistant can now recognize who’s talking to it on Google’s Home speaker.
An update released Thursday enables Home’s built-in assistant to learn the different voices of up to six people, although they can’t all be talking to the internet-connected speaker at the same time.
Distinguishing voices will allow Home to be more personal in some of its responses, depending on who triggers the assistant with the phrase, “OK Google” or “Hey Google.”
For instance, once Home is trained to recognize a user named Joe, the assistant will automatically be able to tell him what traffic is like on his commute, list events on his daily calendar or even play his favorite songs. Then another user named Jane could get similar information from Home, but customized for her.
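The per-speaker personalization described above is, at its core, a dispatch-by-identity pattern: once the assistant knows who is talking, the same query routes to that person’s own data. A minimal sketch, with an invented profile store and invented queries (this is not Google’s implementation):

```python
# Sketch of dispatch-by-identity: the recognized speaker selects which
# profile answers the query. Profiles and replies are invented.
PROFILES = {
    "joe": {"commute": "20 minutes via I-90", "favorite_genre": "jazz"},
    "jane": {"commute": "35 minutes via transit", "favorite_genre": "rock"},
}

def answer(speaker: str, query: str) -> str:
    """Route a query to the recognized speaker's own profile data."""
    profile = PROFILES.get(speaker.lower())
    if profile is None:
        return "I don't recognize your voice yet."
    if "traffic" in query or "commute" in query:
        return f"Your commute today: {profile['commute']}."
    if "music" in query:
        return f"Playing some {profile['favorite_genre']} for you."
    return "Sorry, I can't help with that yet."

print(answer("Joe", "how is traffic"))    # Joe's commute, not Jane's
print(answer("Jane", "play some music"))  # Jane's genre, not Joe's
```

The hard part Google has solved is the first step, recognizing which of up to six voices is speaking; the routing itself is simple once identity is known.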