One of the major trends surrounding AI and education is AI-powered educational games. Because games can engage students while teaching them challenging concepts, vendors are incorporating AI features into games to enhance their interactivity.
Educational games that include adaptive learning features give students frequent and timely suggestions for a guided learning experience.
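The core loop behind such adaptive features is simple to sketch: estimate the learner's mastery from recent answers, then select the next item at a matching difficulty. Here is a minimal illustration; the blending weights, thresholds, and item data are all hypothetical, not any vendor's actual algorithm:

```python
# Toy adaptive-item selector: estimate mastery from recent answers,
# then choose the next question whose difficulty matches that estimate.
# Weights and item data are illustrative, not from any real product.

def estimate_mastery(recent_results, prior=0.5):
    """Blend a prior with the share of recent correct answers (1 = correct)."""
    if not recent_results:
        return prior
    return 0.5 * prior + 0.5 * (sum(recent_results) / len(recent_results))

def next_item(items, recent_results):
    """Pick the item whose difficulty (0..1) is closest to estimated mastery."""
    mastery = estimate_mastery(recent_results)
    return min(items, key=lambda item: abs(item["difficulty"] - mastery))

items = [
    {"id": "q1", "difficulty": 0.2},
    {"id": "q2", "difficulty": 0.5},
    {"id": "q3", "difficulty": 0.9},
]

# A learner on a streak of correct answers is routed to a harder item.
print(next_item(items, [1, 1, 1]))  # mastery 0.75 -> q3 (difficulty 0.9)
```

Real adaptive systems replace this crude average with richer learner models, but the feedback loop — observe performance, update an estimate, pick the next activity — has the same shape.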
From DSC:
I can’t say how many AI-based solutions we’ll see within higher education in the next 4 years…it could be a lot…it could be a little. But it will happen. At some point, it will happen.
The use of AI will likely play a key role in a future organization that I’m calling the Next Amazon.com of Higher Education. AI will likely serve as a foundational piece of what futurist Thomas Frey claims will be the largest company on the internet: “an education-based company that we haven’t heard of yet.” (source)
Web-based learner profiles and blockchain-based technologies should also be on our radars, and are relevant in this discussion.
Also see:
Key questions answered in this report
What will the market size be in 2021 and what will the growth rate be?
What are the key market trends?
What is driving this market?
What are the challenges to market growth?
Who are the key vendors in this market space?
What are the market opportunities and threats faced by the key vendors?
What are the strengths and weaknesses of the key vendors?
EDTECH: What AI applications might we see in higher education?
QUALLS: You are going to see a massive change in education from K–12 to the university. The thought of having large universities and large faculties teaching students is probably going to go away — not in the short-term, but in the long-term. You will have a student interact with an AI system that will understand him or her and provide an educational path for that particular student. Once you have a personalized education system, education will become much faster and more enriching. You may have a student who can do calculus in the sixth grade because AI realized he had a mathematical sense. That personalized education is going to change everything.
From DSC: At the Next Generation Learning Spaces Conference, held recently in San Diego, CA, I moderated a panel discussion re: AR, VR, and MR. I started off our panel discussion with some introductory ideas and remarks — meant to make sure that numerous ideas were on the radars at attendees’ organizations. Then Vinay and Carrie did a super job of addressing several topics and questions (Mary was unable to make it that day, as she got stuck in the UK due to transportation-related issues).
That said, I didn’t get a chance to finish the second part of the presentation, which I’ve listed below in both 4:3 and 16:9 formats. So I made a recording of these ideas, and I’m relaying it to you in the hopes that it can help you and your organization.
Online courses at Harvard University are adapting on the fly to students’ needs.
Officials at the Cambridge, Massachusetts, institution announced a new adaptive learning technology that was recently rolled out in a HarvardX online course. The feature offers tailored course material that directly correlates with student performance while the student is taking the class, as well as tailored assessment algorithms.
HarvardX is an independent university initiative that was launched in parallel with edX, the online learning platform that was created by Harvard and Massachusetts Institute of Technology. Both HarvardX and edX run massive open online courses. The new feature has never before been used in a HarvardX course, and has only been deployed in a small number of edX courses, according to officials.
From DSC: Given the growth of AI, this is certainly radar worthy — something that’s definitely worth pulse-checking to see where opportunities exist to leverage these types of technologies. What we now know of as adaptive learning will likely take an enormous step forward in the next decade.
IBM’s assertion rings in my mind:
I’m cautiously hopeful that these types of technologies can extend beyond K-12 and help us deal with the current need to be lifelong learners, and the need to constantly reinvent ourselves — while providing us with more choice, more control over our learning. I’m hopeful that learners will be able to pursue their passions, and enlist the help of other learners and/or the (human) subject matter experts as needed.
I don’t see these types of technologies replacing any teachers, professors, or trainers. That said, these types of technologies should be able to help do some of the heavy teaching and learning lifting in order to help someone learn about a new topic.
Again, this is one piece of the Learning from the Living [Class] Room that we see developing.
International Business Machines Corp. is ramping up its digital-skills training program to accommodate as many as 25 million Africans in the next five years, looking toward building a future workforce on the continent. The U.S. tech giant plans to make an initial investment of 945 million rand ($70 million) to roll out the training initiative in South Africa…
Responding to concerns that artificial intelligence (A.I.) in the workplace will lead to companies laying off employees and shrinking their work forces, IBM (NYSE: IBM) CEO Ginni Rometty said in an interview with CNBC last month that A.I. wouldn’t replace humans, but rather open the door to “new collar” employment opportunities.
IBM describes new collar jobs as “careers that do not always require a four-year college degree but rather sought-after skills in cybersecurity, data science, artificial intelligence, cloud, and much more.”
In keeping with IBM’s promise to devote time and resources to preparing tomorrow’s new collar workers for those careers, it has announced a new “Digital-Nation Africa” initiative. IBM has committed $70 million to its cloud-based learning platform that will provide free skills development to as many as 25 million young people in Africa over the next five years.
The platform will include online learning opportunities for everything from basic IT skills to advanced training in social engagement, digital privacy, and cyber protection. IBM added that its A.I. computing wonder Watson will be used to analyze data from the online platform, adapt it, and help direct students to appropriate courses, as well as refine the curriculum to better suit specific needs.
From DSC: That last part, about Watson being used to personalize learning and direct students to appropriate courses, is one of the elements that I see in the Learning from the Living [Class]Room vision that I’ve been pulse-checking for the last several years. AI/cognitive computing will most assuredly be a part of our learning ecosystems in the future. Amazon is currently building their own platform that adds 100 skills each day — and has 1,000 people working on creating skills for Alexa. This type of thing isn’t going away any time soon. Rather, I’d say that we haven’t seen anything yet!
And Amazon has doubled down to develop Alexa’s “skills,” which are discrete voice-based applications that allow the system to carry out specific tasks (like ordering pizza for example). At launch, Alexa had just 20 skills, which has reportedly jumped to 5,200 today with the company adding about 100 skills per day.
In fact, Bezos has said, “We’ve been working behind the scenes for the last four years, we have more than 1,000 people working on Alexa and the Echo ecosystem … It’s just the tip of the iceberg.” Just last week, it launched a new website to help brands and developers create more skills for Alexa.
“We are trying to make education more personalised and cognitive through this partnership by creating a technology-driven personalised learning and tutoring,” Lula Mohanty, Vice President, Services at IBM, told ET. IBM will also use its cognitive technology platform, IBM Watson, as part of the partnership.
“We will use the IBM Watson data cloud as part of the deal, and access Watson education insight services, Watson library, student information insights — these are big data sets that have been created through collaboration and inputs with many universities. On top of this, we apply big data analytics,” Mohanty added.
Most People in Education are Just Looking for Faster Horses, But the Automobile is Coming — from etale.org by Bernard Bull
Excerpt:
Most people in education are looking for faster horses. It is too challenging, troubling, or beyond people’s sense of what is possible to really imagine a completely different way in which education happens in the world. That doesn’t mean, however, that the educational equivalent of the automobile is not on its way. I am confident that it is very much on its way. It might even arrive earlier than even the futurists expect. Consider the following prediction.
Aside from AWS, Amazon Alexa-enabled devices were the top-selling products across all categories on Amazon.com throughout the holiday season and the company is reporting that Echo family sales are up over 9x compared to last season. Amazon aims to brand Alexa as a platform, something that has helped the product to gain capabilities faster than its competition. Developers and corporates released 4,000 new skills for the voice assistant in just the last quarter.
Alexa got 4,000 new skills in just the last quarter!
From DSC:
What are the teaching & learning ramifications of this?
By the way, I’m not saying that professors, teachers, & trainers should run for the hills (i.e., that they’ll be replaced by AI-based tools). Rather, I would like to suggest that we not only put this type of thing on our radars, but also begin to actively experiment with such technologies to see if they might be able to help us do some of the heavy lifting for students learning about new topics.
The authoritative CB Insights lists imminent Future Tech Trends: customized babies; personalized foods; robotic companions; 3D printed housing; solar roads; ephemeral retail; enhanced workers; lab-engineered luxury; botroots movements; microbe-made chemicals; neuro-prosthetics; instant expertise; AI ghosts. You can download the whole outstanding report here (125 pgs).
From DSC: Though I’m generally pro-technology, there are several items in here which support the need for all members of society to be informed and have some input into if and how these technologies should be used. Prime example: Customized babies. The report discusses the genetic modification of babies: “In the future, we will choose the traits for our babies.” Veeeeery slippery ground here.
This is an invitation to collaborate. In particular, it is an invitation to collaborate in framing how we look at and develop machine intelligence. Even more specifically, it is an invitation to collaborate in the construction of a Periodic Table of AI.
Let’s be honest. Thinking about Artificial Intelligence has proven to be difficult for us. We argue constantly about what is and is not AI. We certainly cannot agree on how to test for it. We have difficulty deciding what technologies should be included within it. And we struggle with how to evaluate it.
Even so, we are looking at a future in which intelligent technologies are becoming commonplace.
…
With that in mind, we propose an approach to viewing machine intelligence from the perspective of its functional components. Rather than argue about the technologies behind them, the focus should be on the functional elements that make up intelligence. By stepping away from how these elements are implemented, we can talk about what they are and their roles within larger systems.
Also see this article, which contains the graphic below:
From DSC: These graphics are helpful to me, as they increase my understanding of some of the complexities involved within the realm of artificial intelligence.
11 Ed Tech Trends to Watch in 2017 — from campustechnology.com by Rhea Kelly — with Susan Aldridge, Gerard Au, myself, Marci Powell, & Phil Ventimiglia
Five higher ed leaders analyze the hottest trends in education technology this year.
From DSC: I greatly enjoyed this project and appreciated being able to work with Rhea, Susan, Gerard, Marci, and Phil.
CES 2017: Intel’s VR visions — from jwtintelligence.com by Shepherd Laughlin
The company showed off advances in volumetric capture, VR live streaming, and “merged reality.”
Excerpt (emphasis DSC):
Live-streaming 360-degree video was another area of focus for Intel. Guests were able to watch a live basketball game being broadcast from Indianapolis, Indiana, choosing from multiple points of view as the action moved up and down the court. Intel “will be among the first technology providers to enable the live sports experience on multiple VR devices,” the company stated.
After taking a 3D scan of the room, Project Alloy can substitute virtual objects where physical objects stand.
From DSC: If viewers of a live basketball game can choose from multiple points of view, why can’t remote learners do this as well with a face-to-face classroom that’s taking place at a university or college? Learning from the Living [Class] Room.
Data visualization, guided work instructions, remote expert — for use in a variety of industries: medical, aviation and aerospace, architecture and AEC, lean manufacturing, engineering, and construction.
The company said that it is teaming up with the likes of Dell, HP, Lenovo and Acer, which will release headsets based on the HoloLens technology. “These new head-mounted displays will be the first consumer offerings utilizing the Mixed Reality capabilities of Windows 10 Creators Update,” a Microsoft spokesperson said. Microsoft’s partner companies for taking the HoloLens technology forward include Dell, HP, Lenovo, Acer, and 3 Glasses. Headsets by these manufacturers will work the same way as the original HoloLens but carry the design and branding of their respective companies. While the HoloLens developer edition costs a whopping $2999 (approximately Rs 2,00,000), the third-party headsets will be priced starting $299 (approximately Rs 20,000).
Verto Studio 3D App Makes 3D Modeling on HoloLens Easy — from winbuzzer.com by Luke Jones
The upcoming Verto Studio 3D application allows users to create 3D models and interact with them when wearing HoloLens. It is the first software of its kind for mixed reality.
Excerpt: How is The Immersive Experience Delivered?
Tethered Headset VR – The user can participate in a VR experience by using a computer with a tethered VR headset (also known as a Head Mounted Display – HMD) like Facebook’s Oculus Rift, PlayStation VR, or the HTC Vive. The user has the ability to move freely and interact in the VR environment while using a handheld controller to emulate VR hands. But, the user has a limited area in which to move about because they are tethered to a computer.
Non-Tethered Headset VR/AR – These devices are headsets and computers built into one system, so users are free of any cables limiting their movement. These devices use AR to deliver a 360° immersive experience. Much like with Oculus Rift and Vive, the user would be able to move around in the AR environment as well as interact and manipulate objects. A great example of this headset is Microsoft’s HoloLens, which delivers an AR experience to the user through just a headset.
Mobile Device Inserted into a Headgear – To experience VR, the user inserts their mobile device into a Google Cardboard, Samsung Gear 360°, or any other type of mobile device headgear, along with headphones if they choose. This form of VR doesn’t require the user to be tethered to a computer and most VR experiences can be 360° photos, videos, and interactive scenarios.
Mobile VR – The user can access VR without any type of headgear simply by using a mobile device and headphones (optional). They can still have many of the same experiences that they would through Google Cardboard or any other type of mobile device headgear. Although they don’t get the full immersion that they would with headgear, they would still be able to experience VR. Currently, this version of the VR experience seems to be the most popular because it only requires a mobile device. Apps like Pokémon Go and Snapchat’s animated selfie lens only require a mobile device and have a huge number of users.
Desktop VR – Using just a desktop computer, the user can access 360° photos and videos, as well as other VR and AR experiences, by using the trackpad or computer mouse to move their field of view and become immersed in the VR scenario.
New VR – Non-mobile and non-headset platforms like Leap Motion use depth sensors to create a VR image of one’s hands on a desktop computer; they emulate hand gestures in real time. This technology could be used for anything from teaching assembly in a manufacturing plant to learning a step-by-step process to medical training.
Goggles that are worn, while they are “Oh Myyy” awesome, will not be the final destination of VR/AR. We will want to engage and respond, without wearing a large device over our eyes. Pokémon Go was a good early predictor of how non-goggled experiences will soar.
Education will go virtual
Similar to VR for brand engagement, we’ve seen major potential for delivering hands-on training and distance education in a virtual environment. If VR can take a class on a tour of Mars, the current trickle of educational VR could turn into a flood in 2017.
Published on Dec 26, 2016
Top 10 Virtual Reality Predictions For 2017 In vTime. It’s been an amazing year for VR and AR: new VR and AR headsets, groundbreaking content, and lots more. 2017 promises to be amazing as well. Here’s our top 10 virtual reality predictions for the coming year. Filmed in vTime with vCast. Sorry about the audio quality. We used mics on Rift and Vive which are very good on other platforms. We’ve reported this to vTime.
Addendums
5 top Virtual Reality and Augmented Reality technology trends for 2017 — from marxentlabs.com by Joe Bardi
Excerpt:
So what’s in store for Virtual Reality and Augmented Reality in 2017? We asked Marxent’s talented team of computer vision experts, 3D artists and engineers to help us suss out what the year ahead will hold. Here are their predictions for the top Virtual Reality and Augmented Reality technology trends for 2017.
5 Online Education Trends to Watch in 2017 — from usnews.com by Jordan Friedman Experts predict more online programs will offer alternative credentials and degrees in specialized fields.
Excerpts:
Greater emphasis on nontraditional credentials
Increased use of big data to measure student performance
Greater incorporation of artificial intelligence into classes
Growth of nonprofit online programs
Online degrees in surprising and specialized disciplines
I became a Strava user in 2013, around the same time I became an online course designer. Quickly I found that even as I logged runs on Strava daily, I struggled to find the time to log into platforms like Coursera, Udemy or Udacity to finish courses produced by my fellow instructional designers. What was happening? Why was the fitness app so “sticky” as opposed to the online learning platforms?
As a thought experiment, I tried to recast my Strava experience in pedagogical terms. I realized that I was recording hours of deliberate practice (my early morning runs), formative assessments (the occasional speed workout on the track) and even a few summative assessments (races) on the app. Strava was motivating my consistent use by overlaying a digital grid on my existing offline activities. It let me reconnect with college teammates who could keep me motivated. It enabled me to analyze the results of my efforts and compare them to others. I didn’t have to be trapped behind a computer to benefit from this form of digital engagement—yet it was giving me personalized feedback and results. How could we apply the same practices to learning?
I’ve come to believe that one of the biggest misunderstandings about online learning is that it has to be limited to things that can be done in front of a computer screen. Instead, we need to reimagine online courses as something that can enable the interplay between offline activities and digital augmentation.
A few companies are heading that way. Edthena enables teachers to record videos of themselves teaching and then upload these to the platform to get feedback from mentors.
DIY’s JAM online courses let kids complete hands-on activities like drawing or building with LEGOs and then has them upload pictures of their work to earn badges and share their projects.
My team at +Acumen has built online courses that let teams complete projects together offline and then upload their prototypes to the NovoEd platform to receive feedback from peers. University campuses are integrating Kaltura into their LMS platforms to enable students to capture and upload videos.
We need to focus less on building multiple choice quizzes or slick lecture videos and more on finding ways to robustly capture evidence of offline learning that can be validated and critiqued at scale by peers and experts online.
Don’t discount the game-changing power of the morphing “TV” when coupled with artificial intelligence (AI), natural language processing (NLP), and blockchain-based technologies!
When I saw the article below, I couldn’t help but wonder what (we currently know of as) “TVs” will morph into and what functionalities they will be able to provide to us in the not-too-distant future…?
For example, the article mentions that Seiki, Westinghouse, and Element will be offering TVs that can not only access Alexa — a personal assistant from Amazon which uses artificial intelligence — but will also be able to provide access to over 7,000 apps and games via the Amazon Fire TV Store.
Some of the questions that come to my mind:
Why can’t there be more educationally-related games and apps available on this type of platform?
Why can’t the results of the assessments taken on these apps get fed into cloud-based learner profiles that capture one’s lifelong learning? (#blockchain)
When will potential employers start asking for access to such web-based learner profiles?
Will tvOS and similar operating systems expand to provide blockchain-based technologies as well as the types of functionality we get from our current set of CMSs/LMSs?
Will this type of setup become a major outlet for competency-based education as well as for corporate training-related programs?
Will augmented reality (AR), virtual reality (VR), and mixed reality (MR) capabilities come with our near future “TVs”?
Will virtual tutoring be one of the available apps/channels?
Will the microphone and the wide angle, HD camera on the “TV” be able to be disconnected from the Internet for security reasons? (i.e., to be sure no hacker is eavesdropping in on their private lives)
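The "blockchain-based learner profile" question above boils down to an append-only, tamper-evident record: each new credential entry carries a hash of the previous entry, so any later edit to the history breaks the chain. A toy sketch of that property follows; the field names are hypothetical, and a real system would add digital signatures and distributed consensus on top:

```python
import hashlib
import json

# Toy tamper-evident learner profile: each entry stores a hash of the
# previous entry, so altering earlier history invalidates later links.
# Field names are hypothetical; real systems add signatures and consensus.

def entry_hash(entry):
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def add_credential(profile, credential):
    prev = entry_hash(profile[-1]) if profile else "0" * 64
    profile.append({"credential": credential, "prev_hash": prev})
    return profile

def verify(profile):
    """Recompute each link; any edit to an earlier entry breaks the chain."""
    for i in range(1, len(profile)):
        if profile[i]["prev_hash"] != entry_hash(profile[i - 1]):
            return False
    return True

profile = []
add_credential(profile, {"course": "Calculus I", "score": 0.92})
add_credential(profile, {"course": "Data Science Badge", "score": 0.88})
print(verify(profile))                      # True
profile[0]["credential"]["score"] = 1.0     # tamper with history...
print(verify(profile))                      # False: tampering detected
```

That tamper-evidence is what would make such a profile credible to a potential employer without the employer having to trust whoever hosts the data.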
The TVs will not only have access to Alexa via a microphone-equipped remote but, more importantly, will have access to the over 7,000 apps and games available on the Amazon Fire TV Store – a huge boon considering that most of these Smart TVs usually include, at max, a few dozen apps.
“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.
EdSurge profiles the growth of massive open online courses in 2016, which attracted more than 58 million students across over 700 colleges and universities last year.
The top three MOOC providers — Coursera, Udacity and EdX — collectively grossed more than $100 million last year, as much of the content provided on these platforms shifted from free to paywall guarded materials.
Many MOOCs have moved to offering credentialing programs or nanodegree offerings to increase their value in industrial marketplaces.
From DSC: Recently, my neighbor graciously gave us his old Honda snowblower, as he was getting a new one. He wondered if we had a use for it. As I’m definitely not getting any younger and I’m not Howard Hughes, I said, “Sure thing! That would be great — it would save my back big time! Thank you!” (Though the image below is not mine, it might as well be…as both are quite old now.)
Anyway…when I recently ran out of gas, I would have loved to be able to take out my iPhone, hold it up to this particular Honda snowblower, and ask an app whether it takes a mixture of gas and oil or has a separate container for the oil. (It wasn’t immediately clear where to put the oil in, so I’m figuring it’s a mix.)
But what I would have liked to have happen was:
I launched an app on my iPhone that featured machine learning-based capabilities
The app would have scanned the snowblower and identified which make/model it was and proceeded to tell me whether it needed a gas/oil mix (or not)
If there was a separate place to pour in the oil, the app would have asked me if I wanted to learn how to put oil in the snowblower. Upon me saying yes, it would then have proceeded to display an augmented reality-based training video — showing me where the oil was to be put in and what type of oil to use (links to local providers would also come in handy…offering nice revenue streams for advertisers and suppliers alike).
So several technologies would have to be involved here…but those techs are already here. We just need to pull them together in order to provide this type of useful functionality!
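The flow above decomposes into three existing capabilities: an image classifier to identify the make/model, a lookup from model to fuel requirements, and an AR walkthrough for the how-to. The glue logic is trivial; here is a sketch with the classifier stubbed out, and everything (model names, fuel data, guide IDs) invented purely for illustration:

```python
# Sketch of the snowblower-app flow: classify the machine from a photo,
# look up its fuel requirements, then offer an AR walkthrough if one exists.
# The classifier is a stub and all model data here is hypothetical.

FUEL_SPECS = {
    "honda_hs50": {"fuel": "gas/oil mix", "ar_guide": None},
    "honda_hs720": {"fuel": "gas only", "ar_guide": "hs720_oil_fill"},
}

def classify_image(photo_bytes):
    """Stand-in for an on-device ML model returning a model identifier."""
    return "honda_hs720"  # a real app would run an image classifier here

def advise(photo_bytes):
    model = classify_image(photo_bytes)
    spec = FUEL_SPECS.get(model)
    if spec is None:
        return "Model not recognized; try another angle."
    msg = f"{model}: uses {spec['fuel']}."
    if spec["ar_guide"]:
        msg += " Separate oil fill; AR walkthrough available."
    return msg

print(advise(b"...photo..."))
```

The classifier, the lookup, and the AR playback each exist as off-the-shelf technology today; the value is in the orchestration.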
Amazon.com Inc. unveiled technology that will let shoppers grab groceries without having to scan and pay for them — in one stroke eliminating the checkout line.
The company is testing the new system at what it’s calling an Amazon Go store in Seattle, which will open to the public early next year. Customers will be able to scan their phones at the entrance using a new Amazon Go mobile app. Then the technology will track what items they pick up or even return to the shelves and add them to a virtual shopping cart in real time, according to a video Amazon posted on YouTube. Once the customers exit the store, they’ll be charged on their Amazon account automatically.
Online retail king Amazon.com (AMZN) is taking dead aim at the physical-store world Monday, introducing Amazon Go, a retail convenience store format it is developing that will use computer vision and deep-learning algorithms to let shoppers just pick up what they want and exit the store without any checkout procedure.
Shoppers will merely need to tap the Amazon Go app on their smartphones, and their virtual shopping carts will automatically tabulate what they owe and deduct that amount from their Amazon accounts, sending them a receipt. It’s what the company has deemed “just walk out technology,” which it said is based on the same technology used in self-driving cars. It’s certain to up the ante in the company’s competition with Wal-Mart (WMT), Target (TGT) and the other retail leaders.
Alphabet Inc.’s artificial intelligence division Google DeepMind is making the maze-like game platform it uses for many of its experiments available to other researchers and the general public.
DeepMind is putting the entire source code for its training environment — which it previously called Labyrinth and has now renamed as DeepMind Lab — on the open-source depository GitHub, the company said Monday. Anyone will be able to download the code and customize it to help train their own artificial intelligence systems. They will also be able to create new game levels for DeepMind Lab and upload these to GitHub.
Beacon technology, which was practically left for dead after failing to deliver on its promise to revolutionize the retail industry, is making a comeback.
Beacons are puck-size gadgets that can send helpful tips, coupons and other information to people’s smartphones through Bluetooth. They’re now being used in everything from bank branches and sports arenas to resorts, airports and fast-food restaurants. In the latest sign of the resurgence, Mobile Majority, an advertising startup, said on Monday that it was buying Gimbal Inc., a beacon maker it bills as the largest independent source of location data other than Google and Apple Inc.
…
Several recent developments have sparked the latest boom. Companies like Google parent Alphabet Inc. are making it possible for people to use the feature without downloading any apps, which had been a major barrier to adoption, said Patrick Connolly, an analyst at ABI. Introduced this year, Google Nearby Notifications lets developers tie an app or a website to a beacon to send messages to consumers even when they have no app installed. … But in June, Cupertino, California-based Mist Systems began shipping a software-based product that simplified the process. Instead of placing 10 beacons on walls and ceilings, for example, management using Mist can install one device every 2,000 feet (610 meters), then designate various points on a digital floor plan as virtual beacons, which can be moved with a click of a mouse.
Ask the Google search app “What is the fastest bird on Earth?,” and it will tell you.
“Peregrine falcon,” the phone says. “According to YouTube, the peregrine falcon has a maximum recorded airspeed of 389 kilometers per hour.”
That’s the right answer, but it doesn’t come from some master database inside Google. When you ask the question, Google’s search engine pinpoints a YouTube video describing the five fastest birds on the planet and then extracts just the information you’re looking for. It doesn’t mention those other four birds. And it responds in similar fashion if you ask, say, “How many days are there in Hanukkah?” or “How long is Totem?” The search engine knows that Totem is a Cirque du Soleil show, and that it lasts two-and-a-half hours, including a thirty-minute intermission.
Google answers these questions with help from deep neural networks, a form of artificial intelligence rapidly remaking not just Google’s search engine but the entire company and, well, the other giants of the internet, from Facebook to Microsoft. Deep neural nets are pattern recognition systems that can learn to perform specific tasks by analyzing vast amounts of data. In this case, they’ve learned to take a long sentence or paragraph from a relevant page on the web and extract the upshot—the information you’re looking for.
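What the article describes is extractive question answering: retrieve a relevant passage, score candidate spans against the question, and return the best one. Google uses neural networks for the scoring; the shape of the pipeline can be shown with a trivial word-overlap scorer standing in for the model (the passage and scoring here are illustrative only):

```python
# Shape of an extractive-QA pipeline: split a passage into candidate
# sentences, score each against the question, return the best one.
# Word overlap stands in for the neural scoring a real system would use.

def score(question, sentence):
    q = set(question.lower().split())
    s = set(sentence.lower().split())
    return len(q & s)  # count of shared words

def extract_answer(question, passage):
    sentences = [s.strip() for s in passage.split(".") if s.strip()]
    return max(sentences, key=lambda s: score(question, s))

passage = (
    "The ostrich is the fastest bird on land. "
    "The peregrine falcon is the fastest bird in the air, "
    "with a recorded airspeed of 389 kilometers per hour. "
    "Hummingbirds beat their wings up to 80 times per second."
)
print(extract_answer("What is the fastest bird in the air", passage))
```

The hard part, of course, is the scoring: neural models learn to match a question to a span even when they share few or no words, which is exactly what this toy overlap measure cannot do.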
Facebook is powered by machine learning and AI. From advertising relevance, news feed and search ranking to computer vision, face recognition, and speech recognition, they run ML models at massive scale, computing trillions of predictions every day.
At the 2016 Deep Learning Summit in Boston, Andrew Tulloch, Research Engineer at Facebook, talked about some of the tools and tricks Facebook use for scaling both the training and deployment of some of their deep learning models at Facebook. He also covered some useful libraries that they’d open-sourced for production-oriented deep learning applications. Tulloch’s session can be watched in full below.
The Artificial Intelligence Gold Rush— from foresightr.com by Mark Vickers Big companies, venture capital firms and governments are all banking on AI
Excerpt:
Let’s start with some of the brand-name organizations laying down big bucks on artificial intelligence.
Amazon: Sells the successful Echo home speaker, which comes with the personal assistant Alexa.
Alphabet (Google): Uses deep learning technology to power Internet searches and developed AlphaGo, an AI that beat the world champion in the game of Go.
Apple: Developed the popular virtual assistant Siri and is working on other phone-related AI applications, such as facial recognition.
Baidu: Wants to use AI to improve search, recognize images of objects and respond to natural language queries.
Boeing: Works with Carnegie Mellon University to develop machine learning capable of helping it design and build planes more efficiently.
Facebook: Wants to create the “best AI lab in the world.” Has its personal assistant, M, and focuses heavily on facial recognition.
IBM: Created the Jeopardy-winning Watson AI and is leveraging its data analysis and natural language capabilities in the healthcare industry.
Intel: Has made acquisitions to help it build specialized chips and software to handle deep learning.
Microsoft: Works on chatbot technology and acquired SwiftKey, which predicts what users will type next.
Nokia: Has introduced various machine learning capabilities to its portfolio of customer-experience software.
Nvidia: Builds computer chips customized for deep learning.
Salesforce: Took first place at the Stanford Question Answering Dataset, a test of machine learning and comprehension, and has developed the Einstein model that learns from data.
Shell: Launched a virtual assistant to answer customer questions.
Tesla Motors: Continues to work on self-driving automobile technologies.
Twitter: Created an AI-development team called Cortex and acquired several AI startups.
IBM’s seemingly ubiquitous Watson is now making its way into education through AI-powered software that ‘reads’ the needs of individual students in order to engage them through tailored learning approaches.
This is not to be taken lightly, as it opens the door to a new breed of technologies that will spearhead the education or re-education of the workforce of the future.
As outlined in the 2030 report, even as robots and AI displace a big chunk of the workforce, they will also play a major role in creating job opportunities as never before. In such a competitive landscape, workers of all kinds, white collar and blue collar alike, should come ready with new, versatile, and contemporary skills.
The point is, the very AI that leaves someone jobless will also help them adapt to a new job’s requirements. It will also prepare new generations through optimal methodologies, restoring meaning to an aging, counterproductive schooling system that leaves students’ skills disengaged from the needs of industry and that still segregates students into ‘good’ and ‘bad’. Might it be that ‘bad’ students become just that due to the system’s inability to stimulate their interest?
From DSC: When I saw the article below, I couldn’t help but wonder…what are the teaching & learning-related ramifications when new “skills” are constantly being added to devices like Amazon’s Alexa?
What does it mean for:
Students / learners
Faculty members
Teachers
Trainers
Instructional Designers
Interaction Designers
User Experience Designers
Curriculum Developers
…and others?
Will the capabilities found in Alexa simply come bundled as part of the connected/smart TVs of the future? Hmm….
Amazon’s Alexa has gained many skills over the past year, such as being able to read tweets or deliver election results and fantasy football scores. Starting on Wednesday, you’ll be able to ask Alexa about Mars.
The new skill for the voice-controlled speaker comes courtesy of NASA’s Jet Propulsion Laboratory. It’s the first Alexa app from the space agency.
Tom Soderstrom, the chief technology officer at NASA’s Jet Propulsion Laboratory, was on hand at the AWS re:Invent conference in Las Vegas tonight to make the announcement.
Amazon today announced three new artificial intelligence-related toolkits for developers building apps on Amazon Web Services.
At the company’s AWS re:invent conference in Las Vegas, Amazon showed how developers can use three new services — Amazon Lex, Amazon Polly, Amazon Rekognition — to build artificial intelligence features into apps for platforms like Slack, Facebook Messenger, ZenDesk, and others.
The idea is to let developers utilize the machine learning algorithms and technology that Amazon has already created for its own processes and services, like Alexa. Instead of developing their own AI software, AWS customers can simply use an API call or the AWS Management Console to incorporate AI features into their own apps.
AWS Announces Three New Amazon AI Services
Amazon Lex, the technology that powers Amazon Alexa, enables any developer to build rich, conversational user experiences for web, mobile, and connected device apps; preview starts today
Amazon Polly transforms text into lifelike speech, enabling apps to talk with 47 lifelike voices in 24 languages
Amazon Rekognition makes it easy to add image analysis to applications, using powerful deep learning-based image and face recognition
Capital One, Motorola Solutions, SmugMug, American Heart Association, NASA, HubSpot, Redfin, Ohio Health, DuoLingo, Royal National Institute of Blind People, LingApps, GoAnimate, and Coursera are among the many customers using these Amazon AI Services
Excerpt:
SEATTLE–(BUSINESS WIRE)–Nov. 30, 2016– Today at AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), announced three Artificial Intelligence (AI) services that make it easy for any developer to build apps that can understand natural language, turn text into lifelike speech, have conversations using voice or text, analyze images, and recognize faces, objects, and scenes. Amazon Lex, Amazon Polly, and Amazon Rekognition are based on the same proven, highly scalable Amazon technology built by the thousands of deep learning and machine learning experts across the company. Amazon AI services all provide high-quality, high-accuracy AI capabilities that are scalable and cost-effective. Amazon AI services are fully managed services so there are no deep learning algorithms to build, no machine learning models to train, and no up-front commitments or infrastructure investments required. This frees developers to focus on defining and building an entirely new generation of apps that can see, hear, speak, understand, and interact with the world around them.
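As a concrete illustration of the “simply use an API call” point above, here is a minimal sketch (not from the press release) of assembling the parameters for an Amazon Polly text-to-speech request. The parameter names (`Text`, `OutputFormat`, `VoiceId`) follow the Polly API as exposed through AWS SDKs such as boto3, but treat this as an assumption to verify against the current AWS documentation.

```python
# Sketch: build the keyword arguments for an Amazon Polly
# synthesize_speech call. Parameter names follow the Polly API;
# defaults here ("Joanna", "mp3") are illustrative choices.
def polly_request(text, voice_id="Joanna", output_format="mp3"):
    """Assemble keyword arguments for polly.synthesize_speech()."""
    return {"Text": text, "OutputFormat": output_format, "VoiceId": voice_id}

# With boto3 installed and AWS credentials configured, the actual call
# would look roughly like:
#   import boto3
#   polly = boto3.client("polly")
#   response = polly.synthesize_speech(**polly_request("Hello, world"))
#   audio_bytes = response["AudioStream"].read()
```

No deep learning model is trained or hosted by the developer here, which is exactly the “fully managed” pitch made in the announcement.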
#Blockchain will likely be used by colleges, universities, bootcamps, MOOCs, and others to feed web-based learner profiles, which will then be queried by people and/or organizations who are looking for freelancers and/or employees to fill their project and/or job-related needs.
As of the end of 2016, Microsoft — with its purchase of LinkedIn — is strongly positioned as a major player in this new landscape. But it might turn out to be an open-source solution/database.
Data mining, algorithm development, and Artificial Intelligence (AI) will likely have roles to play here as well. The systems will likely be able to tell us where we need to grow our skillsets, and provide us with modules/courses to take. This is where the Learning from the Living [Class] Room vision becomes highly relevant, on a global scale. We will be forced to continually improve our skillsets as long as we are in the workforce. Lifelong learning is now a must. AI-based recommendation engines should be helpful here — as they will be able to analyze the needs, trends, developments, etc. and present us with some possible choices (based on our learner profiles, interests, and passions).
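To make the recommendation-engine idea above concrete, here is a purely hypothetical toy sketch: compare the skills in a web-based learner profile against a role’s requirements, and suggest modules that cover the gaps. Every name and data value here is invented for illustration; no real profile system or module catalog is implied.

```python
# Hypothetical skill-gap matcher: given a learner profile's skills, a
# target role's required skills, and a catalog mapping skills to
# modules/courses, recommend modules for the skills the learner lacks.
def recommend_modules(profile_skills, role_skills, catalog):
    """Return a sorted list of modules covering the learner's skill gaps."""
    gaps = set(role_skills) - set(profile_skills)
    return sorted(m for skill in gaps for m in catalog.get(skill, []))

catalog = {
    "data analysis": ["Intro to Statistics"],
    "machine learning": ["ML Foundations"],
}
print(recommend_modules(
    profile_skills=["python", "data analysis"],
    role_skills=["python", "data analysis", "machine learning"],
    catalog=catalog,
))  # → ['ML Foundations']
```

A real system would of course rank candidates by relevance to the learner’s interests and history rather than simple set difference, but the core query — profile versus requirements — is the same shape.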