Brace yourselves: AI is set to explode in higher ed in the next 4 years — from ecampusnews.com by Laura Ascione
As AI grows in the education sector, its impact could be felt in student learning programs, websites and admissions programs

Excerpt:

A new report predicts that artificial intelligence (AI) in the U.S. education sector will grow at a rate of 47.5 percent through 2021.

The report, Artificial Intelligence Market in the U.S. Education Sector 2017-2021, is based on in-depth market analysis with inputs from industry experts.

One of the major trends surrounding AI and education is AI-powered educational games. Because games can engage students while teaching them challenging concepts, vendors are incorporating AI features into games to enhance their interactivity.

Educational games that include adaptive learning features give students frequent and timely suggestions for a guided learning experience.
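The adaptive-hint behavior the excerpt describes can be sketched in a few lines: the game watches a learner's recent answers and escalates the guidance it offers. This is only an illustration of the idea; the class name, hint text, and escalation policy are all invented, not taken from any real product.

```python
class AdaptiveHinter:
    """Offers progressively stronger hints after repeated mistakes."""

    def __init__(self, hints):
        self.hints = hints          # ordered weakest -> strongest
        self.misses = 0             # consecutive wrong answers

    def record(self, correct):
        """Update state; return a hint string, or None if no hint is due."""
        if correct:
            self.misses = 0         # a correct answer resets the guidance
            return None
        self.misses += 1
        # one step stronger per miss, capped at the strongest hint available
        index = min(self.misses, len(self.hints)) - 1
        return self.hints[index]

hinter = AdaptiveHinter(["Re-read the question.",
                         "Think about the order of operations.",
                         "Try computing the parentheses first."])
print(hinter.record(False))   # weakest hint
print(hinter.record(False))   # stronger hint
print(hinter.record(True))    # None: correct answer, guidance resets
```

The point of the sketch is the timing policy, not the hints themselves: "frequent and timely suggestions" amounts to deciding when to intervene and how strongly.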

 

From DSC:
I can’t say how many AI-based solutions we’ll see within higher education in the next 4 years…it could be a lot, it could be a little. But at some point, it will happen.

The use of AI will likely play a key role in a future organization that I’m calling the Next Amazon.com of Higher Education.  AI will likely serve as a foundational piece of what futurist Thomas Frey claims will be the largest company on the internet: “an education-based company that we haven’t heard of yet.” (source)

Web-based learner profiles and blockchain-based technologies should also be on our radars, and are relevant in this discussion.

 

Also see:

Key questions answered in this report

  • What will the market size be in 2021 and what will the growth rate be?
  • What are the key market trends?
  • What is driving this market?
  • What are the challenges to market growth?
  • Who are the key vendors in this market space?
  • What are the market opportunities and threats faced by the key vendors?
  • What are the strengths and weaknesses of the key vendors?

 
Q&A: Artificial Intelligence Expert Shares His Vision of the Future of Education — from edtechmagazine.com by Amy Burroughs
Artificial intelligence expert Joseph Qualls believes AI can solve some of the biggest challenges facing higher education — and the change is already underway.

Excerpts:

EDTECH: What AI applications might we see in higher education?

QUALLS: You are going to see a massive change in education from K–12 to the university. The thought of having large universities and large faculties teaching students is probably going to go away — not in the short-term, but in the long-term. You will have a student interact with an AI system that will understand him or her and provide an educational path for that particular student. Once you have a personalized education system, education will become much faster and more enriching. You may have a student who can do calculus in the sixth grade because AI realized he had a mathematical sense. That personalized education is going to change everything.

 
A smorgasbord of ideas to put on your organization’s radar! [Christian]

From DSC:
At the Next Generation Learning Spaces Conference, held recently in San Diego, CA, I moderated a panel discussion re: AR, VR, and MR. I opened the panel with some introductory ideas and remarks, meant to make sure that numerous ideas were on the radars of attendees’ organizations. Then Vinay and Carrie did a super job of addressing several topics and questions (Mary was unable to make it that day, as she got stuck in the UK due to transportation-related issues).

That said, I didn’t get a chance to finish the second part of the presentation which I’ve listed below in both 4:3 and 16:9 formats.  So I made a recording of these ideas, and I’m relaying it to you in the hopes that it can help you and your organization.

 


Presentations/recordings:


 

Audio/video recording (187 MB MP4 file)

 

 


Again, I hope you find this information helpful.

Thanks,
Daniel

 
HarvardX rolls out new adaptive learning feature in online course — from edscoop.com by Corinne Lestch
Students in MOOC adaptive learning experiment scored nearly 20 percent better than students using more traditional learning approaches.

Excerpt:

Online courses at Harvard University are adapting on the fly to students’ needs.

Officials at the Cambridge, Massachusetts, institution announced a new adaptive learning technology that was recently rolled out in a HarvardX online course. The feature offers tailored course material that directly correlates with student performance while the student is taking the class, as well as tailored assessment algorithms.

HarvardX is an independent university initiative that was launched in parallel with edX, the online learning platform that was created by Harvard and Massachusetts Institute of Technology. Both HarvardX and edX run massive open online courses. The new feature has never before been used in a HarvardX course, and has only been deployed in a small number of edX courses, according to officials.
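The on-the-fly adaptation described above can be illustrated with a toy model: keep a running estimate of the learner's performance and serve the assessment item whose difficulty best matches it. Real engines (HarvardX's included) are far more sophisticated, typically building on item response theory; every function, field name, and number below is an invented placeholder.

```python
def update_ability(ability, correct, step=0.2):
    """Nudge the running estimate up on a correct answer, down on a miss."""
    return min(1.0, max(0.0, ability + (step if correct else -step)))

def next_item(ability, item_bank):
    """Pick the item whose difficulty is closest to the current estimate."""
    return min(item_bank, key=lambda item: abs(item["difficulty"] - ability))

bank = [{"id": "q1", "difficulty": 0.2},
        {"id": "q2", "difficulty": 0.5},
        {"id": "q3", "difficulty": 0.8}]

ability = 0.5
item = next_item(ability, bank)              # a mid-difficulty item first
ability = update_ability(ability, correct=True)
print(item["id"], round(ability, 2))         # q2 0.7
```

The loop of "answer, re-estimate, re-select" is the whole mechanism: the course material a student sees next is a function of how they just performed.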

 

 

From DSC:
Given the growth of AI, this is certainly radar worthy — something that’s definitely worth pulse-checking to see where opportunities exist to leverage these types of technologies.  What we now know of as adaptive learning will likely take an enormous step forward in the next decade.

IBM’s assertion rings in my mind:

 

 

I’m cautiously hopeful that these types of technologies can extend beyond K-12 and help us deal with the current need to be lifelong learners, and the need to constantly reinvent ourselves — while providing us with more choice, more control over our learning. I’m hopeful that learners will be able to pursue their passions, and enlist the help of other learners and/or the (human) subject matter experts as needed.

I don’t see these types of technologies replacing any teachers, professors, or trainers. That said, these types of technologies should be able to help do some of the heavy teaching and learning lifting in order to help someone learn about a new topic.

Again, this is one piece of the Learning from the Living [Class] Room that we see developing.

IBM to Train 25 Million Africans for Free to Build Workforce — by Loni Prinsloo
* Tech giant seeking to bring, keep digital jobs in Africa
* Africa to have world’s largest workforce by 2040, IBM projects

Excerpt:

International Business Machines Corp. is ramping up its digital-skills training program to accommodate as many as 25 million Africans in the next five years, looking toward building a future workforce on the continent. The U.S. tech giant plans to make an initial investment of 945 million rand ($70 million) to roll out the training initiative in South Africa…

 

Also see:

IBM Unveils IT Learning Platform for African Youth — from investopedia.com by Tim Brugger

Excerpt (emphasis DSC):

Responding to concerns that artificial intelligence (A.I.) in the workplace will lead to companies laying off employees and shrinking their work forces, IBM (NYSE: IBM) CEO Ginni Rometty said in an interview with CNBC last month that A.I. wouldn’t replace humans, but rather open the door to “new collar” employment opportunities.

IBM describes new collar jobs as “careers that do not always require a four-year college degree but rather sought-after skills in cybersecurity, data science, artificial intelligence, cloud, and much more.”

In keeping with IBM’s promise to devote time and resources to preparing tomorrow’s new collar workers for those careers, it has announced a new “Digital-Nation Africa” initiative. IBM has committed $70 million to its cloud-based learning platform that will provide free skills development to as many as 25 million young people in Africa over the next five years.

The platform will include online learning opportunities for everything from basic IT skills to advanced training in social engagement, digital privacy, and cyber protection. IBM added that its A.I. computing wonder Watson will be used to analyze data from the online platform, adapt it, and help direct students to appropriate courses, as well as refine the curriculum to better suit specific needs.
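The "direct students to appropriate courses" step can be sketched, in a deliberately simplified form, as a content-based recommender: rank courses by the overlap between the skills a learner wants and the skills each course teaches. This illustrates the general idea only; it is not IBM's or Watson's actual method, and all course names and skill sets below are made up.

```python
def recommend(learner_skills, courses, top_n=2):
    """Return course names ranked by skill overlap with the learner's goals."""
    scored = [(len(learner_skills & skills), name)
              for name, skills in courses.items()]
    scored.sort(key=lambda pair: (-pair[0], pair[1]))   # best match first
    return [name for score, name in scored[:top_n] if score > 0]

courses = {
    "Intro to Cybersecurity": {"networking", "security", "linux"},
    "Data Science Basics":    {"python", "statistics", "data"},
    "Cloud Fundamentals":     {"networking", "cloud", "linux"},
}
print(recommend({"security", "linux", "networking"}, courses))
# ['Intro to Cybersecurity', 'Cloud Fundamentals']
```

A production system would also weigh prior performance and refine its rankings from outcome data, which is the "adapt it, and help direct students" role the article assigns to Watson.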

 

 

From DSC:
That last part, about Watson being used to personalize learning and direct students to appropriate courses, is one of the elements that I see in the Learning from the Living [Class]Room vision that I’ve been pulse-checking for the last several years. AI/cognitive computing will most assuredly be a part of our learning ecosystems in the future.  Amazon is currently building its own platform that adds 100 skills each day — and has 1,000 people working on creating skills for Alexa.  This type of thing isn’t going away any time soon. Rather, I’d say that we haven’t seen anything yet!

 

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

 

And Amazon has doubled down to develop Alexa’s “skills,” which are discrete voice-based applications that allow the system to carry out specific tasks (like ordering pizza for example). At launch, Alexa had just 20 skills, which has reportedly jumped to 5,200 today with the company adding about 100 skills per day.

In fact, Bezos has said, “We’ve been working behind the scenes for the last four years, we have more than 1,000 people working on Alexa and the Echo ecosystem … It’s just the tip of the iceberg.” Just last week, the company launched a new website to help brands and developers create more skills for Alexa.

Source

 

 

Also see:

 

“We are trying to make education more personalised and cognitive through this partnership by creating a technology-driven personalised learning and tutoring,” Lula Mohanty, Vice President, Services at IBM, told ET. IBM will also use its cognitive technology platform, IBM Watson, as part of the partnership.

“We will use the IBM Watson data cloud as part of the deal, and access Watson education insight services, Watson library, student information insights — these are big data sets that have been created through collaboration and inputs with many universities. On top of this, we apply big data analytics,” Mohanty added.

Source

Also see:

  • Most People in Education are Just Looking for Faster Horses, But the Automobile is Coming — from etale.org by Bernard Bull
    Excerpt:
    Most people in education are looking for faster horses. It is too challenging, troubling, or beyond people’s sense of what is possible to really imagine a completely different way in which education happens in the world. That doesn’t mean, however, that the educational equivalent of the automobile is not on its way. I am confident that it is very much on its way. It might even arrive earlier than even the futurists expect. Consider the following prediction.

 
Excerpt from Amazon fumbles earnings amidst high expectations (emphasis DSC):

Aside from AWS, Amazon Alexa-enabled devices were the top-selling products across all categories on Amazon.com throughout the holiday season and the company is reporting that Echo family sales are up over 9x compared to last season. Amazon aims to brand Alexa as a platform, something that has helped the product to gain capabilities faster than its competition. Developers and corporates released 4,000 new skills for the voice assistant in just the last quarter.

 
Alexa got 4,000 new skills in just the last quarter!

From DSC:
What are the teaching & learning ramifications of this?

By the way, I’m not saying that professors, teachers, and trainers should run for the hills (i.e., that they’ll be replaced by AI-based tools). Rather, I’d like to suggest that we not only put this type of thing on our radars, but also begin to actively experiment with such technologies to see if they might be able to help us do some of the heavy lifting for students learning about new topics.
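One low-cost way to start the experimentation suggested above is to prototype the dialog logic of a study-helper voice skill as a plain intent dispatcher, before committing to any vendor's SDK. The intent names, glossary entries, and responses below are all invented for illustration; a real deployment would wire this logic into a platform's skill framework.

```python
def handle_define(slots):
    """Answer 'define X' requests from a tiny illustrative glossary."""
    glossary = {"photosynthesis":
                "the process plants use to convert light into energy"}
    term = slots.get("term", "")
    definition = glossary.get(term)
    return f"{term}: {definition}" if definition else f"I don't know '{term}' yet."

def handle_quiz(slots):
    """Start a one-question practice quiz."""
    return "Here is your question: what gas do plants absorb?"

# Map each recognized intent to its handler function.
HANDLERS = {"DefineTermIntent": handle_define, "StartQuizIntent": handle_quiz}

def dispatch(intent, slots=None):
    """Route a parsed voice request to the matching handler."""
    handler = HANDLERS.get(intent)
    return handler(slots or {}) if handler else "Sorry, I can't help with that."

print(dispatch("DefineTermIntent", {"term": "photosynthesis"}))
```

Sketching the dispatch layer first makes it cheap to test whether a voice interface actually helps learners before investing in any particular ecosystem.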

 

Per X Media Lab:

The authoritative CB Insights lists imminent Future Tech Trends: customized babies; personalized foods; robotic companions; 3D printed housing; solar roads; ephemeral retail; enhanced workers; lab-engineered luxury; botroots movements; microbe-made chemicals; neuro-prosthetics; instant expertise; AI ghosts. You can download the whole outstanding report here (125 pgs).

 

From DSC:
Though I’m generally pro-technology, there are several items in here which support the need for all members of society to be informed and have some input into if and how these technologies should be used. Prime example: Customized babies.  The report discusses the genetic modification of babies: “In the future, we will choose the traits for our babies.” Veeeeery slippery ground here.

 

Below are some example screenshots:

 
Also see:

CBInsights — Innovation Summit

  • The New User Interface: The Challenge and Opportunities that Chatbots, Voice Interfaces and Smart Devices Present
  • Fusing the physical, digital and biological: AI’s transformation of healthcare
  • How predictive algorithms and AI will rule financial services
  • Autonomous Everything: How Connected Vehicles Will Change Mobility and Which Companies Will Own this Future
  • The Next Industrial Age: The New Revenue Sources that the Industrial Internet of Things Unlocks
  • The AI-100: 100 Artificial Intelligence Startups That You Better Know

 
The Periodic Table of AI — from ai.xprize.org by Kris Hammond

Excerpts:

This is an invitation to collaborate.  In particular, it is an invitation to collaborate in framing how we look at and develop machine intelligence. Even more specifically, it is an invitation to collaborate in the construction of a Periodic Table of AI.

Let’s be honest. Thinking about Artificial Intelligence has proven to be difficult for us.  We argue constantly about what is and is not AI.  We certainly cannot agree on how to test for it.  We have difficulty deciding what technologies should be included within it.  And we struggle with how to evaluate it.

Even so, we are looking at a future in which intelligent technologies are becoming commonplace.

With that in mind, we propose an approach to viewing machine intelligence from the perspective of its functional components. Rather than argue about the technologies behind them, the focus should be on the functional elements that make up intelligence.  By stepping away from how these elements are implemented, we can talk about what they are and their roles within larger systems.

 

 

Also see this article, which contains the graphic below:

From DSC:
These graphics are helpful to me, as they increase my understanding of some of the complexities involved within the realm of artificial intelligence.

Also relevant/see:

CES 2017: Intel’s VR visions — from jwtintelligence.com by Shepherd Laughlin
The company showed off advances in volumetric capture, VR live streaming, and “merged reality.”

Excerpt (emphasis DSC):

Live-streaming 360-degree video was another area of focus for Intel. Guests were able to watch a live basketball game being broadcast from Indianapolis, Indiana, choosing from multiple points of view as the action moved up and down the court. Intel “will be among the first technology providers to enable the live sports experience on multiple VR devices,” the company stated.

After taking a 3D scan of the room, Project Alloy can substitute virtual objects where physical objects stand.

 

From DSC:
If viewers of a live basketball game can choose from multiple points of view, why can’t remote learners do this as well with a face-to-face classroom that’s taking place at a university or college?  Learning from the Living [Class] Room.

 

 

 

From CES 2017: Introducing DAQRI’s Smart Glasses™

Excerpt:

Data visualization, guided work instructions, remote expert — for use in a variety of industries: medical, aviation and aerospace, architecture and AEC, lean manufacturing, engineering, and construction.

Third-party Microsoft HoloLens-based mixed reality headsets coming soon, prices to start at $299 — from bgr.in by Deepali Moray
Microsoft has partnered with companies including Dell and Acer which will release their own HoloLens compatible devices.

Excerpt:

The company said that it is teaming up with the likes of Dell, HP, Lenovo and Acer, which will release headsets based on the HoloLens technology. “These new head-mounted displays will be the first consumer offerings utilizing the Mixed Reality capabilities of Windows 10 Creators Update,” a Microsoft spokesperson said. Microsoft’s partner companies for taking the HoloLens technology forward include Dell, HP, Lenovo, Acer, and 3 Glasses. Headsets by these manufacturers will work the same way as the original HoloLens but carry the design and branding of their respective companies. While the HoloLens developer edition costs a whopping $2999 (approximately Rs 2,00,000), the third-party headsets will be priced starting $299 (approximately Rs 20,000).

 

 

Verto Studio 3D App Makes 3D Modeling on HoloLens Easy — from winbuzzer.com by Luke Jones
The upcoming Verto Studio 3D application allows users to create 3D models and interact with them when wearing HoloLens. It is the first software of its kind for mixed reality.

 
Excerpt:
How is The Immersive Experience Delivered?

Tethered Headset VR – The user can participate in a VR experience by using a computer with a tethered VR headset (also known as a Head Mounted Display – HMD) like Facebook’s Oculus Rift, PlayStation VR, or the HTC Vive. The user has the ability to move freely and interact in the VR environment while using a handheld controller to emulate VR hands. But, the user has a limited area in which to move about because they are tethered to a computer.

Non-Tethered Headset VR/AR – These devices are headsets and computers built into one system, so users are free of any cables limiting their movement. These devices use AR to deliver a 360° immersive experience. Much like with Oculus Rift and Vive, the user would be able to move around in the AR environment as well as interact and manipulate objects. A great example of this headset is Microsoft’s HoloLens, which delivers an AR experience to the user through just a headset.

Mobile Device Inserted into a Headgear – To experience VR, the user inserts their mobile device into a Google Cardboard, Samsung Gear 360°, or any other type of mobile device headgear, along with headphones if they choose. This form of VR doesn’t require the user to be tethered to a computer and most VR experiences can be 360° photos, videos, and interactive scenarios.

Mobile VR – The user can access VR without any type of headgear simply by using a mobile device and headphones (optional). They can still have many of the same experiences that they would through Google Cardboard or any other type of mobile device headgear. Although they don’t get the full immersion that they would with headgear, they would still be able to experience VR. Currently, this version of the VR experience seems to be the most popular because it only requires a mobile device. Apps like Pokémon Go and Snapchat’s animated selfie lens only require a mobile device and have a huge number of users.

Desktop VR – Using just a desktop computer, the user can access 360° photos and videos, as well as other VR and AR experiences, by using the trackpad or computer mouse to move their field of view and become immersed in the VR scenario.

New VR – Non-mobile and non-headset platforms like Leap Motion use depth sensors to create a VR image of one’s hands on a desktop computer; they emulate hand gestures in real time. This technology could be used for anything from teaching assembly in a manufacturing plant to learning a step-by-step process to medical training.

VR/AR Solutions

  • Oculus Rift – www.oculus.com
  • HTC Vive – htcvive.com
  • Playstation VR – playstation.com
  • Samsung VR Gear – www.samsung.com
  • Google Daydream – https://vr.google.com/daydream/
  • Leap Motion – www.leapmotion.com
  • Magic Leap – www.magicleap.com
  • Most mobile devices

 

Goggles that are worn, while they are “Oh Myyy” awesome, will not be the final destination of VR/AR. We will want to engage and respond, without wearing a large device over our eyes. Pokémon Go was a good early predictor of how non-goggled experiences will soar.

Elliott Masie

 
Top 8 VR & AR predictions for 2017 — from haptic.al by Christine Hart

Excerpt:

Education will go virtual
Similar to VR for brand engagement, we’ve seen major potential for delivering hands-on training and distance education in a virtual environment. If VR can take a class on a tour of Mars, the current trickle of educational VR could turn into a flood in 2017.

 
Published on Dec 26, 2016
Top 10 Virtual Reality Predictions For 2017 in vTime. It’s been an amazing year for VR and AR: new VR and AR headsets, groundbreaking content, and lots more. 2017 promises to be amazing as well. Here are our top 10 virtual reality predictions for the coming year. Filmed in vTime with vCast. Sorry about the audio quality. We used mics on Rift and Vive, which are very good on other platforms. We’ve reported this to vTime.

 

 


Addendums


 

  • 5 top Virtual Reality and Augmented Reality technology trends for 2017 — from marxentlabs.com by Joe Bardi
    Excerpt:
    So what’s in store for Virtual Reality and Augmented Reality in 2017? We asked Marxent’s talented team of computer vision experts, 3D artists and engineers to help us suss out what the year ahead will hold. Here are their predictions for the top Virtual Reality and Augmented Reality technology trends for 2017.

 

AR becomes the killer app for smartphones

 
5 Online Education Trends to Watch in 2017 — from usnews.com by Jordan Friedman
Experts predict more online programs will offer alternative credentials and degrees in specialized fields.

Excerpts:

  1. Greater emphasis on nontraditional credentials
  2. Increased use of big data to measure student performance
  3. Greater incorporation of artificial intelligence into classes
  4. Growth of nonprofit online programs
  5. Online degrees in surprising and specialized disciplines

 

 

The Future of Online Learning Is Offline: What Strava Can Teach Digital Course Designers — from edsurge.com by Amy Ahearn

Excerpt:

I became a Strava user in 2013, around the same time I became an online course designer. Quickly I found that even as I logged runs on Strava daily, I struggled to find the time to log into platforms like Coursera, Udemy or Udacity to finish courses produced by my fellow instructional designers. What was happening? Why was the fitness app so “sticky” as opposed to the online learning platforms?

As a thought experiment, I tried to recast my Strava experience in pedagogical terms. I realized that I was recording hours of deliberate practice (my early morning runs), formative assessments (the occasional speed workout on the track) and even a few summative assessments (races) on the app. Strava was motivating my consistent use by overlaying a digital grid on my existing offline activities. It let me reconnect with college teammates who could keep me motivated. It enabled me to analyze the results of my efforts and compare them to others. I didn’t have to be trapped behind a computer to benefit from this form of digital engagement—yet it was giving me personalized feedback and results. How could we apply the same practices to learning?

I’ve come to believe that one of the biggest misunderstandings about online learning is that it has to be limited to things that can be done in front of a computer screen. Instead, we need to reimagine online courses as something that can enable the interplay between offline activities and digital augmentation.

A few companies are heading that way. Edthena enables teachers to record videos of themselves teaching and then upload these to the platform to get feedback from mentors.

 

 

DIY’s JAM online courses let kids complete hands-on activities like drawing or building with LEGOs and then have them upload pictures of their work to earn badges and share their projects.

 

 

My team at +Acumen has built online courses that let teams complete projects together offline and then upload their prototypes to the NovoEd platform to receive feedback from peers. University campuses are integrating Kaltura into their LMS platforms to enable students to capture and upload videos.

 

 

We need to focus less on building multiple choice quizzes or slick lecture videos and more on finding ways to robustly capture evidence of offline learning that can be validated and critiqued at scale by peers and experts online.
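The closing idea, capturing evidence of offline learning for validation by peers, can be sketched as a small data model: a submission holds a link to the offline artifact, peers attach reviews, and a badge is granted once enough reviewers approve. The field names and the approval threshold are assumptions for illustration, not any platform's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Submission:
    """Evidence of offline work, submitted for peer validation."""
    learner: str
    artifact_url: str                 # photo/video of the offline activity
    reviews: list = field(default_factory=list)  # (reviewer, approved, note)

    def add_review(self, reviewer, approved, note=""):
        self.reviews.append((reviewer, approved, note))

    def badge_earned(self, required_approvals=2):
        """Award the badge once enough peers have approved the evidence."""
        approvals = sum(1 for _, approved, _ in self.reviews if approved)
        return approvals >= required_approvals

sub = Submission("amy", "https://example.org/run-photo.jpg")
sub.add_review("peer1", True, "Nice pacing data")
sub.add_review("peer2", True)
print(sub.badge_earned())   # True once two peers approve
```

The design choice worth noting is that the unit of record is the evidence plus its reviews, not a quiz score: the "grading" happens through human critique, with the platform only tracking and aggregating it, which is what lets offline learning scale online.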

© 2016 Learning Ecosystems