How Amazon’s purchase of Whole Foods highlights the hybrid, ‘omnichannel’ future of higher ed — from edsurge.com by Sean Gallagher

Excerpt:

The expectation that students can integrate their learning experiences across channels is now arriving in higher education. Online education has reached a tipping point where almost 30 percent of all students in U.S. higher education are enrolled in at least one online college course. A significant number of students are already blending their experience across online and offline channels—and numerous data points speak to the evolving value of blending online delivery with physical presence, as suggested by Amazon.

In national surveys of prospective adult students that we have conducted regularly at Northeastern University over recent years, we have consistently found that 60 percent of students prefer a blended or hybrid learning experience. In other words, the majority of the higher education student market is neglected by today’s dominant approach that focuses on offering either online or in-person programs.

Like Amazon, the colleges and universities that are able to deliver across channels—leveraging the combination of physical presence and online algorithms—will be uniquely positioned to take advantage of the in-demand, destination nature of studying in certain cities; the local sourcing of faculty; and proximity to key employers, industries, and job opportunities.

 

Over the next decade, growth and competitive success in higher education will not be a function of who is able to offer online programs. Instead, the successful institutions will be those that can symbiotically integrate their place-based educational operations and experiences with software-driven analytics, learning science, and machine learning to create a more personalized experience. A more Amazon-like experience.

 

 


From DSC:
A few side comments here:

  1. The future won’t be kind to those institutions that haven’t built up their “street cred” in the digital/virtual space. For example, if you are working at a traditional institution of higher education that doesn’t have online-based programs — nor any plans to create such programs in the future — you should get your resume up to date and start looking…now.
  2. For data/analytics to have a significant impact and inform strategic or pedagogical decisions, one first needs to collect the data. That is not hard to do online, but it is very difficult — at least at a granular level — in a face-to-face environment.
  3. Coursera’s MeetUps around the world — where learners are encouraged to join study and discussion groups related to their online-only courses — make me wonder about the future of learning spaces and whether your local Starbucks might morph into a learning hub.

What a future, powerful, global learning platform will look & act like [Christian]


Learning from the Living [Class] Room:
A vision for a global, powerful, next generation learning platform

By Daniel Christian

NOTE: Having recently lost my Senior Instructional Designer position due to a staff reduction program, I am looking to help build such a platform as this. So if you are working on such a platform or know of someone who is, please let me know: danielchristian55@gmail.com.

I want to help people reinvent themselves quickly, efficiently, and cost-effectively — while providing more choice, more control to lifelong learners. This will become critically important as artificial intelligence, robotics, algorithms, and automation continue to impact the workplace.


 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

Learning from the Living [Class] Room:
A global, powerful, next generation learning platform

 

What does the vision entail?

  • A new, global, collaborative learning platform that offers more choice, more control to learners of all ages – 24×7 – and could become the organization that futurist Thomas Frey discusses here with Business Insider:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.

  • A learner-centered platform that is enabled by – and reliant upon – human beings but is backed up by a powerful suite of technologies that work together in order to help people reinvent themselves quickly, conveniently, and extremely cost-effectively
  • An AI-backed system that analyzes employment trends and opportunities and highlights the courses and “streams of content” that will help someone obtain the most in-demand skills
  • A system that tracks learning and, via Blockchain-based technologies, feeds all completed learning modules/courses into learners’ web-based learner profiles (a minimal sketch of this idea appears just after this list)
  • A learning platform that provides customized, personalized recommendation lists – based upon the learner’s goals
  • A platform that delivers customized, personalized learning within a self-directed course (meant for those content creators who want to deliver more sophisticated courses/modules while moving people through the relevant Zones of Proximal Development)
  • Notifications and/or inspirational quotes, available upon request, to help provide motivation, encouragement, and accountability – helping learners establish habits of continual, lifelong learning
  • (Potentially) An online-based marketplace, matching learners with teachers, professors, and other such Subject Matter Experts (SMEs)
  • (Potentially) Direct access to popular job search sites
  • (Potentially) Direct access to resources that describe what other companies do/provide and descriptions of any particular company’s culture (as described by current and former employees and freelancers)
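
Below is a minimal sketch of the “tracks learning / Blockchain-based learner profile” idea referenced in the list above. It is not tied to any particular blockchain, vendor, or standard; the class names, fields, and the simple hash-chaining scheme are assumptions made purely for illustration.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class LearningRecord:
    """One completed module/course, as it might appear in a web-based learner profile."""
    learner_id: str
    module: str
    skills: list
    completed_at: str
    prev_hash: str = ""
    record_hash: str = ""

    def seal(self) -> "LearningRecord":
        """Compute a tamper-evident hash that chains this record to the previous one."""
        payload = json.dumps(
            {k: v for k, v in asdict(self).items() if k != "record_hash"},
            sort_keys=True,
        )
        self.record_hash = hashlib.sha256(payload.encode("utf-8")).hexdigest()
        return self


class LearnerProfile:
    """A minimal, append-only learner profile backed by a hash chain (blockchain-inspired, not an actual blockchain)."""

    def __init__(self, learner_id: str):
        self.learner_id = learner_id
        self.records = []

    def add_completion(self, module: str, skills: list) -> LearningRecord:
        """Feed a completed module/course into the profile, chained to the prior record."""
        prev_hash = self.records[-1].record_hash if self.records else "GENESIS"
        record = LearningRecord(
            learner_id=self.learner_id,
            module=module,
            skills=skills,
            completed_at=datetime.now(timezone.utc).isoformat(),
            prev_hash=prev_hash,
        ).seal()
        self.records.append(record)
        return record

    def verify(self) -> bool:
        """Confirm that no record has been altered since it was sealed."""
        prev = "GENESIS"
        for r in self.records:
            if r.prev_hash != prev or LearningRecord(**asdict(r)).seal().record_hash != r.record_hash:
                return False
            prev = r.record_hash
        return True


if __name__ == "__main__":
    profile = LearnerProfile("learner-001")
    profile.add_completion("Intro to Data Analysis", ["spreadsheets", "statistics"])
    profile.add_completion("Bookkeeping Fundamentals", ["accounting", "bookkeeping"])
    print("chain intact:", profile.verify())
```

In a fuller implementation, each sealed record would also be anchored to an actual distributed ledger and surfaced through the learner’s web-based profile.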

Further details:
While basic courses will be accessible via mobile devices, the optimal learning experience will leverage two or more displays/devices. Smartphones, laptops, and/or desktop workstations will be used to communicate synchronously or asynchronously with other learners, while the larger displays will deliver an excellent learning environment for situations such as:

  • A Subject Matter Expert (SME) giving a talk or making a presentation on any given topic
  • A need to display multiple things going on at once, such as:
    • The SME(s)
    • An application or multiple applications that the SME(s) are using
    • Content/resources that learners are submitting in real-time (think Bluescape, T1V, Prysm, other)
    • The ability to annotate on top of the application(s) and point to things w/in the app(s)
    • Media being used to support the presentation such as pictures, graphics, graphs, videos, simulations, animations, audio, links to other resources, GPS coordinates for an app such as Google Earth, other
    • Other attendees (think Google Hangouts, Skype, Polycom, or other videoconferencing tools)
  • An (optional) representation of the Personal Assistant (such as today’s Alexa, Siri, M, Google Assistant, etc.) that’s being employed via the use of Artificial Intelligence (AI)

This new learning platform will also feature:

  • Voice-based commands to drive the system (via Natural Language Processing (NLP))
  • Language translation (using technologies similar to what’s being used in Translate One2One, an earpiece powered by IBM Watson)
  • Speech-to-text capabilities for use w/ chatbots, messaging, and inserting discussion board postings
  • Text-to-speech capabilities as an assistive technology, and so that anyone can stay mobile while listening to what has been typed
  • Chatbots
    • For learning how to use the system
    • For asking questions of – and addressing any issues with – the organization owning the system (credentials, payments, obtaining technical support, etc.)
    • For asking questions within a course
  • As many profiles as needed per household
  • (Optional) Machine-to-machine-based communications to automatically launch the correct profile when the system is initiated (from one’s smartphone, laptop, workstation, and/or tablet to a receiver for the system)
  • (Optional) Voice recognition to efficiently launch the desired profile
  • (Optional) Facial recognition to efficiently launch the desired profile
  • (Optional) Upon system launch, to immediately return to where the learner previously left off
  • The capability of the webcam to recognize objects and bring up relevant resources for that object
  • A built-in RSS feed aggregator – or a similar technology – to enable learners to tap into the relevant “streams of content” that are constantly flowing by them (a minimal sketch of this idea follows this list)
  • Social media dashboards/portals – providing quick access to multiple sources of content and whereby learners can contribute their own “streams of content”
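
To make the “streams of content” idea above a bit more concrete, here is a rough sketch of a tiny feed aggregator. It assumes the third-party feedparser package (pip install feedparser), and the feed URLs are placeholders rather than real sources.

```python
# A minimal sketch of a "streams of content" aggregator: pull several RSS/Atom
# feeds into one newest-first stream a learner could scan each day.
# Assumes the third-party `feedparser` package; the URLs below are placeholders.
import feedparser

FEEDS = [
    "https://example.com/edtech.rss",       # hypothetical ed-tech feed
    "https://example.com/ai-research.rss",  # hypothetical AI research feed
]


def aggregate(feed_urls, limit=20):
    """Return up to `limit` entries across all feeds as (published, title, link) tuples."""
    entries = []
    for url in feed_urls:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            entries.append(
                (entry.get("published", ""), entry.get("title", ""), entry.get("link", ""))
            )
    # Sorting on the raw "published" string is a simplification; a real
    # implementation would parse dates and de-duplicate entries.
    return sorted(entries, reverse=True)[:limit]


if __name__ == "__main__":
    for published, title, link in aggregate(FEEDS):
        print(f"{published} | {title} | {link}")
```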

In the future, new forms of Human Computer Interaction (HCI) such as Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) will be integrated into this new learning environment – providing entirely new means of collaborating with one another.

Likely players:

  • Amazon – personal assistance via Alexa
  • Apple – personal assistance via Siri
  • Google – personal assistance via Google Assistant; language translation
  • Facebook — personal assistance via M
  • Microsoft – personal assistance via Cortana; language translation
  • IBM Watson – cognitive computing; language translation
  • Polycom – videoconferencing
  • Blackboard – videoconferencing, application sharing, chat, interactive whiteboard
  • T1V, Prysm, and/or Bluescape – submitting content to a digital canvas/workspace
  • Samsung, Sharp, LG, and others – for large displays with integrated microphones, speakers, webcams, etc.
  • Feedly – RSS aggregator
  • _________ – for providing backchannels
  • _________ – for tools to create videocasts and interactive videos
  • _________ – for blogs, wikis, podcasts, journals
  • _________ – for quizzes/assessments
  • _________ – for discussion boards/forums
  • _________ – for creating AR, MR, and/or VR-based content

 

 

From DSC:
Given the increasing use of robotics, automation, and artificial intelligence…how should the question of “What sort of education will you need to be employable in the future?” impact what’s being taught within K-12 and within higher education? Should certain areas within higher education, for example, take ownership of this research, as well as the strategic planning around it and the question of whether the core curricula need to change in light of this increasingly important trend?

The future’s coming at us fast — perhaps faster than we think. It seems prudent to work through some potential scenarios and develop plans for those various scenarios now, rather than react to this trend at some point in the future. If we wait, we’ll be trying to “swim up the backside of the wave” as my wise and wonderful father-in-law would say.

 



The above reflections occurred after I reviewed the posting out at cmrubinworld.com (with thanks to @STEMbyThomas for this resource):

  • The Global Search for Education: What Does My Robot Think?
    Excerpt:
    The Global Search for Education is pleased to welcome Ling Lee, Co-Curator of Robots and the Contemporary Science Manager for Exhibitions at the Science Museum in London, to discuss the impact of robots on our past and future.

The Classroom of Tomorrow: A Panel Discussion — sponsored by Kaltura

Description:
Technology is changing the way we approach education, rapidly. But what will tomorrow’s classroom actually look like? We’ve invited some leading experts for a spirited debate about what the future holds for educational institutions. From personalization to predictive analytics to portable digital identities, we’ll explore the biggest changes coming. We’ll see how new technologies might interact with changing demographics, business models, dropout rates, and more.

Panelists:

  • David Nirenberg – Dean of the Division of the Social Sciences, University of Chicago
  • Rick Kamal – Chief Technology Officer, Harvard Business School, HBX
  • Gordon Freedman – President, National Laboratory for Education Transformation
  • Michael Markowitz – Entrepreneur and Investor, Education
  • Dr Michal Tsur – Co-founder and President, Kaltura

 

Also see:

  • Roadmap to the Future — by Dr Michal Tsur – Co-founder and President, Kaltura
    What are some of the leading trends emerging from the educational technology space? Michal Tsur takes you on a quick tour of big trends you should be aware of. Then, get a glimpse of Kaltura’s own roadmap for lecture capture and more.

 

 

Regarding the above items, some thoughts from DSC:
Kaltura did a nice job of keeping the focus on a discussion about the future of the classroom and on some trends to be aware of, rather than on their own company (especially in the panel discussion). They did mention a few things about their newest effort, Kaltura Lecture Capture, but kept that to a very reasonable amount.

 

 

The first state to offer free community college to nearly every adult – from npr.org by Emily Siner

Excerpt:

The opportunity to go to college for free is more available than ever before. States and cities, in the last year especially, have funded programs for students to go to two-year, and in some cases, four-year, schools.

Tennessee has taken the idea one step further. Community college is already free for graduating high school students. Now Tennessee is the first state in the country to offer community college — free of charge — to almost any adult.

Republican Gov. Bill Haslam has long preached the importance of getting adults back to school. He says it’s the only way that more than half of Tennesseans will get a college degree or certificate.

And the program is simple: If you don’t have a degree, and you want one, your tuition is free.

 

From DSC:
I’m listing universities and colleges as some of the selected keywords/categories here as well, as such institutions will certainly be significantly impacted if this becomes a trend.

Increasingly, people need to reinvent themselves in order to remain marketable and employed — and to do so as quickly and cost-effectively as possible. That’s what I want to be involved in/with. The direction I would like to personally pursue is the development of a next generation learning platform/paradigm/system that helps people reinvent themselves quickly and cost-effectively.* A system that offers constant, up-to-date, curated micro-learning streams of content on a lifelong basis. Team-based efforts will leverage this platform within K-12 and higher ed, as well as in the corporate learning & development space. Such a system will be accessed on the road, at home, in the office, in group study spaces/learning hubs, and in classrooms across the land.

 

*If you or someone you know is working on a state-of-the-art, next generation learning platform, please email me at danielchristian55@gmail.com and let me know. I would greatly appreciate being involved in the development of this kind of learning platform — working on what the various pieces/tools should be and how the various features should work and interoperate. I can plug into other areas as well.

The Slickest Things Google Debuted [on 5/17/17] at Its Big Event — from wired.com by Arielle Pardes

Excerpt (emphasis DSC):

At this year’s Google I/O, the company’s annual developer conference and showcase, CEO Sundar Pichai made one thing very clear: Google is moving toward an AI-first approach in its products, which means pretty soon, everything you do on Google will be powered by machine learning. During Wednesday’s keynote speech, we saw that approach seep into all of Google’s platforms, from Android to Gmail to Google Assistant, each of which are getting spruced up with new capabilities thanks to AI. Here’s our list of the coolest things Google announced today.

 

 

Google Lens Turns Your Camera Into a Search Box — from wired.com by David Pierce

Excerpt:

Google is remaking itself as an AI company, a virtual assistant company, a classroom-tools company, a VR company, and a gadget maker, but it’s still primarily a search company. And [on 5/17/17] at Google I/O, its annual gathering of developers, CEO Sundar Pichai announced a new product called Google Lens that amounts to an entirely new way of searching the internet: through your camera.

Lens is essentially image search in reverse: you take a picture, Google figures out what’s in it. This AI-powered computer vision has been around for some time, but Lens takes it much further. If you take a photo of a restaurant, Lens can do more than just say “it’s a restaurant,” which you know, or “it’s called Golden Corral,” which you also know. It can automatically find you the hours, or call up the menu, or see if there’s a table open tonight. If you take a picture of a flower, rather than getting unneeded confirmation of its flower-ness, you’ll learn that it’s an Elatior Begonia, and that it really needs indirect, bright light to survive. It’s a full-fledged search engine, starting with your camera instead of a text box.
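
Lens itself is proprietary, so the following is only a generic sketch of the underlying idea: recognize what is in a photo, then treat the predicted label as a search query. The model choice (a pretrained torchvision ResNet), the file name, and the final “search for the label” step are all assumptions for illustration, not a description of how Lens actually works.

```python
# Generic "image in, label out, label becomes a query" sketch.
# Assumes torch, torchvision, and Pillow are installed; "photo.jpg" is a placeholder.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT          # pretrained ImageNet weights
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()                  # the matching preprocessing pipeline

image = Image.open("photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)             # add a batch dimension

with torch.no_grad():
    probabilities = model(batch).softmax(dim=1)[0]

best = probabilities.argmax().item()
label = weights.meta["categories"][best]
print(f"Best guess: {label} ({probabilities[best].item():.1%})")
print(f"A Lens-like next step: run a web search for '{label}'")
```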

 

 

Google’s AI Chief On Teaching Computers To Learn–And The Challenges Ahead — from fastcompany.com by Harry McCracken
When it comes to AI technologies such as machine learning, Google’s aspirations are too big for it to accomplish them all itself.

Excerpt:

“Last year, we talked about becoming an AI-first company and people weren’t entirely sure what we meant,” he told me. With this year’s announcements, it’s not only understandable but tangible.

“We see our job as evangelizing this new shift in computing,” Giannandrea says.


Matching people with jobs
Pichai concluded the I/O keynote by previewing Google for Jobs, an upcoming career search engine that uses machine learning to understand job listings–a new approach that is valuable, Giannandrea says, even though looking for a job has been a largely digital activity for years. “They don’t do a very good job of classifying the jobs,” Giannandrea says. “It’s not just that I’m looking for part-time work within five miles of my house–I’m looking for an accounting job that involves bookkeeping.”
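
Google for Jobs’ internals are not public, so the snippet below is only a toy illustration of the general idea Giannandrea describes: classifying a listing by what the role actually is, rather than matching raw keywords. The tiny training set, the labels, and the model choice (a scikit-learn TF-IDF + logistic regression pipeline) are all made up for the sketch.

```python
# Toy "classify the job, don't just keyword-match it" sketch using scikit-learn.
# The listings and labels are invented examples, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

listings = [
    "Part-time bookkeeper to manage accounts payable and monthly reconciliation",
    "Staff accountant, CPA preferred, general ledger and bookkeeping duties",
    "Retail associate needed for evening shifts, cash handling experience a plus",
    "Warehouse associate, forklift certification required, weekend availability",
]
labels = ["accounting", "accounting", "retail", "warehouse"]

classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(listings, labels)

query = "Accounting job that involves bookkeeping, part-time, within five miles of my house"
print(classifier.predict([query])[0])  # expected: "accounting"
```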

 

 

Google Assistant Comes to Your iPhone to Take on Siri — from wired.com by David Pierce

 

 

Google rattles the tech world with a new AI chip for all — from wired.com by Cade Metz

 

 

I/O 2017 Recap — from Google.com

 

 

The most important announcements from Google I/O 2017! — from androidcentral.com by Alex Dobie

 

 

Google IO 2017: All the announcements in one place! — from androidauthority.com by Kris Carlon

A question/reflection from DSC:


Will #MOOCs provide the necessary data for #AI-based intelligent agents/algorithms? Reminds me of Socratic.org:


 

 


Somewhat related:

 

2017 is the year of artificial intelligence. Here’s why. — from weforum.org

Excerpt (emphasis DSC):

A recent acceleration of innovation in Artificial Intelligence (AI) has made it a hot topic in boardrooms, government, and the media. But it is still early, and everyone seems to have a different view of what AI is.

I have investigated the space over the last few years as a technologist and active investor. What is remarkable now is that things that haven’t worked for decades in the space are starting to work; and we are going beyond just tools and embedded functions.

We are starting to redefine how software and systems are built, what can be programmed, and how users interact. We are creating a world where machines are starting to understand and anticipate what we want to do – and, in the future, will do it for us. In short, we are on the cusp of a completely new computing paradigm. But how did we get here and why now?

 

 

 

Veeery interesting. Alexa now adds visuals / a screen! With the addition of 100 skills a day, where might this new platform lead?

Amazon introduces Echo Show

The description reads:

  • Echo Show brings you everything you love about Alexa, and now she can show you things. Watch video flash briefings and YouTube, see music lyrics, security cameras, photos, weather forecasts, to-do and shopping lists, and more. All hands-free—just ask.
  • Introducing a new way to be together. Make hands-free video calls to friends and family who have an Echo Show or the Alexa App, and make voice calls to anyone who has an Echo or Echo Dot.
  • See lyrics on-screen with Amazon Music. Just ask to play a song, artist or genre, and stream over Wi-Fi. Also, stream music on Pandora, Spotify, TuneIn, iHeartRadio, and more.
  • Powerful, room-filling speakers with Dolby processing for crisp vocals and extended bass response
  • Ask Alexa to show you the front door or monitor the baby’s room with compatible cameras from Ring and Arlo. Turn on lights, control thermostats and more with WeMo, Philips Hue, ecobee, and other compatible smart home devices.
  • With eight microphones, beam-forming technology, and noise cancellation, Echo Show hears you from any direction—even while music is playing
  • Always getting smarter and adding new features, plus thousands of skills like Uber, Jeopardy!, Allrecipes, CNN, and more

 
From DSC:

Now we’re seeing a major competition among the heavy-hitters to own one’s living room, kitchen, and more. Voice-controlled artificial intelligence. But now, add the ability to show videos, text, graphics, and more. Play music. Control the lights and the thermostat. Communicate with others via hands-free video calls.

Hmmm….very interesting times indeed.

 

 

Developers and corporates released 4,000 new skills for the voice assistant in just the last quarter. (source)

 

…with the company adding about 100 skills per day. (source)

 

 

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

Addendum on 5/10/17:
