The case for a next generation learning platform [Grush & Christian]

 

The case for a next generation learning platform — from campustechnology.com by Mary Grush & Daniel Christian

Excerpt (emphasis DSC):

Grush: Then what are some of the implications you could draw from metrics like that one?

Christian: As we consider all the investment in those emerging technologies, the question many are beginning to ask is, “How will these technologies impact jobs and the makeup of our workforce in the future?”

While there are many thoughts and questions regarding the cumulative impact these technologies will have on our future workforce (e.g., “How many jobs will be displaced?”), the consensus seems to be that there will be massive change.

Whether our jobs are completely displaced or we end up working alongside robots, chatbots, workbots, or some other forms of AI-backed personal assistants, all of us will need to become lifelong learners — constantly reinventing ourselves. This assertion is also made in the aforementioned study from McKinsey: “AI promises benefits, but also poses urgent challenges that cut across firms, developers, government, and workers. The workforce needs to be re-skilled to exploit AI rather than compete with it…”

 

 

A side note from DSC:
I began working on this vision prior to 2010…but I didn’t officially document it until 2012.

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

Learning from the Living [Class] Room:

A global, powerful, next generation learning platform

 

What does the vision entail?

  • A new, global, collaborative learning platform that offers more choice, more control to learners of all ages – 24×7 – and could become the organization that futurist Thomas Frey discusses here with Business Insider:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.

  • A learner-centered platform that is enabled by – and reliant upon – human beings but is backed up by a powerful suite of technologies that work together in order to help people reinvent themselves quickly, conveniently, and extremely cost-effectively
  • A customizable learning environment that will offer up-to-date streams of regularly curated content (i.e., microlearning) as well as engaging learning experiences
  • Along these lines, a lifelong learner can opt to receive an RSS feed on a particular topic until they master that concept; periodic quizzes (i.e., spaced repetition) determine that mastery. Once the topic is mastered, the system will ask the learner whether they still want to receive that particular stream of content. (A rough sketch of this mastery check appears after this list.)
  • A Netflix-like interface to peruse and select plugins to extend the functionality of the core product
  • An AI-backed system that analyzes employment trends and opportunities and highlights those courses and streams of content that will help someone obtain the most in-demand skills
  • A system that tracks learning and, via Blockchain-based technologies, feeds all completed learning modules/courses into learners’ web-based learner profiles
  • A learning platform that provides customized, personalized recommendation lists – based upon the learner’s goals
  • A platform that delivers customized, personalized learning within a self-directed course (meant for those content creators who want to deliver more sophisticated courses/modules while moving people through the relevant Zones of Proximal Development)
  • Notifications and/or inspirational quotes will be available upon request to help provide motivation, encouragement, and accountability – helping learners establish habits of continual, lifelong learning
  • (Potentially) An online-based marketplace, matching learners with teachers, professors, and other such Subject Matter Experts (SMEs)
  • (Potentially) Direct access to popular job search sites
  • (Potentially) Direct access to resources that describe what other companies do/provide and descriptions of any particular company’s culture (as described by current and former employees and freelancers)
  • (Potentially) Integration with one-on-one tutoring services
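
To make the spaced-repetition idea above a bit more concrete, here is a minimal sketch in Python. The pass mark, the number of passes required for mastery, and the interval-doubling schedule are hypothetical placeholders rather than any platform’s actual algorithm:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class TopicStream:
    """Tracks a learner's progress toward mastery of one stream of content."""
    topic: str
    interval_days: int = 1          # days until the next quiz
    consecutive_passes: int = 0     # correct quizzes in a row
    next_quiz: date = field(default_factory=date.today)
    mastered: bool = False

    def record_quiz(self, score: float, pass_mark: float = 0.8,
                    passes_needed: int = 3) -> None:
        """Update the quiz schedule after each quiz (hypothetical doubling schedule)."""
        if score >= pass_mark:
            self.consecutive_passes += 1
            self.interval_days *= 2          # space the next quiz further out
        else:
            self.consecutive_passes = 0
            self.interval_days = 1           # reset to frequent review
        self.next_quiz = date.today() + timedelta(days=self.interval_days)
        if self.consecutive_passes >= passes_needed:
            self.mastered = True             # prompt: keep or drop this stream?

# Example: a learner working through a "Statistics basics" stream
stream = TopicStream("Statistics basics")
for s in (0.9, 0.85, 0.95):
    stream.record_quiz(s)
print(stream.mastered, stream.next_quiz)     # True, next quiz pushed ~8 days out
```

In a real platform the schedule and quizzes would come from a proper spaced-repetition model and assessment engine; the point here is only the keep-quizzing-until-mastered loop that decides when to ask whether the stream should continue.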

Further details here >>


Addendum from DSC (regarding the resource mentioned below):
Note the voice recognition/control mechanisms on Westinghouse’s new product — also note the integration of Amazon’s Alexa into a “TV.”



 

Westinghouse’s Alexa-equipped Fire TV Edition smart TVs are now available — from theverge.com by Chaim Gartenberg

 

The key selling point, of course, is the built-in Amazon Fire TV, which is controlled with the bundled Voice Remote and features Amazon’s Alexa assistant.

 

 

 

Finally…also see:

  • NASA unveils a skill for Amazon’s Alexa that lets you ask questions about Mars — from geekwire.com by Kevin Lisota
  • Holographic storytelling — from jwtintelligence.com
    The stories of Holocaust survivors are brought to life with the help of interactive 3D technologies.
    New Dimensions in Testimony is a new way of preserving history for future generations. The project brings to life the stories of Holocaust survivors with 3D video, revealing raw first-hand accounts that are more interactive than learning through a history book.  Holocaust survivor Pinchas Gutter, the first subject of the project, was filmed answering over 1000 questions, generating approximately 25 hours of footage. By incorporating natural language processing from the USC Institute for Creative Technologies (ICT), people are able to ask Gutter’s projected image questions that trigger relevant responses.

What a future, powerful, global learning platform will look & act like [Christian]


Learning from the Living [Class] Room:
A vision for a global, powerful, next generation learning platform

By Daniel Christian

NOTE: Having recently lost my Senior Instructional Designer position due to a staff reduction program, I am looking to help build a platform such as this. So if you are working on such a platform or know of someone who is, please let me know: danielchristian55@gmail.com.

I want to help people reinvent themselves quickly, efficiently, and cost-effectively — while providing more choice, more control to lifelong learners. This will become critically important as artificial intelligence, robotics, algorithms, and automation continue to impact the workplace.


 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

Learning from the Living [Class] Room:
A global, powerful, next generation learning platform

 

What does the vision entail?

  • A new, global, collaborative learning platform that offers more choice, more control to learners of all ages – 24×7 – and could become the organization that futurist Thomas Frey discusses here with Business Insider:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.

  • A learner-centered platform that is enabled by – and reliant upon – human beings but is backed up by a powerful suite of technologies that work together in order to help people reinvent themselves quickly, conveniently, and extremely cost-effectively
  • An AI-backed system that analyzes employment trends and opportunities and highlights those courses and “streams of content” that will help someone obtain the most in-demand skills
  • A system that tracks learning and, via blockchain-based technologies, feeds all completed learning modules/courses into learners’ web-based learner profiles (a toy sketch of this kind of tamper-evident record appears after this list)
  • A learning platform that provides customized, personalized recommendation lists – based upon the learner’s goals
  • A platform that delivers customized, personalized learning within a self-directed course (meant for those content creators who want to deliver more sophisticated courses/modules while moving people through the relevant Zones of Proximal Development)
  • Notifications and/or inspirational quotes will be available upon request to help provide motivation, encouragement, and accountability – helping learners establish habits of continual, lifelong learning
  • (Potentially) An online-based marketplace, matching learners with teachers, professors, and other such Subject Matter Experts (SMEs)
  • (Potentially) Direct access to popular job search sites
  • (Potentially) Direct access to resources that describe what other companies do/provide and descriptions of any particular company’s culture (as described by current and former employees and freelancers)
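
As a rough illustration of the record-keeping behind the blockchain-based learner profile item above, here is a toy hash chain in Python. It is not a distributed ledger and the field names are made up; it only shows why appending hash-linked records makes earlier entries hard to alter silently:

```python
import hashlib
import json
from datetime import datetime, timezone

def add_completed_module(profile: list, learner_id: str, module: str) -> list:
    """Append a completed module to the learner's profile as a hash-chained record."""
    previous_hash = profile[-1]["hash"] if profile else "0" * 64
    record = {
        "learner_id": learner_id,
        "module": module,
        "completed_at": datetime.now(timezone.utc).isoformat(),
        "previous_hash": previous_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    profile.append(record)
    return profile

profile: list = []
add_completed_module(profile, "learner-42", "Intro to Data Analysis")
add_completed_module(profile, "learner-42", "Spreadsheet Fundamentals")
# Any later edit to an earlier record breaks the chain of hashes that follows it.
print(len(profile), profile[-1]["previous_hash"] == profile[0]["hash"])  # 2 True
```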

Further details:
While basic courses will be accessible via mobile devices, the optimal learning experience will leverage two or more displays/devices. So while smaller smartphones, laptops, and/or desktop workstations will be used to communicate synchronously or asynchronously with other learners, the larger displays will deliver an excellent learning environment for times when there is:

  • A Subject Matter Expert (SME) giving a talk or making a presentation on any given topic
  • A need to display multiple things going on at once, such as:
    • The SME(s)
    • An application or multiple applications that the SME(s) are using
    • Content/resources that learners are submitting in real-time (think Bluescape, T1V, Prysm, other)
    • The ability to annotate on top of the application(s) and point to things w/in the app(s)
    • Media being used to support the presentation such as pictures, graphics, graphs, videos, simulations, animations, audio, links to other resources, GPS coordinates for an app such as Google Earth, other
    • Other attendees (think Google Hangouts, Skype, Polycom, or other videoconferencing tools)
    • An (optional) representation of the Personal Assistant (such as today’s Alexa, Siri, M, Google Assistant, etc.) that’s being employed via the use of Artificial Intelligence (AI)

This new learning platform will also feature:

  • Voice-based commands to drive the system (via Natural Language Processing (NLP))
  • Language translation (using techs similar to what’s being used in Translate One2One, an earpiece powered by IBM Watson)
  • Speech-to-text capabilities for use w/ chatbots, messaging, inserting discussion board postings
  • Text-to-speech capabilities as an assistive technology and so that anyone can stay mobile while listening to what’s been typed
  • Chatbots
    • For learning how to use the system
    • For asking questions of – and addressing any issues with – the organization owning the system (credentials, payments, obtaining technical support, etc.)
    • For asking questions within a course
  • As many profiles as needed per household
  • (Optional) Machine-to-machine-based communications to automatically launch the correct profile when the system is initiated (from one’s smartphone, laptop, workstation, and/or tablet to a receiver for the system)
  • (Optional) Voice recognition to efficiently launch the desired profile
  • (Optional) Facial recognition to efficiently launch the desired profile
  • (Optional) The ability, upon system launch, to return immediately to where the learner previously left off
  • A webcam capable of recognizing objects and bringing up relevant resources for each object
  • A built-in RSS feed aggregator – or a similar technology – to enable learners to tap into the relevant “streams of content” that are constantly flowing by them (a minimal aggregation sketch follows this list)
  • Social media dashboards/portals – providing quick access to multiple sources of content and letting learners contribute their own “streams of content”
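
As a small illustration of the built-in feed aggregator mentioned in the list above, the sketch below pulls the newest items from a learner’s subscribed streams using the feedparser library. The feed URLs are placeholders:

```python
import feedparser  # pip install feedparser

# Hypothetical streams a learner has subscribed to.
FEEDS = [
    "https://example.com/data-science.rss",
    "https://example.com/instructional-design.rss",
]

def latest_items(feed_urls, per_feed=3):
    """Pull the newest entries from each subscribed stream of content."""
    items = []
    for url in feed_urls:
        parsed = feedparser.parse(url)
        for entry in parsed.entries[:per_feed]:
            items.append({
                "feed": parsed.feed.get("title", url),
                "title": entry.get("title", "(untitled)"),
                "link": entry.get("link", ""),
            })
    return items

for item in latest_items(FEEDS):
    print(f'{item["feed"]}: {item["title"]} -> {item["link"]}')
```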

In the future, new forms of Human Computer Interaction (HCI) such as Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) will be integrated into this new learning environment – providing entirely new means of collaborating with one another.

Likely players:

  • Amazon – personal assistance via Alexa
  • Apple – personal assistance via Siri
  • Google – personal assistance via Google Assistant; language translation
  • Facebook — personal assistance via M
  • Microsoft – personal assistance via Cortana; language translation
  • IBM Watson – cognitive computing; language translation
  • Polycom – videoconferencing
  • Blackboard – videoconferencing, application sharing, chat, interactive whiteboard
  • T1V, Prysm, and/or Bluescape – submitting content to a digital canvas/workspace
  • Samsung, Sharp, LG, and others – for large displays with integrated microphones, speakers, webcams, etc.
  • Feedly – RSS aggregator
  • _________ – for providing backchannels
  • _________ – for tools to create videocasts and interactive videos
  • _________ – for blogs, wikis, podcasts, journals
  • _________ – for quizzes/assessments
  • _________ – for discussion boards/forums
  • _________ – for creating AR, MR, and/or VR-based content

 

 

A smorgasbord of ideas to put on your organization’s radar! [Christian]

From DSC:
At the Next Generation Learning Spaces Conference, held recently in San Diego, CA, I moderated a panel discussion re: AR, VR, and MR. I started off with some introductory ideas and remarks — meant to make sure that numerous ideas were on the radars of attendees’ organizations. Then Vinay and Carrie did a super job of addressing several topics and questions (Mary was unable to make it that day, as she got stuck in the UK due to transportation-related issues).

That said, I didn’t get a chance to finish the second part of the presentation, which I’ve listed below in both 4:3 and 16:9 formats. So I made a recording of these ideas, and I’m relaying it to you in the hopes that it can help you and your organization.

 


Presentations/recordings:


 

Audio/video recording (187 MB MP4 file)

 

 


Again, I hope you find this information helpful.

Thanks,
Daniel

 

 

 

Per X Media Lab:

The authoritative CB Insights lists imminent Future Tech Trends: customized babies; personalized foods; robotic companions; 3D printed housing; solar roads; ephemeral retail; enhanced workers; lab-engineered luxury; botroots movements; microbe-made chemicals; neuro-prosthetics; instant expertise; AI ghosts. You can download the whole outstanding report here (125 pgs).

 

From DSC:
Though I’m generally pro-technology, there are several items in here that support the need for all members of society to be informed and have some input into whether and how these technologies should be used. Prime example: customized babies. The report discusses the genetic modification of babies: “In the future, we will choose the traits for our babies.” Veeeeery slippery ground here.

 

Below are some example screenshots:


Also see:

CBInsights — Innovation Summit

  • The New User Interface: The Challenge and Opportunities that Chatbots, Voice Interfaces and Smart Devices Present
  • Fusing the physical, digital and biological: AI’s transformation of healthcare
  • How predictive algorithms and AI will rule financial services
  • Autonomous Everything: How Connected Vehicles Will Change Mobility and Which Companies Will Own this Future
  • The Next Industrial Age: The New Revenue Sources that the Industrial Internet of Things Unlocks
  • The AI-100: 100 Artificial Intelligence Startups That You Better Know

 

 

 
 

From DSC:
Here’s an idea that came to my mind the other day as I was walking by a person who was trying to put some books back onto the shelves within our library.

 

danielchristian-books-sensors-m2m-oct2016

 

 

From DSC:
Perhaps this idea is not very timely…as many collections of books will likely continue to be digitized and made available electronically. But preservation is still a goal for many libraries out there.

 

 

Also see:

IoT and the Campus of Things — from er.educause.edu

Excerpt:

Today, the IoT sits at the peak of Gartner’s Hype Cycle. It’s probably not surprising that industry is abuzz with the promise of streaming sensor data. The oft quoted “50 billion connected devices by 2020!” has become a rallying cry for technology analysts, chip vendors, network providers, and other proponents of a deeply connected, communicating world. What is surprising is that academia has been relatively slow to join the parade, particularly when the potential impacts are so exciting. Like most organizations that manage significant facilities, universities stand to benefit by adopting the IoT as part of their management strategy. The IoT also affords new opportunities to improve the customer experience. For universities, this means the ability to provide new student services and improve on those already offered. Perhaps most surprisingly, the IoT represents an opportunity to better engage a diverse student base in computer science and engineering, and to amplify these programs through meaningful interdisciplinary collaboration.

The potential benefits of the IoT to the academic community extend beyond facilities management to improving our students’ experience. The lowest hanging fruit can be harvested by adapting some of the smart city applications that have emerged. What student hasn’t shown up late to class after circling the parking lot looking for a space? Ask any student at a major university if it would improve their campus experience to be able to check on their smart phones which parking spots were available. The answer will be a resounding “yes!” and there’s nothing futuristic about it. IoT parking management systems are commercially available through a number of vendors. This same type of technology can be adapted to enable students to find open meeting rooms, computer facilities, or café seating. What might be really exciting for students living in campus dormitories: A guarantee that they’ll never walk down three flights of stairs balancing two loads of dirty laundry to find that none of the washing machines are available. On many campuses, the washing machines are already network-connected to support electronic payment; availability reporting is a straightforward extension.
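
The author’s point that availability reporting is a straightforward extension of machines that are already networked can be illustrated with a minimal sketch. The endpoint, machine IDs, and status fields below are hypothetical, and Flask is used only as a convenient example web framework:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical state pushed by networked washers (e.g., via their payment controllers).
WASHERS = {
    "dorm-a-basement-1": {"status": "in_use", "minutes_remaining": 23},
    "dorm-a-basement-2": {"status": "available", "minutes_remaining": 0},
    "dorm-a-basement-3": {"status": "available", "minutes_remaining": 0},
}

@app.route("/api/washers")
def washer_availability():
    """Report which machines are free so students can check before hauling laundry."""
    free = [wid for wid, w in WASHERS.items() if w["status"] == "available"]
    return jsonify({"available": free, "machines": WASHERS})

if __name__ == "__main__":
    app.run(port=5000)
```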

 

 

Also see:

2016 Innovators Awards | A Location-Aware App for Exploring the Library — from campustechnology.com by Meg Lloyd
To help users access rich information resources on campus, the University of Oklahoma Libraries created a mobile app with location-based navigation and “hyperlocal” content.

Category: Education Futurists

Institution: University of Oklahoma

Project: OU Libraries NavApp

Project lead: Matt Cook, emerging technologies librarian

Tech lineup: Aruba, Meridian, RFIP

 

 

Somewhat related:

Amazon is winning the race to the future — from bizjournals.com

Excerpt:

This is the week when artificially intelligent assistants start getting serious.

On Tuesday, Google is expected to announce the final details for Home, its connected speaker with the new Google Assistant built inside.

But first Amazon, which surprised everyone last year by practically inventing the AI-in-a-can platform, will release a new version of the Echo Dot, a cheaper and smaller model of the full-sized Echo that promises to put the company’s Alexa assistant in every room in your house.

The Echo Dot has all the capabilities of the original Echo, but at a much cheaper price, and with a compact form factor that’s designed to be tucked away. Because of its size (it looks like a hockey puck from the future), its sound quality isn’t as good as the Echo, but it can hook up to an external speaker through a standard audio cable or Bluetooth.

 

amazon-newdot-oct2016

 

 

100 bot people to watch #BotWatch #1 — from chatbotsmagazine.com

Excerpt:

100 people to watch in the bot space, in no order.

I’ll publish a new list once a month. This one is #1 October 2016.

This is my personal top 100 for people to watch in the bot space.

 

 

Should We Give Chatbots Their Own Personalities? — from re-work.com by Sophie Curtis

Excerpt:

Today, we have machines that assemble cars, make candy bars, defuse bombs, and a myriad of other things. They can dispense our drinks, facilitate our bank deposits, and find the movies we want to watch with a touch of the screen.

Automation allows all kinds of amazing things, but it is all done with virtually no personality. Building a chatbot with the ability to be conversational with emotion is crucial to getting people to gain trust in the technology. And now there are plenty of tools and resources available to rapidly create and launch chatbots with the personality customers want and businesses need.

Jordi Torras is CEO and Founder of Inbenta, a company that specializes in NLP, semantic search and chatbots to improve customer experience. We spoke to him ahead of his presentation at the Virtual Assistant Summit in San Francisco, to learn about the recent explosion of chatbots and virtual assistants, and what we can expect to see in the future.

 

 

 

How I built and launched my first chatbot in hours — from chatbotsmagazine.com by Max Pelzner
From idea to MVB (Minimum Viable Bot), and launched in 24 hours!

 

 

 

Developing a Chatbot? Do Not Make These Mistakes! — from chatbotsmagazine.com by Hira Saeed

 

 

 

This is what an A.I.-powered future looks like — from venturebeat.com by Grayson Brulte

Excerpt:

Today, we are just beginning to scratch the surface of what is possible with artificial intelligence (A.I.) and how individuals will interact with its various forms. Every single aspect of our society — from cars to houses to products to services — will be reimagined and redesigned to incorporate A.I.

A child born in the year 2030 will not comprehend why his or her parents once had to manually turn on the lights in the living room. In the future, the smart home will seamlessly know the needs, wants, and habits of the individuals who live in the home prior to them taking an action.

Before we arrive at this future, it is helpful to take a step back and reimagine how we design cars, houses, products, and services. We are just beginning to see glimpses of this future with the Amazon Echo and Google Home smart voice assistants.

 

 

Artificial intelligence created to fold laundry for you — from geek.com by Matthew Humphries

Excerpt:

So, Seven Dreamers Laboratories, in collaboration with Panasonic and Daiwa House Industry, have created just such a machine. However, folding laundry correctly turns out to be quite a complicated task, and so an artificial intelligence was required to make it a reliable process.

Laundry folding is actually a five stage process, including:

  • Grabbing
  • Spreading
  • Recognizing
  • Folding
  • Sorting/Storing

The grabbing and spreading seems pretty easy, but then the machine needs to understand what type of clothing it needs to fold. That recognizing stage requires both image recognition and AI. The image recognition classifies the type of clothing, then the AI figures out which processes to use in order to start folding.
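
The recognize-then-fold step described above is essentially a classify-and-dispatch pipeline. The sketch below shows only that control flow; the garment classifier is a stub, since the actual vision model and folding routines are not public:

```python
# Hypothetical fold programs keyed by garment type; the real machine's routines
# and its image-recognition model are not public, so this is only the control flow.
FOLD_PROGRAMS = {
    "t_shirt": ["fold sleeves in", "fold bottom to collar", "fold in half"],
    "towel": ["fold in half lengthwise", "fold in thirds"],
    "trousers": ["fold leg over leg", "fold in thirds"],
}

def classify_garment(image) -> str:
    """Stub for the image-recognition stage (would be a trained vision model)."""
    return "t_shirt"

def fold_item(image) -> list:
    """Grabbing/spreading happen upstream; here we recognize, then pick a fold program."""
    garment = classify_garment(image)
    steps = FOLD_PROGRAMS.get(garment)
    if steps is None:
        return ["set aside for manual folding"]   # unknown garment type
    return steps

print(fold_item(image=None))  # ['fold sleeves in', 'fold bottom to collar', 'fold in half']
```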


2 days of global chatbot experts at Talkabot in 12 minutes — from chatbotsmagazine.com by Alec Lazarescu

Excerpt:

During a delightful “cold spell” in Austin at the end of September, a few hundred chatbot enthusiasts joined together for the first talkabot.ai conference.

As a participant both writing about and building chatbots, I’m excited to share a mix of valuable actionable insights and strategic vision directions picked up from speakers and attendees as well as behind the scenes discussions with the organizers from Howdy.

In a very congenial and collaborative atmosphere, a number of valuable recurring themes stood out from a variety of expert speakers ranging from chatbot builders to tool makers to luminaries from adjacent industries.

 

 

 


Addendum:


 

alexaprize-2016

The Alexa Prize (emphasis DSC)

The way humans interact with machines is at an inflection point and conversational artificial intelligence (AI) is at the center of the transformation. Alexa, the voice service that powers Amazon Echo, enables customers to interact with the world around them in a more intuitive way using only their voice.

The Alexa Prize is an annual competition for university students dedicated to accelerating the field of conversational AI. The inaugural competition is focused on creating a socialbot, a new Alexa skill that converses coherently and engagingly with humans on popular topics and news events. Participating teams will advance several areas of conversational AI including knowledge acquisition, natural language understanding, natural language generation, context modeling, commonsense reasoning and dialog planning. Through the innovative work of students, Alexa customers will have novel, engaging conversations. And, the immediate feedback from Alexa customers will help students improve their algorithms much faster than previously possible.

Amazon will award the winning team $500,000. Additionally, a prize of $1 million will be awarded to the winning team’s university if their socialbot achieves the grand challenge of conversing coherently and engagingly with humans on popular topics for 20 minutes.
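
To make the socialbot task a bit more concrete, here is a deliberately naive sketch of a topic-tracking dialog loop. Real entries would use natural language understanding, knowledge retrieval, and learned dialog policies; the keyword matching and canned responses below are placeholders:

```python
import random

# Placeholder topic keywords and canned responses; a real socialbot would generate
# responses from knowledge sources rather than pick from a fixed list.
TOPICS = {
    "movies": ["Have you seen anything good lately?", "Which genre do you enjoy most?"],
    "sports": ["Which team do you follow?", "Did you catch the game last night?"],
    "news": ["What story caught your eye today?", "How do you feel about that development?"],
}

def detect_topic(utterance: str, current_topic: str) -> str:
    """Very naive context modeling: switch topics only on an explicit keyword."""
    for topic in TOPICS:
        if topic in utterance.lower():
            return topic
    return current_topic          # otherwise stay on the current topic

def respond(utterance: str, current_topic: str):
    topic = detect_topic(utterance, current_topic)
    return random.choice(TOPICS[topic]), topic

topic = "news"
for user_turn in ["I mostly read the news in the morning", "Actually I love movies"]:
    reply, topic = respond(user_turn, topic)
    print(f"user: {user_turn}\nbot:  {reply}  (topic: {topic})")
```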

 

 

 

ngls-2017-conference

 

From DSC:
I have attended the Next Generation Learning Spaces Conference for the past two years. Both conferences were very solid and they made a significant impact on our campus, as they provided the knowledge, research, data, ideas, contacts, and the catalyst for us to move forward with building a Sandbox Classroom on campus. This new, collaborative space allows us to experiment with different pedagogies as well as technologies. As such, we’ve been able to experiment much more with active learning-based methods of teaching and learning. We’re still in Phase I of this new space, and we’re learning new things all of the time.

For the upcoming conference in February, I will be moderating a New Directions in Learning panel on the use of augmented reality (AR), virtual reality (VR), and mixed reality (MR). Time permitting, I hope that we can also address other promising, emerging technologies that are heading our way such as chatbots, personal assistants, artificial intelligence, the Internet of Things, tvOS, blockchain and more.

The goal of this quickly-moving, engaging session will be to provide a smorgasbord of ideas to generate creative, innovative, and big thinking. We need to think about how these topics, trends, and technologies relate to what our next generation learning environments might look like in the near future — and put these things on our radars if they aren’t already there.

Key takeaways for the panel discussion:

  • Reflections regarding the affordances that new developments in Human Computer Interaction (HCI) — such as AR, VR, and MR — might offer for our learning and our learning spaces (or is our concept of what constitutes a learning space about to significantly expand?)
  • An update on the state of the approaching ed tech landscape
  • Creative, new thinking: What might our next generation learning environments look like in 5-10 years?

I’m looking forward to catching up with friends, meeting new people, and to the solid learning that I know will happen at this conference. I encourage you to check out the conference and register soon to take advantage of the early bird discounts.

 

 

If you doubt that we are on an exponential pace of change, you need to check these articles out! [Christian]

exponentialpaceofchange-danielchristiansep2016

 

From DSC:
The articles listed in this PDF document demonstrate the exponential pace of technological change that many nations across the globe are currently experiencing and will likely be experiencing for the foreseeable future. As we are no longer on a linear trajectory, we need to consider what this new trajectory means for how we:

  • Educate and prepare our youth in K-12
  • Educate and prepare our young men and women studying within higher education
  • Restructure/re-envision our corporate training/L&D departments
  • Equip our freelancers and others to find work
  • Help people in the workforce remain relevant/marketable/properly skilled
  • Encourage and better enable lifelong learning
  • Attempt to keep up w/ this pace of change — legally, ethically, morally, and psychologically

 

PDF file here

 

One thought that comes to mind…when we’re moving this fast, we need to be looking upwards and outwards into the horizons — constantly pulse-checking the landscapes. We can’t be looking down or be so buried in our current positions/tasks that we aren’t noticing the changes that are happening around us.

 

 

 

From DSC:

Regarding the new Mirror product from Estimote — i.e., the world’s 1st video-enabled beacon — what might the applications look like for active learning classrooms (ALCs)?

That is, could students pre-load their content, then come into an active learning classroom and, upon request, launch an app which would then present their content to the nearest display?
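
Estimote’s actual SDK calls are not reproduced here, but the underlying idea (pick the nearest display from beacon signal strength, then hand it the student’s pre-loaded content) can be sketched as follows. All device names, RSSI values, and the cast() call are hypothetical:

```python
# Hypothetical beacon readings the student's app might collect in the classroom;
# higher RSSI (closer to zero) generally means the beacon/display is nearer.
readings = {
    "display-front-left": -72,
    "display-front-right": -58,
    "display-rear": -85,
}

preloaded_content = {
    "student": "jsmith",
    "deck_url": "https://example.edu/decks/project-update.pdf",
}

def nearest_display(rssi_by_display: dict) -> str:
    """Pick the display whose beacon signal is strongest."""
    return max(rssi_by_display, key=rssi_by_display.get)

def cast(content: dict, display_id: str) -> None:
    """Stand-in for whatever casting call the classroom system would expose."""
    print(f'Sending {content["deck_url"]} to {display_id}')

cast(preloaded_content, nearest_display(readings))  # display-front-right
```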

 


 

 

danielchristian-mirror-apps

 

 


 

Also see:

Launching Estimote Mirror – the world’s first video-enabled beacon — from blog.estimote.com

Excerpt:

Today we want to move contextual computing to a completely new level. We are happy to announce our newest product: Estimote Mirror. It’s the world’s first video-enabled beacon. Estimote Mirror can not only communicate with nearby phones and their corresponding apps, but also take content from these apps and display it on any digital screen around you.

 

 

 


 

 
