What a future, powerful, global learning platform will look & act like [Christian]


Learning from the Living [Class] Room:
A vision for a global, powerful, next generation learning platform

By Daniel Christian

NOTE: Having recently lost my Senior Instructional Designer position due to a staff reduction program, I am looking to help build a platform like this. If you are working on such a platform, or know someone who is, please let me know: danielchristian55.com.

I want to help people reinvent themselves quickly, efficiently, and cost-effectively — while providing more choice, more control to lifelong learners. This will become critically important as artificial intelligence, robotics, algorithms, and automation continue to impact the workplace.


 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

Learning from the Living [Class] Room:
A global, powerful, next generation learning platform

 

What does the vision entail?

  • A new, global, collaborative learning platform that offers more choice, more control to learners of all ages – 24×7 – and could become the organization that futurist Thomas Frey discusses here with Business Insider:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.

  • A learner-centered platform that is enabled by – and reliant upon – human beings but is backed up by a powerful suite of technologies that work together in order to help people reinvent themselves quickly, conveniently, and extremely cost-effectively
  • An AI-backed system for analyzing employment trends and opportunities will highlight the courses and “streams of content” that help someone obtain the most in-demand skills
  • A system that tracks learning and, via Blockchain-based technologies, feeds all completed learning modules/courses into learners’ web-based learner profiles
  • A learning platform that provides customized, personalized recommendation lists – based upon the learner’s goals
  • A platform that delivers customized, personalized learning within a self-directed course (meant for those content creators who want to deliver more sophisticated courses/modules while moving people through the relevant Zones of Proximal Development)
  • Notifications and/or inspirational quotes will be available upon request to help provide motivation, encouragement, and accountability – helping learners establish habits of continual, lifelong-based learning
  • (Potentially) An online-based marketplace, matching learners with teachers, professors, and other such Subject Matter Experts (SMEs)
  • (Potentially) Direct access to popular job search sites
  • (Potentially) Direct access to resources that describe what other companies do/provide and descriptions of any particular company’s culture (as described by current and former employees and freelancers)
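To make the Blockchain-based learner profile idea above a bit more concrete, here is a minimal sketch in Python of a hash-chained credential ledger. Everything here is illustrative – the names, the modules, and the single in-memory list standing in for what a real platform would keep on a distributed ledger:

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a credential record together with the previous entry's hash,
    so any later tampering breaks the chain."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

class LearnerProfile:
    """A hypothetical append-only ledger of completed modules/courses."""

    def __init__(self, learner_id: str):
        self.learner_id = learner_id
        self.entries = []  # list of (record, hash) pairs

    def add_completion(self, module: str, issuer: str) -> str:
        """Append a completed module; chain its hash to the previous entry."""
        prev = self.entries[-1][1] if self.entries else "genesis"
        record = {"learner": self.learner_id, "module": module, "issuer": issuer}
        h = record_hash(record, prev)
        self.entries.append((record, h))
        return h

    def verify(self) -> bool:
        """Recompute every hash; any edited record breaks verification."""
        prev = "genesis"
        for record, h in self.entries:
            if record_hash(record, prev) != h:
                return False
            prev = h
        return True
```

The hash chaining is the essential blockchain-style property for credentials: an employer (or the platform) can re-verify the whole history, and no completed module can be quietly altered after the fact.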

Further details:
While basic courses will be accessible via mobile devices, the optimal learning experience will leverage two or more displays/devices. Smaller devices – smartphones, laptops, and/or desktop workstations – will be used to communicate synchronously or asynchronously with other learners, while the larger displays will deliver an excellent learning environment for times when there is:

  • A Subject Matter Expert (SME) giving a talk or making a presentation on any given topic
  • A need to display multiple things going on at once, such as:
    • The SME(s)
    • An application or multiple applications that the SME(s) are using
    • Content/resources that learners are submitting in real-time (think Bluescape, T1V, Prysm, or others)
    • The ability to annotate on top of the application(s) and point to things w/in the app(s)
    • Media being used to support the presentation, such as pictures, graphics, graphs, videos, simulations, animations, audio, links to other resources, GPS coordinates for an app such as Google Earth, and more
    • Other attendees (think Google Hangouts, Skype, Polycom, or other videoconferencing tools)
    • An (optional) representation of the Personal Assistant (such as today’s Alexa, Siri, M, Google Assistant, etc.) that’s being employed via the use of Artificial Intelligence (AI)

This new learning platform will also feature:

  • Voice-based commands to drive the system (via Natural Language Processing (NLP))
  • Language translation (using techs similar to what’s being used in Translate One2One, an earpiece powered by IBM Watson)
  • Speech-to-text capabilities for use w/ chatbots, messaging, and inserting discussion board postings
  • Text-to-speech capabilities as an assistive technology, and also to let anyone stay mobile while listening to what’s been typed
  • Chatbots
    • For learning how to use the system
    • For asking questions of – and addressing any issues with – the organization owning the system (credentials, payments, obtaining technical support, etc.)
    • For asking questions within a course
  • As many profiles as needed per household
  • (Optional) Machine-to-machine-based communications to automatically launch the correct profile when the system is initiated (from one’s smartphone, laptop, workstation, and/or tablet to a receiver for the system)
  • (Optional) Voice recognition to efficiently launch the desired profile
  • (Optional) Facial recognition to efficiently launch the desired profile
  • (Optional) Upon system launch, to immediately return to where the learner previously left off
  • The capability of the webcam to recognize objects and bring up relevant resources for that object
  • A built-in RSS feed aggregator – or a similar technology – to enable learners to tap into the relevant “streams of content” that are constantly flowing by them
  • Social media dashboards/portals – providing quick access to multiple sources of content and whereby learners can contribute their own “streams of content”
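As a rough illustration of the built-in RSS aggregator idea above, here is a minimal sketch using only the Python standard library. The feed below is a made-up example; a real aggregator would fetch many feeds over HTTP and merge them into one running “stream of content”:

```python
import xml.etree.ElementTree as ET

# A hypothetical RSS 2.0 feed, inlined for illustration.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Learning Stream</title>
    <item><title>New course: Intro to AI</title><link>https://example.com/ai</link></item>
    <item><title>Webinar: Blockchain credentials</title><link>https://example.com/bc</link></item>
  </channel>
</rss>"""

def parse_feed(xml_text: str):
    """Return (feed_title, [(item_title, link), ...]) from an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    channel = root.find("channel")
    feed_title = channel.findtext("title")
    items = [(item.findtext("title"), item.findtext("link"))
             for item in channel.findall("item")]
    return feed_title, items

def aggregate(feeds):
    """Merge items from several feeds into one stream of (title, link) pairs."""
    stream = []
    for xml_text in feeds:
        _, items = parse_feed(xml_text)
        stream.extend(items)
    return stream
```

In practice a learner would subscribe to feeds matching their goals, and the platform’s recommendation engine could filter this merged stream down to the most relevant items.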

In the future, new forms of Human Computer Interaction (HCI) such as Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) will be integrated into this new learning environment – providing entirely new means of collaborating with one another.

Likely players:

  • Amazon – personal assistance via Alexa
  • Apple – personal assistance via Siri
  • Google – personal assistance via Google Assistant; language translation
  • Facebook — personal assistance via M
  • Microsoft – personal assistance via Cortana; language translation
  • IBM Watson – cognitive computing; language translation
  • Polycom – videoconferencing
  • Blackboard – videoconferencing, application sharing, chat, interactive whiteboard
  • T1V, Prysm, and/or Bluescape – submitting content to a digital canvas/workspace
  • Samsung, Sharp, LG, and others – for large displays with integrated microphones, speakers, webcams, etc.
  • Feedly – RSS aggregator
  • _________ – for providing backchannels
  • _________ – for tools to create videocasts and interactive videos
  • _________ – for blogs, wikis, podcasts, journals
  • _________ – for quizzes/assessments
  • _________ – for discussion boards/forums
  • _________ – for creating AR, MR, and/or VR-based content

 

 

Web host agrees to pay $1m after it’s hit by Linux-targeting ransomware — from arstechnica.com by Dan Goodin
Windfall payment by poorly secured host is likely to inspire new ransomware attacks.

Excerpt (emphasis above and below by DSC):

A Web-hosting service recently agreed to pay $1 million to a ransomware operation that encrypted data stored on 153 Linux servers and 3,400 customer websites, the company said recently.

The South Korean Web host, Nayana, said in a blog post published last week that initial ransom demands were for five billion won worth of Bitcoin, which is roughly $4.4 million. Company negotiators later managed to get the fee lowered to 1.8 billion won and ultimately landed a further reduction to 1.2 billion won, or just over $1 million. An update posted Saturday said Nayana engineers were in the process of recovering the data. The post cautioned that the recovery was difficult and would take time.

 

 

 

AIG teams with IBM to use blockchain for ‘smart’ insurance policy — from reuters.com by Suzanne Barlyn

Excerpt (emphasis DSC):

Insurer American International Group Inc has partnered with International Business Machines Corp to develop a “smart” insurance policy that uses blockchain to manage complex international coverage, the companies said on Wednesday.

AIG and IBM completed a pilot of a so-called “smart contract” multi-national policy for Standard Chartered Bank PLC which the companies said is the first of its kind using blockchain’s digital ledger technology.

IBM has been partnering with leading companies in various industries, including Danish transport company Maersk, to create blockchain-based products that can streamline complex international dealings across sectors.

 

Blockchain technology, which powers the digital currency bitcoin, enables data sharing across a network of individual computers. It has gained worldwide popularity due to its usefulness in recording and keeping track of assets or transactions across all industries.

 

 

From DSC:
Why post this item? Because IBM and others are experimenting with and investing millions into blockchain-based technologies; and because the manner in which credentials are stored and recognized will most likely be significantly impacted by blockchain-based technologies. Earlier this year at the Next Generation Learning Spaces Conference in San Diego, I mentioned that this topic of blockchain-based technologies is something that should be on our radars within higher education.

From DSC:
Given the increasing use of robotics, automation, and artificial intelligence…how should the question of “What sort of education will you need to be employable in the future?” impact what’s being taught within K-12 & within higher education? Should certain areas within higher education, for example, start owning this research, as well as the strategic planning and whether changes are needed to the core curricula for this increasingly important trend?

The future’s coming at us fast — perhaps faster than we think. It seems prudent to work through some potential scenarios and develop plans for those various scenarios now, rather than react to this trend at some point in the future. If we wait, we’ll be trying to “swim up the backside of the wave” as my wise and wonderful father-in-law would say.

 



The above reflections occurred after I reviewed the posting out at cmrubinworld.com (with thanks to @STEMbyThomas for this resource):

  • The Global Search for Education: What Does My Robot Think?
    Excerpt:
    The Global Search for Education is pleased to welcome Ling Lee, Co-Curator of Robots and the Contemporary Science Manager for Exhibitions at the Science Museum in London, to discuss the impact of robots on our past and future.

From Apple itself:

 

  • HomePod reinvents music in the home
    San Jose, California — Apple today announced HomePod, a breakthrough wireless speaker for the home that delivers amazing audio quality and uses spatial awareness to sense its location in a room and automatically adjust the audio. Designed to work with an Apple Music subscription for access to over 40 million songs, HomePod provides deep knowledge of personal music preferences and tastes and helps users discover new music.

    As a home assistant, HomePod is a great way to send messages, get updates on news, sports and weather, or control smart home devices by simply asking Siri to turn on the lights, close the shades or activate a scene. When away from home, HomePod is the perfect home hub, providing remote access and home automations through the Home app on iPhone or iPad.

Also see:



 

The 8 biggest announcements from Apple WWDC 2017 — from theverge.com by Natt Garun

Excerpt:

Apple introduced a new ARKit to let developers build augmented reality apps for the iPhone. The kit can help find planes, track motion, and estimate scale and ambient lighting. Popular apps like Pokémon Go will also use ARKit for improved real-time renders.

Rather than requiring external hardware like Microsoft’s HoloLens, Apple seems to be betting on ARKit to provide impressive quality imaging through a device most people already own. We’ll know more on how the quality actually compares when we get to try it out ourselves.

 

 

Everything Apple Announced Today at WWDC — from wired.com by Arielle Pardes

Excerpt:

On Monday, over 5,000 developers packed the San Jose Convention Center to listen to Tim Cook and other Apple execs share the latest innovations out of Cupertino. Over the course of two and a half hours, the company unveiled its most powerful Mac yet, a long-awaited Siri speaker, and tons of new software upgrades across all of the Apple platforms, from your iPhone to your Apple Watch. Missed the keynote speech? Here’s a recap of the nine biggest announcements from WWDC 2017.

 

 

Apple is launching an iOS ‘ARKit’ for augmented reality apps — from theverge.com by Adi Robertson

Excerpt:

Apple has announced a tool it calls ARKit, which will provide advanced augmented reality capabilities on iOS. It’s supposed to allow for “fast and stable motion tracking” that makes objects look like they’re actually being placed in real space, instead of simply hovering over it.

 

 

Apple is finally bringing virtual reality to the Mac – from businessinsider.com by Matt Weinberger

Excerpt:

Apple is finally bringing virtual reality support to its Mac laptops and desktops, bringing the company up to speed with what many see as the next phase of computing.

At Monday’s Apple WWDC event in San Jose, the company announced that with this fall’s MacOS High Sierra update, the Mac will support external graphics hardware — meaning you can plug in a box and greatly increase your machine’s graphical capabilities.

In turn, that external hardware will give the Mac the boost it needs to support virtual reality headsets, which require superior performance to create an immersive experience.

 

 

The Slickest Things Google Debuted [on 5/17/17] at Its Big Event — from wired.com by Arielle Pardes

Excerpt (emphasis DSC):

At this year’s Google I/O, the company’s annual developer conference and showcase, CEO Sundar Pichai made one thing very clear: Google is moving toward an AI-first approach in its products, which means pretty soon, everything you do on Google will be powered by machine learning. During Wednesday’s keynote speech, we saw that approach seep into all of Google’s platforms, from Android to Gmail to Google Assistant, each of which are getting spruced up with new capabilities thanks to AI. Here’s our list of the coolest things Google announced today.

 

 

Google Lens Turns Your Camera Into a Search Box — from wired.com by David Pierce

Excerpt:

Google is remaking itself as an AI company, a virtual assistant company, a classroom-tools company, a VR company, and a gadget maker, but it’s still primarily a search company. And [on 5/17/17] at Google I/O, its annual gathering of developers, CEO Sundar Pichai announced a new product called Google Lens that amounts to an entirely new way of searching the internet: through your camera.

Lens is essentially image search in reverse: you take a picture, Google figures out what’s in it. This AI-powered computer vision has been around for some time, but Lens takes it much further. If you take a photo of a restaurant, Lens can do more than just say “it’s a restaurant,” which you know, or “it’s called Golden Corral,” which you also know. It can automatically find you the hours, or call up the menu, or see if there’s a table open tonight. If you take a picture of a flower, rather than getting unneeded confirmation of its flower-ness, you’ll learn that it’s an Elatior Begonia, and that it really needs indirect, bright light to survive. It’s a full-fledged search engine, starting with your camera instead of a text box.

 

 

Google’s AI Chief On Teaching Computers To Learn–And The Challenges Ahead — from fastcompany.com by Harry McCracken
When it comes to AI technologies such as machine learning, Google’s aspirations are too big for it to accomplish them all itself.

Excerpt:

“Last year, we talked about becoming an AI-first company and people weren’t entirely sure what we meant,” he told me. With this year’s announcements, it’s not only understandable but tangible.

“We see our job as evangelizing this new shift in computing,” Giannandrea says.


Matching people with jobs
Pichai concluded the I/O keynote by previewing Google for Jobs, an upcoming career search engine that uses machine learning to understand job listings–a new approach that is valuable, Giannandrea says, even though looking for a job has been a largely digital activity for years. “They don’t do a very good job of classifying the jobs,” Giannandrea says. “It’s not just that I’m looking for part-time work within five miles of my house–I’m looking for an accounting job that involves bookkeeping.”

 

 

Google Assistant Comes to Your iPhone to Take on Siri — from wired.com by David Pierce

 

 

Google rattles the tech world with a new AI chip for all — from wired.com by Cade Metz

 

 

I/O 2017 Recap — from Google.com

 

 

The most important announcements from Google I/O 2017! — from androidcentral.com by Alex Dobie

 

 

Google IO 2017: All the announcements in one place! — from androidauthority.com by Kris Carlon

2017 is the year of artificial intelligence. Here’s why. — from weforum.org

Excerpt (emphasis DSC):

A recent acceleration of innovation in Artificial Intelligence (AI) has made it a hot topic in boardrooms, government, and the media. But it is still early, and everyone seems to have a different view of what AI is.

I have investigated the space over the last few years as a technologist and active investor. What is remarkable now is that things that haven’t worked for decades in the space are starting to work; and we are going beyond just tools and embedded functions.

We are starting to redefine how software and systems are built, what can be programmed, and how users interact. We are creating a world where machines are starting to understand and anticipate what we want to do – and, in the future, will do it for us. In short, we are on the cusp of a completely new computing paradigm. But how did we get here and why now?

 

 

 

From DSC:
There are now more than 12,000 skills on Amazon’s new platform — Alexa. I continue to wonder…what will this new platform mean/deliver to societies throughout the globe?


 

From this Alexa Skills Kit page:

What Is an Alexa Skill?
Alexa is Amazon’s voice service and the brain behind millions of devices including Amazon Echo. Alexa provides capabilities, or skills, that enable customers to create a more personalized experience. There are now more than 12,000 skills from companies like Starbucks, Uber, and Capital One as well as innovative designers and developers.

What Is the Alexa Skills Kit?
With the Alexa Skills Kit (ASK), designers, developers, and brands can build engaging skills and reach millions of customers. ASK is a collection of self-service APIs, tools, documentation, and code samples that makes it fast and easy for you to add skills to Alexa. With ASK, you can leverage Amazon’s knowledge and pioneering work in the field of voice design.

You can build and host most skills for free using Amazon Web Services (AWS).
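For the curious: a custom Alexa skill is ultimately just code that receives a JSON request from Alexa and returns a JSON response in the Alexa Skills Kit response format. A minimal sketch in Python – the intent names here are hypothetical, invented for a learning-platform skill:

```python
def build_alexa_response(speech_text: str, end_session: bool = True) -> dict:
    """Build the minimal JSON body an Alexa custom skill returns,
    per the Alexa Skills Kit response format."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

def handle_request(event: dict) -> dict:
    """A toy handler routing by request type and intent name.
    'NextLessonIntent' is a made-up intent for illustration."""
    request = event.get("request", {})
    if request.get("type") == "LaunchRequest":
        return build_alexa_response("Welcome to the learning platform.",
                                    end_session=False)
    intent = request.get("intent", {}).get("name")
    if intent == "NextLessonIntent":
        return build_alexa_response("Resuming where you left off.")
    return build_alexa_response("Sorry, I didn't catch that.")
```

A deployed skill would typically run this handler in an AWS Lambda function, with the intents and sample utterances declared in the skill’s interaction model.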
The 2017 Dean’s List: EdTech’s 50 Must-Read Higher Ed Blogs — from edtechmagazine.com by Meghan Bogardus Cortez
These administrative all-stars, IT gurus, teachers and community experts understand how the latest technology is changing the nature of education.

Excerpt:

With summer break almost here, we’ve got an idea for how you can use some of your spare time. Take a look at the Dean’s List, our compilation of the must-read blogs that seek to make sense of higher education in today’s digital world.

Follow these education trailblazers for not-to-be-missed analyses of the trends, challenges and opportunities that technology can provide.

If you’d like to check out the Must-Read IT blogs from previous years, view our lists from 2016, 2015, 2014 and 2013.

 

 



From DSC:
I would like to thank Tara Buck, Meghan Bogardus Cortez, D. Frank Smith, Meg Conlan, and Jimmy Daly and the rest of the staff at EdTech Magazine for their support of this Learning Ecosystems blog through the years — I really appreciate it. 

Thanks all for your encouragement through the years!
What to look for when hiring an entry-level data scientist? — from datasciencecentral.com

Excerpt:

What I look for the most is some signal that the junior data scientist:

  1. Has the drive and determination to be a self-directed learner
  2. Understands the fundamentals of “enough” programming
  3. Understands how to analyze data when the goals and metrics are not explicit or time-boxed

Let’s put aside the need for some level of formal training; that is a non-negotiable baseline. You have to have enough understanding of mathematics and statistics to know when you are getting yourself into trouble, you have to understand data management practice enough to understand how to access data, and you have to understand enough about machine learning to make the appropriate series of tradeoffs in model development and validation. That is table stakes; however, what makes one candidate stand out above the others is everything else surrounding these core concepts.

 

Also see:

Image from Ronald van Loon
© 2016 Learning Ecosystems