Learning experience designs of the future!!! [Christian]

From DSC:
The article below got me to thinking about designing learning experiences and what our learning experiences might be like in the future — especially after we start pouring much more of our innovative thinking, creativity, funding, entrepreneurship, and new R&D into technology-supported/enabled learning experiences.


LMS vs. LXP: How and why they are different — from blog.commlabindia.com by Payal Dixit
LXPs are a rising trend in the L&D market. But will they replace LMSs soon? What do they offer more than an LMS? Learn more about LMS vs. LXP in this blog.

Excerpt (emphasis DSC):

Building on the foundation of the LMS, the LXP curates and aggregates content, creates learning paths, and provides personalized learning resources.

Here are some of the key capabilities of LXPs. They:

  • Offer content in a Netflix-like interface, with suggestions and AI recommendations
  • Can host any form of content – blogs, videos, eLearning courses, and audio podcasts to name a few
  • Offer automated learning paths that lead to logical outcomes
  • Support true uncensored social learning opportunities

So, this is about the LXP and what it offers; let’s now delve into the characteristics that differentiate it from the good old LMS.


From DSC:
Entities throughout the learning spectrum are going through many changes right now (i.e., people and organizations throughout K-12, higher education, vocational schools, and corporate training/L&D). If the first round of the Coronavirus continues to impact us, and then a second round comes later this year or early next year, I can easily see massive investments and interest in learning-related innovations. Too many people and organizations will have a stake in such innovations for that not to happen.

I highlighted the bulleted points above because they are some of the components/features of the Learning from the Living [Class] Room vision that I’ve been working on.

Below are some technologies, visuals, and ideas to supplement my reflections. They might stir the imagination of someone out there who, like me, desires to make a contribution — and who wants to make learning more accessible, personalized, fun, and engaging. Hopefully, future generations will be able to have more choice, more control over their learning — throughout their lifetimes — as they pursue their passions.

Learning from the living class room

In the future, we may be using MR to walk around data and to better visualize data


AR and VR -- the future of healthcare


How innovations in voice technology are reshaping education — from edsurge.com by Diana Lee
Voice is the most accessible form you can think of when you think about any interface. In education, it’s already started to take off.

It could be basic questions about, “Am I taking a class to become X?” or “How strong are my skills relative to other people?” An assistant can help with that. It could potentially be a coach, something that follows you the rest of your life for education. I’m excited about that. People that can’t normally get access to this kind of information will get access to it. That’s the future.

From DSC:
The use of voice will likely be a piece of a next-generation learning platform.

Voice will likely be a piece of the next generation learning platform

 

XR for Teaching and Learning — from educause

Key Findings

  • XR technologies are being used to achieve learning goals across domains.
  • Effective pedagogical uses of XR technologies fall into one of three large categories: (1) Supporting skills-based and competency-based teaching and learning, such as nursing education, where students gain practice by repeating tasks. (2) Expanding the range of activities with which a learner can gain hands-on experience—for example, by enabling the user to interact with electrons and electromagnetic fields. In this way, XR enables some subjects traditionally taught as abstract knowledge, using flat media such as illustrations or videos, to be taught as skills-based. (3) Experimenting by providing new functionality and enabling new forms of interaction. For example, by using simulations of materials or tools not easily available in the physical world, learners can explore the bounds of what is possible in both their discipline and with the XR technology itself.
  • Integration of XR into curricula faces two major challenges: time and skills.
  • The adoption of XR in teaching has two major requirements: the technology must fit into instructors’ existing practices, and the cost cannot be significantly higher than that of the alternatives already in use.
  • The effectiveness of XR technologies for achieving learning goals is influenced by several factors: fidelity, ease of use, novelty, time-on-task, and the spirit of experimentation.

XR for Teaching and Learning

 

Are smart cities the pathway to blockchain and cryptocurrency adoption? — from forbes.com by Chrissa McFarlane

Excerpts:

At the recent Blockchain LIVE 2019 hosted annually in London, I had the pleasure of giving a talk on Next Generation Infrastructure: Building a Future for Smart Cities. What exactly is a “smart city?” The term refers to an overall blueprint for city designs of the future. Already half the world’s population lives in a city, a share expected to grow to sixty-five percent in the next five years. Tackling that growth takes more than just simple urban planning. The goal of smart cities is to incorporate technology as an infrastructure to alleviate many of these complexities. Green energy, forms of transportation, water and pollution management, universal identification (ID), wireless Internet systems, and promotion of local commerce are examples of current smart city initiatives.

What’s most important to a smart city, however, is integration. None of the services mentioned above exist in a vacuum; they need to be put into a single system. Blockchain provides the technology to unite them into a single system that can track all aspects combined.

 

From DSC:
There are many examples of the efforts/goals of creating smart cities (throughout the globe) in the above article. Also see the article below.

 

Accessibility and Usability Resource site from Quality Matters

 

Meet AURS — Your go-to resource for addressing accessibility challenges — from wcetfrontiers.org and Quality Matters

Excerpt:

Accessibility is not only one of the main areas of focus for WCET, but a consistent issue and opportunity for higher education institutions. In order to support faculty, instructional designers, and others who work in the area, Quality Matters, a WCET member, created a new resource site for educators to get information on how to address key accessibility and usability concerns. Today’s post introduces the new website, AURS, and reviews the development process for the site and the resources.

 

 

Top eLearning Gamification Companies 2019 — from elearningindustry.com by Christopher Pappas

Excerpt:

The Top Performing eLearning Gamification Companies 2019
As community leaders, here at eLearning Industry, we have evaluated hundreds of eLearning content development companies in the past. As we are constantly on the lookout for new advancements and trends in the eLearning field that are relevant to the industry, we decided to take a very close look at outstanding providers of gamification. We have focused on prestige, influence, application of gamification tools, activity in the eLearning field, gamification innovations, and many more subcategories.

For the list of the Top eLearning Gamification Companies 2019, we chose and ranked the best gamification companies based on the following 7 criteria:

  • Gamification eLearning quality
  • Customer reviews
  • eLearning expertise
  • Niche specialization on gamification
  • Gamification industry innovation
  • Company’s social responsibility
  • Gamification features and capabilities
 

Per Jacob Strom at HeraldPR.com:

KreatAR, a subsidiary of The Glimpse Group, is helping change the way students and teachers use augmented reality: its PostReality tool makes learning with poster boards more interactive.


From DSC:
Our family uses AT&T for our smartphones and for our Internet access. What I would really like from AT&T is the ability to speak to an app (either on a smartphone, or via routers that morph into Alexa-type devices) and tell my router what I want it to do:

“Turn off Internet access tonight from 9pm until 6am tomorrow morning.”
“Only allow Internet access for parents’ accounts.”
“Upgrade my bandwidth for the next 2 hours.”

Upon startup, the app would ask whether I wanted to set up any “admin” accounts and, if so, would recognize that voice (or those voices) as having authority and control over the device.

Would you use this type of interface? I know I would!
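
For illustration, here is a minimal sketch of how such an interface might map spoken sentences to router actions. There is no real AT&T voice API behind this; the intent patterns, action names, and field names below are all hypothetical:

```python
import re

# Hypothetical intent table for the spoken commands above. parse() simply
# turns an utterance into a structured action that a router back end
# (not modeled here) could then carry out.
INTENTS = [
    (r"turn off internet access .* from (\S+) until (\S+)",
     lambda m: {"action": "block_all", "start": m.group(1), "end": m.group(2)}),
    (r"only allow internet access for (.+?) accounts",
     lambda m: {"action": "allow_only", "accounts": m.group(1)}),
    (r"upgrade my bandwidth for the next (\d+) hours?",
     lambda m: {"action": "boost", "hours": int(m.group(1))}),
]

def parse(utterance):
    """Map one spoken sentence to a router action, or None if unrecognized."""
    for pattern, build in INTENTS:
        m = re.search(pattern, utterance, re.IGNORECASE)
        if m:
            return build(m)
    return None
```

In practice the speech-to-text step would be handled by the voice platform itself; the point is that each spoken sentence reduces to a small, structured action the router can act on.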

P.S. I’d like to be able to speak to our thermostat in that sort of way as well.

 

From DSC:
I have often reflected on differentiation, or what some call personalized learning and/or customized learning. How does a busy teacher, instructor, professor, or trainer realistically achieve this?

It’s very difficult and time-consuming to do, for sure. It also requires a team of specialists to achieve such a holy grail of learning, because one person can’t know it all. That is, one educator doesn’t have the necessary time, skills, or knowledge to address so many different learning needs and levels!

  • Think of different cognitive capabilities — from students that have special learning needs and challenges to gifted students
  • Or learners that have different physical capabilities or restrictions
  • Or learners that have different backgrounds and/or levels of prior knowledge
  • Etc., etc., etc.

Educators and trainers have so many things on their plates that it’s very difficult to come up with _X_ lesson plans/agendas/personalized approaches. On the other side of the table, how do students from a vast array of backgrounds and cognitive skill levels get the main points of a chapter or piece of text? How can they self-select the level of difficulty, and/or start at a “basics” level and work their way up to harder, more detailed levels if they can cognitively handle that detail and complexity? Conversely, how do I as a learner get the boiled-down version of a piece of text?

Well… just as with the flipped classroom approach, I’d like to suggest that we flip things a bit and enlist teams of specialists at the publishers to fulfill this need. Move things to the content creation end — not so much at the delivery end of things. Publishers’ teams could play a significant, hugely helpful role in providing customized learning to learners.

Some of the ways that this could happen:

Use an HTML-like markup language when writing a textbook, such as:

<MainPoint> The text for the main point here. </MainPoint>

<SubPoint1>The text for the subpoint 1 here.</SubPoint1>

<DetailsSubPoint1>More detailed information for subpoint 1 here.</DetailsSubPoint1>

<SubPoint2>The text for the subpoint 2 here.</SubPoint2>

<DetailsSubPoint2>More detailed information for subpoint 2 here.</DetailsSubPoint2>

<SubPoint3>The text for the subpoint 3 here.</SubPoint3>

<DetailsSubPoint3>More detailed information for subpoint 3 here.</DetailsSubPoint3>

<SummaryOfMainPoints>A list of the main points that a learner should walk away with.</SummaryOfMainPoints>

<BasicsOfMainPoints>Here is a listing of the main points, but put in alternative words and more basic ways of expressing those main points. </BasicsOfMainPoints>

<Conclusion> The text for the concluding comments here.</Conclusion>

 

<BasicsOfMainPoints> could instead be called <AlternativeExplanations>.
Bottom line: this tag would put things forth in very straightforward terms.

Another tag could address how the topic/chapter is relevant:
<RealWorldApplication>This short paragraph should illustrate real-world examples of this particular topic. Why does this topic matter? How is it relevant?</RealWorldApplication>

 

On the students’ end, they could use an app that works with such tags to allow a learner to quickly see/review the different layers. That is:

  • Show me just the main points
  • Then add on the sub points
  • Then fill in the details
    OR
  • Just give me the basics via alternative ways of expressing these things. I won’t remember all the details; put things in easy-to-understand wording/ideas.
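
To make this concrete, here is a minimal sketch of how such a reader app might filter tagged content. It assumes content marked up with the hypothetical tags above; the sample chapter text, the layer numbering, and the function names are all illustrative:

```python
import xml.etree.ElementTree as ET

# A hypothetical chapter marked up with the layered tags sketched above.
CHAPTER = """<Chapter>
  <MainPoint>Tennis strokes fall into two families: forehands and backhands.</MainPoint>
  <SubPoint1>The grip largely determines a stroke's spin and power.</SubPoint1>
  <DetailsSubPoint1>An Eastern grip favors flat drives; a Western grip adds topspin.</DetailsSubPoint1>
  <BasicsOfMainPoints>Tennis is a sport; players hit the ball two basic ways.</BasicsOfMainPoints>
</Chapter>"""

# Each family of tags gets a "layer" number; the reader chooses how deep to go.
LAYERS = {"MainPoint": 1, "SubPoint": 2, "DetailsSubPoint": 3}

def layer_of(tag):
    """Map e.g. 'SubPoint2' to layer 2 by stripping the trailing digits."""
    return LAYERS.get(tag.rstrip("0123456789"))

def render(xml_text, depth):
    """Return only the passages whose layer is <= depth (1 = main points only)."""
    root = ET.fromstring(xml_text)
    return [el.text.strip() for el in root
            if layer_of(el.tag) is not None and layer_of(el.tag) <= depth]

def basics(xml_text):
    """The 'alternative explanations' view for learners who want plain wording."""
    root = ET.fromstring(xml_text)
    return [el.text.strip() for el in root if el.tag == "BasicsOfMainPoints"]
```

A learner tapping “main points only” would get `render(chapter, 1)`; “add the sub points” is `render(chapter, 2)`; and the alternative-wording view is `basics(chapter)`. The collapse/expand behavior is just a change of `depth`.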

 

It’s like the layers of a Microsoft HoloLens app of the human anatomy:

 

Or it’s like different layers of a chapter of a “textbook” — so a learner could quickly collapse/expand the text as needed:

 

This approach could be helpful at all kinds of learning levels. For example, it could be very helpful for law school students to obtain outlines for cases or for chapters of information. Similarly, it could be helpful for dental or medical school students to get the main points as well as detailed information.

Also, as Artificial Intelligence (AI) grows, the system could check a learner’s cloud-based learner profile to see their reading level, prior knowledge, any IEPs on file, their learning preferences (audio, video, animations, etc.), and so on, to further provide a personalized/customized learning experience.

To recap:

  • “Textbooks” continue to be created by teams of specialists, but add specialists with knowledge of students with special needs as well as for gifted students. For example, a team could have experts within the field of Special Education to help create one of the overlays/or filters/lenses — i.e., to reword things. If the text was talking about how to hit a backhand or a forehand, the alternative text layer could be summed up to say that tennis is a sport…and that a sport is something people play. On the other end of the spectrum, the text could dive deeply into the various grips a person could use to hit a forehand or backhand.
  • This puts the power of offering differentiation at the point of content creation/development. (Differentiation could also be provided at the delivery end, but again, the time and expertise are likely not going to be there.)
  • Publishers create “overlays” or various layers that can be turned on or off by the learners
  • Can see whole chapters or can see main ideas, topic sentences, and/or details. Like HTML tags for web pages.
  • Can instantly collapse chapters to main ideas/outlines.


Gartner: Immersive experiences among top tech trends for 2019 — from campustechnology.com by Dian Schaffhauser

Excerpt:

IT analyst firm Gartner has named its top 10 trends for 2019, and the “immersive user experience” is on the list, alongside blockchain, quantum computing and seven other drivers influencing how we interact with the world. The annual trend list covers breakout tech with broad impact and tech that could reach a tipping point in the near future.


Microsoft’s AI-powered Sketch2Code builds websites and apps from drawings — from alphr.com by Bobby Hellard
Released on GitHub, Microsoft’s AI-powered developer tool can shave hours off web and app building

Excerpt:

Microsoft has developed an AI-powered web design tool capable of turning sketches of websites into functional HTML code.

Called Sketch2Code, the tool aims, in the words of Microsoft AI senior product manager Tara Shankar Jana, to “empower every developer and every organisation to do more with AI”. It was born out of the “intrinsic” problem of sending a picture of a wireframe or app design from a whiteboard or paper to a designer to create HTML prototypes.


From DSC:
After seeing the article entitled, “Scientists Are Turning Alexa into an Automated Lab Helper,” I began to wonder…might Alexa be a tool to periodically schedule & provide practice tests & distributed practice on content? In the future, will there be “learning bots” that a learner can employ to do such self-testing and/or distributed practice?
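
To make the idea concrete, here is a minimal sketch of the scheduling logic such a learning bot might use: a simple Leitner-style spaced-repetition scheme, in which each correct answer pushes an item into a box with a longer review interval. This is illustrative only, not an actual Alexa skill; the five-box scheme, interval values, and class names are assumptions:

```python
from datetime import date, timedelta

# Box number -> days until the next review. Both the five-box scheme and
# the interval values are assumptions, chosen only to illustrate the idea.
INTERVALS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}

class Card:
    """One practice question the bot can quiz a learner on."""
    def __init__(self, question, answer):
        self.question, self.answer = question, answer
        self.box = 1                  # new cards start in the most frequent box
        self.due = date.today()

    def review(self, correct, today=None):
        """Move up a box on success, back to box 1 on failure; reschedule."""
        today = today or date.today()
        self.box = min(self.box + 1, 5) if correct else 1
        self.due = today + timedelta(days=INTERVALS[self.box])

def due_today(cards, today=None):
    """The practice test the bot would read aloud in today's session."""
    today = today or date.today()
    return [c for c in cards if c.due <= today]
```

A voice assistant layered on top would simply read out `due_today(cards)` each session, listen for the learner’s answers, and call `review()` on each card, yielding exactly the kind of distributed practice the research on retrieval practice recommends.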

 

 

From page 45 of the PDF available here:

 

Might Alexa be a tool to periodically schedule/provide practice tests & distributed practice on content?


Scientists Are Turning Alexa into an Automated Lab Helper — from technologyreview.com by Jamie Condliffe
Amazon’s voice-activated assistant follows a rich tradition of researchers using consumer tech in unintended ways to further their work.

Excerpt:

Alexa, what’s the next step in my titration?

Probably not the first question you ask your smart assistant in the morning, but potentially the kind of query that scientists may soon be leveling at Amazon’s AI helper. Chemical & Engineering News reports that software developer James Rhodes—whose wife, DeLacy Rhodes, is a microbiologist—has created a skill for Alexa called Helix that lends a helping hand around the laboratory.

It makes sense. While most people might ask Alexa to check the news headlines, play music, or set a timer because our hands are a mess from cooking, scientists could look up melting points, pose simple calculations, or ask for an experimental procedure to be read aloud while their hands are gloved and in use.

For now, Helix is still a proof-of-concept. But you can sign up to try an early working version, and Rhodes has plans to extend its abilities…

 

Also see:

Helix


© 2020 | Daniel Christian