From DSC:
For the last few years, I’ve been thinking that we need to make learning science-related information more accessible to students, teachers, professors, trainers, and employees — no matter what level they are at.

One idea on how to do this — besides putting posters up in hallways, libraries, classrooms, conference rooms, cafeterias, etc. — is to put a “How best to study/learn” link in all of the global navigation bars and/or course navigation bars in organizations’ course management systems and learning management systems. Learners of all ages would then have easy, instant, 24 x 7 x 365 access to guidance on how to be more productive as they study and learn new things.

For example, they could select that link in their CMS/LMS to access information on:

  • Retrieval practice
  • Spacing
  • Interleaving
  • Metacognition
  • Elaboration
  • The Growth Mindset
  • Accessibility-related tools / assistive technologies
  • Links to further resources re: learning science and learning theories

What do you think? If we started this in K12, kept it up in higher ed and vocational programs, and took the idea into the corporate world, valuable information could be relayed and absorbed. This is the kind of information that is highly beneficial these days — as all of us need to be lifelong learners now.

 

We need to use more tools — that go beyond screen sharing — where we can collaborate regardless of where we’re at. [Christian]

From DSC:
Seeing the functionality in Freehand makes me once again think that we need to use more tools with which faculty/staff/students can collaborate REGARDLESS of where they join a learning experience (i.e., remotely or physically/locally). This is also true for trainers and employees, teachers and students, and virtual tutoring types of situations. We need tools with functionalities that go beyond screen sharing in order to collaborate, design, present, discuss, and create things. (more…)

 

The AR Roundup: March 2022 — from linkedin.com by Tom Emrich

Excerpt:

Every month I round up what you may have missed in Augmented Reality, including the latest stats, funding news, launch announcements, and more. Here is what happened in augmented reality between March 1-31, 2022.

“The metaverse is no longer a single virtual world or even a cluster of virtual worlds. It’s the entire system of virtual and augmented worlds,” Chalmers tells me over Zoom. “Where the old metaverse was like a platform on the internet, the new metaverse is more like the internet as a whole, just the immersive internet.”

~ David Chalmers, Philosopher and Author of Reality+

 

 

The Metaverse Will Radically Change Content Creation Forever — from forbes.com by Falon Fatemi

Excerpt:

Although the metaverse promises to touch nearly every person in our society, there’s one demographic that will almost certainly see disproportionately strong disruption: creators. The metaverse has the potential to fundamentally disrupt the content creation process.

The metaverse is slated to help creators make more interactive and immersive content, thanks in large part to advances in VR and AR. The stakes will be raised as creators will be expected to build more immersive and interactive content than ever before.

Also related/see:

The Amazing Possibilities Of Healthcare In The Metaverse — from forbes.com by Bernard Marr

Excerpts:

What’s generally agreed on, however, is that it’s effectively the next version of the internet – one that will take advantage of artificial intelligence (AI), augmented reality (AR), virtual reality (VR), and ever-increasing connectivity (for example, 5G networks) to create online environments that are more immersive, experiential and interactive than what we have today.

The metaverse involves the convergence of three major technological trends, which all have the potential to impact healthcare individually. Together, though, they could create entirely new channels for delivering care that have the potential to lower costs and vastly improve patient outcomes. These are telepresence (allowing people to be together virtually, even while we’re apart physically), digital twinning, and blockchain (and its ability to let us create a distributed internet).

From DSC:
That last paragraph could likely apply to our future learning ecosystems as well. Lower costs. A greater sense of presence. Getting paid for one’s teaching…then going to learn something new and paying someone else for that new training/education.

 

From DSC:
After checking out the following two links, I created the graphic below:

  1. Readability initiative > Better reading for all. — from Adobe.com
    We’re working with educators, nonprofits, and technologists to help people of all ages and abilities read better by personalizing the reading experience on digital devices.
  2. The Readability Consortium > About page

 


What if your preferred font style, spacing, leading, etc. could travel with you from site to site? Or perhaps future AR glasses will be able to convert the text that we are looking at for us.
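To make the idea a bit more concrete, here’s a minimal sketch of how it might work. This is purely illustrative — the profile fields and CSS variable names below are hypothetical assumptions, not part of any existing standard from Adobe or The Readability Consortium. The idea: a small, portable JSON profile of reading preferences that a participating site translates into CSS custom properties for its body text.

```python
import json

# Hypothetical portable reading-preferences profile. The field names here
# are illustrative only -- no such standard currently exists.
profile_json = '''{
    "font_family": "Atkinson Hyperlegible",
    "font_size_px": 18,
    "line_height": 1.8,
    "letter_spacing_em": 0.05
}'''

def profile_to_css(profile):
    """Translate a reading-preferences profile into CSS custom properties
    that a participating site could apply to its body text."""
    return (
        ":root {\n"
        f"  --reader-font-family: \"{profile['font_family']}\";\n"
        f"  --reader-font-size: {profile['font_size_px']}px;\n"
        f"  --reader-line-height: {profile['line_height']};\n"
        f"  --reader-letter-spacing: {profile['letter_spacing_em']}em;\n"
        "}"
    )

print(profile_to_css(json.loads(profile_json)))
```

A browser extension, identity service, or (someday) AR glasses could carry such a profile with you and inject the generated properties wherever you read.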



ELC 070: Conversation Design for the Voice User Interface — from theelearningcoach.com (ELC) by Connie Malamed
A Conversation with Myra Roldan

Excerpt (emphasis DSC):

Do you wonder what learning experience designers will be doing in the future? I think one area where we will need to upskill is in conversation design. Think of the possibilities that chatbots and voice interfaces will provide for accessing information for learning and for support in the flow of work. In this episode, I speak with Myra Roldan about conversation design for the voice user interface (VUI). We discuss what makes an effective conversation and the technologies for getting started with voice user interface design.

 

Cisco and Google join forces to transform the future of hybrid work — from blog.webex.com by Kedar Ganta


Excerpt:

Webex [on 12/7/21] announced the public preview of its native meeting experience for Glass Enterprise Edition 2 (Glass), a lightweight wearable device with a transparent display developed by Google. Webex Expert on Demand on Glass provides an immersive collaboration experience that supports natural voice commands, gestures on the touchpad, and head movements to accomplish routine tasks.

 

 

Could AR and/or VR enable a massive 3D-based type of “VoiceThread”? [Christian]

From DSC:
What if we could quickly submit items for a group to discuss, annotate, and respond to — using whichever media format is available/preferable for a person — like a massive 3D-based VoiceThread? What if this type of discussion could be contributed to and accessed via Augmented Reality (AR) and/or Virtual Reality (VR) types of devices?

It could be a new 3D format that a person could essentially blow up to the size of a billboard. Think “Honey, I Shrunk the Kids” type of stuff.

Input devices might include:

  • Augmented Reality (AR) glasses
  • Virtual Reality (VR) headsets/glasses
  • Scanners
  • Smartphones
  • Tablets
  • Desktops and laptops
  • SmartTVs
  • Other types of input devices

For example, a person could take a picture of a document or something else and then save that image in a new, vector-based file format. A vector-based format would allow the image to be enlarged to the size of a billboard without losing any resolution (i.e., it wouldn’t become grainy; the image would remain crystal clear regardless of how big it gets).

Other thoughts here:

  • The files could be accessible online for attendees of classes or for audiences of presentations/webinars
  • The files could be displayed on the walls of learning/presentation spaces for marking them up
  • One could manipulate the 3D image if that person was using a virtual/immersive environment
  • Users should be able to annotate on those images and/or be able to save such annotations and notes

A question for phase II:
Could this concept also be used if virtual courts take off?

Hmmmm…just thinking out loud.

 

Tools for Building Branching Scenarios — from christytuckerlearning.com by Christy Tucker
When would you use Twine, Storyline, Rise, or other tools for building branching scenarios? It depends on the project and goals.

Excerpt:

When would you use Twine instead of Storyline or other tools for building branching scenarios? An attendee at one of my recent presentations asked me why I’d bother creating something in Twine rather than just storyboarding directly in Storyline, especially if I was using character images. Whether I would use Twine, Storyline, Rise, or something else depends on the project and the goals.

 

10 Best Accessibility Tools For Designers — from hongkiat.com by Hongkiat Lim

Excerpt:

Today’s world is one of inclusive technology – websites, apps, and tech gadgets that are made for people with different kinds of abilities and disabilities. So when you’re designing a website, you include features that make your design accessible to as many people as possible. And this is where accessibility tools come into play.

Instead of creating everything from scratch, here’s a list of cool accessibility tools for designers. From creating color combinations according to WCAG standards to adding different reading modes to your website, these tools are must-haves for every designer. Take a look at the list to learn about each tool in detail.

 

ARHT Media Inc.
Access The Power Of HoloPresence | Hologram Technology | Holographic Displays | Hologram Events

Excerpt:

ARHT Media mounted a holographic display at the event in Vancouver and had Sun Life’s executive captured and transmitted live as a hologram to the event from our Toronto studio. He was able to see the audience and interact with them in real time as if he were attending the event and present in the room.

ARHT Media Inc. launches the Holopod | A hologram of the head of global sales appears on stage.

From DSC:

  • Will holographic displays change what we mean by web-based collaboration?
  • Will this be a part of the future learning ecosystems inside of higher education? Inside of the corporate training world? Inside the world of events and webinars?
  • How will this type of emerging technology impact our communications? Levels of engagement?
  • Will this type of thing impact telehealth? Telelegal?
  • How will this impact storytelling? Media? Drama/acting? Games?
  • Will the price come down to where online and blended learning will use this type of thing?
  • Will streams of content be offered by holographic displays?

 

 

Learning from the Living [Class] Room: Adobe — via Behance — is already doing several pieces of this vision.

From DSC:
Talk about streams of content! Whew!

Streams of content

I received an email from Adobe titled “This week on Adobe Live: Graphic Design.” (I subscribe to Adobe Creative Cloud.) Inside the email, I saw and clicked on the following:

Below are some of the screenshots I took of this incredible service! Wow!

 

Adobe -- via Behance -- offers some serious streams of content

 


 


From DSC:
So Adobe — via Behance — is already doing several pieces of the “Learning from the Living [Class] Room” vision. I knew of Behance…but I didn’t realize the magnitude of what they’ve been working on and what they’re currently delivering. Very sharp indeed!

Churches are doing this as well — one device shows the presenter/preacher (such as a larger “TV”), while a second device is used for attendees to communicate with each other in real time.


 

 

When the Animated Bunny in the TV Show Listens for Kids’ Answers — and Answers Back — from edsurge.com by Rebecca Koenig

Excerpt:

Yet when this rabbit asks the audience, say, how to make a substance in a bottle less goopy, she’s actually listening for their answers. Or rather, an artificially intelligent tool is listening. And based on what it hears from a viewer, it tailors how the rabbit replies.

“Elinor can understand the child’s response and then make a contingent response to that,” says Mark Warschauer, professor of education at the University of California at Irvine and director of its Digital Learning Lab.

AI is coming to early childhood education. Researchers like Warschauer are studying whether and how conversational agent technology—the kind that powers smart speakers such as Alexa and Siri—can enhance the learning benefits young kids receive from hearing stories read aloud and from watching videos.

From DSC:
Looking at the above excerpt…what does this mean for elearning developers, learning engineers, learning experience designers, instructional designers, trainers, and more? It seems that such folks will soon need to learn how to use several new tools that are showing up on the horizon.

 

Could AI-based techs be used to develop a “table of contents” for the key points within lectures, lessons, training sessions, sermons, & podcasts? [Christian]

From DSC:
As we move into 2021, the blistering pace of emerging technologies will likely continue. Technologies such as:

  • Artificial Intelligence (AI) — including technologies related to voice recognition
  • Blockchain
  • Augmented Reality (AR)/Mixed Reality (MR)/Virtual Reality (VR) and/or other forms of Extended Reality (XR)
  • Robotics
  • Machine-to-Machine Communications (M2M) / The Internet of Things (IoT)
  • Drones

These and other technologies will likely make their way into how we do many things (for better or for worse).

Along the positive lines of this topic, I’ve been reflecting upon how we might be able to use AI in our learning experiences.

For example, when teaching in face-to-face-based classrooms — and when a lecture recording app like Panopto is being used — could teachers/professors/trainers audibly “insert” main points along the way? Similar to what we do with Siri, Alexa, and other personal assistants (“Hey Siri, _____” or “Alexa, _____”).

Like an audible version of HTML -- using the spoken word to insert the main points of a presentation or lecture

(Image purchased from iStockphoto)

.

Pretend a lecture, lesson, or a training session is moving right along. Then the professor, teacher, or trainer says:

  • “Hey Smart Classroom, Begin Main Point.”
  • Then speaks one of the main points.
  • Then says, “Hey Smart Classroom, End Main Point.”

Like a verbal version of an HTML tag.

After the recording is done, the AI could locate and call out those “main points” — and create a table of contents for that lecture, lesson, training session, or presentation.

(Alternatively, one could insert a chime/bell/some other sound that the AI scans through later to build the table of contents.)
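To make the idea concrete, here’s a minimal sketch (in Python) of how software could scan a timestamped transcript for those spoken markers and build a table of contents. The transcript format and the marker phrases are my assumptions for illustration — not any vendor’s actual API.

```python
def build_toc(transcript):
    """Build a table of contents from (timestamp, text) transcript lines,
    using spoken "begin/end main point" markers to delimit each entry."""
    toc = []
    current = None  # (start_timestamp, collected_lines) while inside a marker pair
    for timestamp, text in transcript:
        lowered = text.lower()
        if "begin main point" in lowered:
            current = (timestamp, [])             # marker heard: start a new entry
        elif "end main point" in lowered and current is not None:
            start, lines = current
            toc.append((start, " ".join(lines)))  # marker heard: close the entry
            current = None
        elif current is not None:
            current[1].append(text)               # speech between the markers
    return toc

# A toy transcript, as speech-to-text output might look:
transcript = [
    ("00:03:58", "Let's keep going with our discussion of study strategies."),
    ("00:04:10", "Hey Smart Classroom, begin main point."),
    ("00:04:12", "Spaced retrieval practice beats cramming."),
    ("00:04:15", "Hey Smart Classroom, end main point."),
]
print(build_toc(transcript))
```

Each entry pairs the timestamp where a main point began with the speech captured between the markers, so a viewer could click a table-of-contents line and jump straight to that moment in the recording. The chime/bell variation would work the same way, just keying off a detected sound instead of a phrase.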

In the digital realm — say when recording something via Zoom, Cisco Webex, Teams, or another application — the same thing could apply. 

Wouldn’t this be great for quickly scanning podcasts for the main points? Or for quickly scanning presentations and webinars for the main points?

Anyway, interesting times lie ahead!

 

 
© 2022 | Daniel Christian