What Will Online Learning Look Like in 10 Years? Zoom Has Some Ideas — from edsurge.com by Stephen Noonoo

Excerpt:

This week at Zoom’s annual conference, Zoomtopia, a trio of education-focused Zoom employees (er, Zoomers?) speculated wildly about what hybrid Zoom learning might look like 10 years from now, given the warp speed advances in artificial intelligence and machine learning expected. Below are highlights of their grandiose, if sometimes vague, vision for the future of learning on Zoom.

Zoom very much sees itself as one day innovating on personalized learning in a substantial way, although beyond breakout rooms and instant translation services, they have few concrete ideas in mind. Mostly, the company says it will be working to add more choices to how teachers can present materials and how students can display mastery to teachers in realtime. They’re bullish on Kahoot-like gamification features and new ways of assessing students, too.

Also see:

An Eighth Grader Was Tired of Being Late to Zoom School. So He Made an App for That. — from edsurge.com by Nadia Tamez-Robledo

“I could not find anything else that exists like this to automatically join meetings at the right times,” says Seth, a high school freshman based in Walnut Creek, Calif. “Reminders are just really easy to ignore. I’ll get a notification maybe five minutes before my meeting, and it’ll just sit there and not do anything. [LinkJoin] interrupts whatever you’re doing and says, ‘Join this meeting. In fact it’s already opening, so better get on it.’”


Google Earth

Google Earth Lesson Plan — from techlearning.com by Stephanie Smith Budhai

Excerpt:

The 3D interactive online exploration platform Google Earth provides a pathway to endless learning adventures around the globe. For an overview of Google Earth and a breakdown of its unique features, check out How to Use Google Earth for Teaching.


Graphic of digital audio for the article entitled An Edtech User’s Glossary to Speech Recognition and AI in the Classroom

An Edtech User’s Glossary to Speech Recognition and AI in the Classroom — from edsurge.com by Thomas C. Murray

Per Thomas Murray:

Recently, I collaborated with SoapBox Labs’ Amelia Kelly, the vice president of speech technology there, to create a glossary to help educators and edtech developers better familiarize themselves with speech recognition and make informed decisions about its use in educational settings. Below are some of the key terms that are particularly important, along with an explanation for why those terms matter.


When Should You Use Branching Video Scenarios for eLearning? — from learningsolutionsmag.com by Bill Brandon

Excerpt:

Among the many changes today in the way we think about learning and training is the shift from knowledge transfer to skill development. Scenario-based learning (SBL) and the inclusion of practice with feedback are often overlooked but in many cases more effective approaches to the development of skill and competence.

What’s a scenario?
A scenario is a type of story; it presents learners with a situation in a way that engages them and places them in the situation. Scenarios are a methodology for quickly creating and delivering content to an audience based on needs and feedback. Scenarios are closely related to microlearning, and in fact some microlearning employs short scenarios as the main method of delivery. Learners are able to make decisions, solve problems, apply knowledge, and practice skills. The scenario presents challenges like the ones the learners will face in real-life situations.

The story is important! In his book Scenario-based Learning: Using Stories to Engage Learners, Ray Jimenez says, “The design of scenario-based training requires the craftsmanship of a storyteller, an instructional designer, and a subject matter expert.” 

 

Video Lectures: 4 Tips for Teachers — from techlearning.com by Erik Ofgang
Creating short and engaging video lectures for students is a growing trend at education institutions

Excerpt:

To encourage a more professional type of evergreen video resource, the institution has invested in lecture capture studios, adding five new ones over the past year so each campus has at least one. Some of these studios are DIY, others require a crew, but all enable professors to record lectures in a professional recording environment, complete with green screens and high-quality lighting and audio. The recordings are then edited by the studio team who can help the professor follow the best pedagogical practices for video recordings, including keeping videos short and engaging.

Picture of a recording studio setup to record a professor at IUPUI

From DSC:
A great example of using teams to create higher-quality, engaging, interactive learning content. 

 

Also see:

Adorama Business Solutions Equips New Classroom Studio for West Coast Baptist College Creative Arts Department — from svconline.com
Workspace Allows Department to Expand Video Production and Editing Course Offerings

Picture of a new classroom studio within a learning space


 

Making VR a Reality in the Classroom — from er.educause.edu by Cat Flynn and Peter Frost
Faculty and staff at Southern New Hampshire University piloted virtual reality in an undergraduate psychology course to see if it can be an effective pedagogical tool.

Excerpt:

Meeting the Learning Needs of Gen Z and Beyond
While this study was conducted with current SNHU undergraduates, our team aimed to understand the implications of immersive learning for both today’s students and future learners.

Given Gen Z’s documented love for gaming and their desire for higher education to equip them with problem-solving and practical skills, VR provides a confluence of experiential learning and engagement.

From DSC:
Cost and COVID-19 are major issues here, but this is an interesting article nonetheless.

I think Virtual Reality (VR), Mixed Reality (MR), and Augmented Reality (AR) will play a significant role in the future of how we learn. It may take us some time to get there, but I believe that we will.

 

GPT-3: We’re at the very beginning of a new app ecosystem — from venturebeat.com by Dattaraj Rao

From DSC: NLP=Natural Language Processing (i.e., think voice-driven interfaces/interactivity).

Excerpt:

Despite the hype, questions persist as to whether GPT-3 will be the bedrock upon which an NLP application ecosystem will rest or if newer, stronger NLP models will knock it off its throne. As enterprises begin to imagine and engineer NLP applications, here’s what they should know about GPT-3 and its potential ecosystem.

 

Learning from the Living [Class] Room: Adobe — via Behance — is already doing several pieces of this vision.

From DSC:
Talk about streams of content! Whew!

Streams of content

I received an email from Adobe that was entitled, “This week on Adobe Live: Graphic Design.”  (I subscribe to their Adobe Creative Cloud.) Inside the email, I saw and clicked on the following:

Below are some of the screenshots I took of this incredible service! Wow!

Adobe -- via Behance -- offers some serious streams of content


From DSC:
So Adobe — via Behance — is already doing several pieces of the “Learning from the Living [Class] Room” vision. I knew of Behance…but I didn’t realize the magnitude of what they’ve been working on and what they’re currently delivering. Very sharp indeed!

Churches are doing this as well — one device carries the presenter/preacher (such as a larger “TV”), while attendees use a second device to communicate with each other in real time.



When the Animated Bunny in the TV Show Listens for Kids’ Answers — and Answers Back — from edsurge.com by Rebecca Koenig

Excerpt:

Yet when this rabbit asks the audience, say, how to make a substance in a bottle less goopy, she’s actually listening for their answers. Or rather, an artificially intelligent tool is listening. And based on what it hears from a viewer, it tailors how the rabbit replies.

“Elinor can understand the child’s response and then make a contingent response to that,” says Mark Warschauer, professor of education at the University of California at Irvine and director of its Digital Learning Lab.

AI is coming to early childhood education. Researchers like Warschauer are studying whether and how conversational agent technology—the kind that powers smart speakers such as Alexa and Siri—can enhance the learning benefits young kids receive from hearing stories read aloud and from watching videos.

From DSC:
Looking at the above excerpt…what does this mean for elearning developers, learning engineers, learning experience designers, instructional designers, trainers, and more? For such folks, it seems that learning to use several new tools is on the horizon.

 

From DSC:
I was thinking about projecting images, animation, videos, etc. from a device onto a wall for all in the room to see.

  • Will more walls of the future be like those billboards that rotate between two or three different images, with surfaces that can change?

One side of the surface would be more traditional (i.e., standard drywall). The other side of the surface would be designed to be excellent for projecting images onto and/or for use with Augmented Reality (AR), Mixed Reality (MR), and/or Virtual Reality (VR).

Along these lines, here’s another item related to Human-Computer Interaction (HCI):

Mercedes-Benz debuts dashboard that’s one giant touchscreen — from futurism.com

 

Could AI-based techs be used to develop a “table of contents” for the key points within lectures, lessons, training sessions, sermons, & podcasts? [Christian]

From DSC:
As we move into 2021, the blistering pace of emerging technologies will likely continue. Technologies such as:

  • Artificial Intelligence (AI) — including technologies related to voice recognition
  • Blockchain
  • Augmented Reality (AR)/Mixed Reality (MR)/Virtual Reality (VR) and/or other forms of Extended Reality (XR)
  • Robotics
  • Machine-to-Machine Communications (M2M) / The Internet of Things (IoT)
  • Drones
…and other technologies will likely make their way into how we do many things (for better or for worse).

Along the positive lines of this topic, I’ve been reflecting upon how we might be able to use AI in our learning experiences.

For example, when teaching in face-to-face classrooms — and when a lecture recording app like Panopto is being used — could teachers/professors/trainers audibly “insert” main points along the way? Similar to what we do with Siri, Alexa, and other personal assistants (“Hey Siri, _____” or “Alexa, _____”).

Like an audible version of HTML -- using the spoken word to insert the main points of a presentation or lecture

(Image purchased from iStockphoto)


Pretend a lecture, lesson, or a training session is moving right along. Then the professor, teacher, or trainer says:

  • “Hey Smart Classroom, Begin Main Point.”
  • Then speaks one of the main points.
  • Then says, “Hey Smart Classroom, End Main Point.”

Like a verbal version of an HTML tag.

After the recording is done, the AI could locate and call out those “main points” — and create a table of contents for that lecture, lesson, training session, or presentation.

(Alternatively, one could insert a chime/bell/some other sound that the AI scans through later to build the table of contents.)

In the digital realm — say when recording something via Zoom, Cisco Webex, Teams, or another application — the same thing could apply. 

Wouldn’t this be great for quickly scanning podcasts for the main points? Or for quickly scanning presentations and webinars for the main points?
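A minimal sketch of what that post-processing step might look like, assuming the recording tool can hand back a time-stamped transcript. The marker phrases, transcript format, and function names here are all hypothetical — just one way the idea could be wired up:

```python
# Hypothetical sketch: scan a time-stamped transcript for the spoken
# "Begin/End Main Point" markers and build a table of contents from
# whatever was said between them.
MARKER_BEGIN = "begin main point"
MARKER_END = "end main point"

def build_toc(transcript):
    """transcript: list of (seconds, text) tuples in spoken order.
    Returns a list of (start_seconds, main_point_text) entries."""
    toc, current = [], None
    for seconds, text in transcript:
        lowered = text.lower()
        if MARKER_BEGIN in lowered:
            current = {"start": seconds, "lines": []}  # open a main point
        elif MARKER_END in lowered and current is not None:
            toc.append((current["start"], " ".join(current["lines"])))
            current = None                              # close it
        elif current is not None:
            current["lines"].append(text.strip())       # collect the point
    return toc

def format_entry(seconds, summary):
    """Render one table-of-contents line as MM:SS plus the main point."""
    minutes, secs = divmod(int(seconds), 60)
    return f"{minutes:02d}:{secs:02d}  {summary}"
```

The same scan would work for the chime/bell variant — the markers would just be detected in the audio track instead of the transcript text.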

Anyway, interesting times lie ahead!


Temperament-Inclusive Pedagogy: Helping Introverted and Extraverted Students Thrive in a Changing Educational Landscape — from onlinelearningconsortium.org by Mary R. Fry

Excerpt (emphasis DSC):

So how do we take these different approaches to learning into account and foster a classroom environment that is more inclusive of the needs of both extraverts and introverts? Let’s first distinguish between how extraverts and introverts most prefer to learn, and then discuss ways to meet the needs of both. Extraverts tend to learn through active and social engagement with the material (group work, interactive learning experiences, performing and discussing). Verbalizing typically helps extraverts to think through their ideas and to foster new ones. They often think quickly on their feet and welcome working in large groups. It can be challenging for extraverts to generate ideas in isolation (talking through ideas is often needed) and thus working on solitary projects and writing can be challenging.

In contrast, introverts thrive with solitary/independent work and typically need this time to sort through what they are learning before they can formulate their thoughts and articulate their perspectives. Introverted learners often dislike group work (or at least the group sizes and structures that are often used in the classroom (more on this in a moment)) and find their voice drowned out in synchronous discussions as they don’t typically think as fast as their extroverted counterparts and don’t often speak until they feel they have something carefully thought out to share. Introverted learners are often quite content, and can remain attentive, through longer lectures and presentations and prefer engaging with the material in a more interactive way only after a pause or break.

From DSC:
Could/would a next-generation learning platform that has some Artificial Intelligence (AI) features baked into it — working in conjunction with a cloud-based learner profile — be of assistance here?

That is, maybe a learner could self-identify the type of learner that they are: introverted or extroverted. Or perhaps they could use a sliding scale to mix learning activities to a certain degree. Or perhaps, if one wasn’t sure of their preferences, they could ask the AI-backed system to scan how much time they spent doing learning activities X, Y, and Z versus learning activities A, B, and C…then the AI could offer up activities that meet that learner’s preferences.
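As a rough sketch of that “scan the activity logs” idea — the activity categories, function names, and slider blend below are hypothetical, purely to illustrate how such a feature might compute a preference:

```python
# Hypothetical activity categories; a real platform would define these
# from its own analytics taxonomy.
SOLITARY = {"reading", "writing", "self_paced_module"}
SOCIAL = {"group_work", "live_discussion", "pair_share"}

def infer_preference(activity_minutes):
    """activity_minutes: dict of activity name -> minutes logged.

    Returns a score in [0, 1]: 0.0 leans fully solitary (introvert-
    friendly), 1.0 leans fully social (extrovert-friendly), and 0.5
    when there is no signal either way.
    """
    solitary = sum(m for a, m in activity_minutes.items() if a in SOLITARY)
    social = sum(m for a, m in activity_minutes.items() if a in SOCIAL)
    total = solitary + social
    return 0.5 if total == 0 else social / total

def suggest_mix(score, slider=None):
    """Blend the inferred score with an optional learner-set slider,
    then translate the result into a rough activity mix."""
    blended = score if slider is None else (score + slider) / 2
    return {"social_activities": round(blended, 2),
            "solitary_activities": round(1 - blended, 2)}
```

The slider argument is the “sliding scale” mentioned above: the learner’s stated preference could temper whatever the activity logs suggest, rather than either signal deciding alone.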

(By the way, I love the idea of the “think-ink-pair-share” — to address both extroverted and introverted learners. This can be done digitally/virtually as well as in a face-to-face setting.)

All of this would further assist in helping build an enjoyment of learning. And wouldn’t that be nice? Now that we all need to learn for 40, 50, 60, 70, or even 80 years of our lives?

The 60-Year Curriculum: A Strategic Response to a Crisis

 

Virtual Reality: Realizing the Power of Experience, Excursion and Immersion in the Classroom — from nytimes.com
A framework for teaching with New York Times 360 V.R. videos, plus eight lesson plans for STEM and the humanities.

A Guide for Using NYT VR With Students

  • Getting Started With V.R. in the Classroom
  • Lesson 1: A Mission to Pluto
  • Lesson 2: Meet Three Children Displaced by War and Persecution
  • Lesson 3: Four Antarctic Expeditions
  • Lesson 4: Time Travel Through Olympic History
  • Lesson 5: Decode the Secret Language of Dolphins and Whales
  • Lesson 6: Memorials and Justice
  • Lesson 7: The World’s Biggest Physics Experiment
  • Lesson 8: Journey to the Hottest Place on Earth

© 2021 | Daniel Christian