The Future of Higher Ed Viewed from Cape Town, South Africa — from eliterate.us by Michael Feldstein

Excerpt:

A while back, I had the pleasure of being interviewed by friends at the University of Cape Town about the future of higher education as part of a short video they were compiling for their senior leadership. Here’s what they came up with:

The University of Cape Town in South Africa

 

GPT-3: We’re at the very beginning of a new app ecosystem — from venturebeat.com by Dattaraj Rao

From DSC: NLP=Natural Language Processing (i.e., think voice-driven interfaces/interactivity).

Excerpt:

Despite the hype, questions persist as to whether GPT-3 will be the bedrock upon which an NLP application ecosystem will rest or if newer, stronger NLP models will knock it off its throne. As enterprises begin to imagine and engineer NLP applications, here’s what they should know about GPT-3 and its potential ecosystem.

 

Learning from the Living [Class] Room: Adobe — via Behance — is already doing several pieces of this vision.

From DSC:
Talk about streams of content! Whew!

Streams of content

I received an email from Adobe that was entitled, “This week on Adobe Live: Graphic Design.”  (I subscribe to their Adobe Creative Cloud.) Inside the email, I saw and clicked on the following:

Below are some of the screenshots I took of this incredible service! Wow!

 

Adobe -- via Behance -- offers some serious streams of content
 


From DSC:
So Adobe — via Behance — is already doing several pieces of the “Learning from the Living [Class] Room” vision. I knew of Behance…but I didn’t realize the magnitude of what they’ve been working on and what they’re currently delivering. Very sharp indeed!

Churches are doing this as well — one device shows the presenter/preacher (such as a larger “TV”), while attendees use a second device to communicate with each other in real time.


 

 

When the Animated Bunny in the TV Show Listens for Kids’ Answers — and Answers Back — from edsurge.com by Rebecca Koenig

Excerpt:

Yet when this rabbit asks the audience, say, how to make a substance in a bottle less goopy, she’s actually listening for their answers. Or rather, an artificially intelligent tool is listening. And based on what it hears from a viewer, it tailors how the rabbit replies.

“Elinor can understand the child’s response and then make a contingent response to that,” says Mark Warschauer, professor of education at the University of California at Irvine and director of its Digital Learning Lab.

AI is coming to early childhood education. Researchers like Warschauer are studying whether and how conversational agent technology—the kind that powers smart speakers such as Alexa and Siri—can enhance the learning benefits young kids receive from hearing stories read aloud and from watching videos.

From DSC:
Looking at the above excerpt…what does this mean for elearning developers, learning engineers, learning experience designers, instructional designers, trainers, and more? It seems that, for such folks, the need to learn several new tools is showing up on the horizon.

 

From DSC:
I was thinking about projecting images, animation, videos, etc. from a device onto a wall for all in the room to see.

  • Will more walls of the future be like one of those billboards that rotate between two or three different images — that is, walls that could change surfaces?

One side of the surface would be more traditional (i.e., a sheetrock/drywall type of surface). The other side of the surface would be designed to be excellent for projecting images onto and/or for use with Augmented Reality (AR), Mixed Reality (MR), and/or Virtual Reality (VR).

Along these lines, here’s another item related to Human-Computer Interaction (HCI):

Mercedes-Benz debuts dashboard that’s one giant touchscreen — from futurism.com

 

From DSC:
Videoconferencing vendors out there:

  • Have you done any focus group tests — especially within education — with audio-based or digital video-based versions of emoticons?
  • So instead of clicking on an emoticon as feedback, one could also have some sound effects or movie clips to choose from as well!

To the videoconferencing vendors out there -- could you give us what DJs have access to?

I’m thinking here of the kinds of things DJs have at their disposal. For example, someone tells a bad joke and you hear the drummer in the background:

Or a team loses the spelling-bee word, and hears:

Or a professor wants to get the class’s attention as they start their 6pm class:

I realize this could backfire big time…so it would have to be an optional feature that a teacher, professor, trainer, pastor, or a presenter could turn on and off. (Could be fun for podcasters too!)
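The optional on/off toggle described above could be sketched roughly like this. Everything here — the class, the sound names, the file paths — is hypothetical; no real videoconferencing API is being used:

```python
# Sketch: an optional "soundboard" reaction feature for a videoconference.
# The presenter can turn it on or off; when off, reactions do nothing.
# All names and file paths below are illustrative assumptions.

SOUNDS = {
    "rimshot": "sounds/rimshot.mp3",
    "sad_trombone": "sounds/sad_trombone.mp3",
    "school_bell": "sounds/school_bell.mp3",
}

class Soundboard:
    def __init__(self, enabled=False):
        self.enabled = enabled  # presenter-controlled toggle

    def react(self, name):
        """Return the clip to broadcast, or None if the feature is off
        or the requested sound doesn't exist."""
        if not self.enabled:
            return None
        return SOUNDS.get(name)

board = Soundboard(enabled=True)
print(board.react("rimshot"))  # sounds/rimshot.mp3
```

The key design point is simply that the feature defaults to off, so a teacher, professor, trainer, pastor, or presenter has to opt in before anyone can trigger a sound.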

It seems to me that this could take
engagement to a whole new level!

 

From DSC:
For me the Socratic method is still a question mark, in terms of effectiveness. (I suppose it depends on who is wielding the tool and how it’s being utilized/implemented.)

Typically you have one student — often standing up and/or in the spotlight — who is being drilled on something. That student could be calm and collected, and their cognitive processing could actually get a boost from the adrenaline.

But there are other students who dread being called upon in such a public — sometimes competitive — setting. Their cognitive processing could shut down or become greatly diminished.

Also, the professor is working with one student at a time — hopefully the other students are trying to address each subsequent question, but some students may tune out once they know it’s not their turn in the spotlight.

So I was wondering…could the Socratic method be used with each student at the same time? Could a polling-like tool be used in real-time to guide the discussion?

For example, a professor could start out with a pre-created poll and ask the question of all students. Then they could glance through the responses and even scan for some keywords (using their voice to drive the system and/or using a Ctrl+F / Command+F type of thing).

Then in real-time / on-the-fly, could the professor use their voice to create another poll/question — again for each student to answer — based on one of the responses? Again, each student must answer the follow up question(s).
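The keyword-scanning step could look something like the sketch below, assuming poll responses arrive as plain text keyed by student. The function name, data shapes, and keywords are all illustrative, not taken from any real polling product:

```python
# Sketch: scan live poll responses for keywords a professor cares about,
# so a follow-up question can be built around a notable answer.
# All names here are illustrative, not from any real polling API.

def scan_responses(responses, keywords):
    """Return (student, response) pairs whose text mentions any keyword."""
    hits = []
    for student, text in responses.items():
        lowered = text.lower()
        if any(kw.lower() in lowered for kw in keywords):
            hits.append((student, text))
    return hits

# Example: responses to a question in a law class
responses = {
    "student_a": "Precedent matters because courts follow earlier rulings.",
    "student_b": "I think fairness is the main issue.",
    "student_c": "Stare decisis keeps the law predictable.",
}
hits = scan_responses(responses, ["precedent", "stare decisis"])
# hits contains student_a's and student_c's answers
```

A voice-driven front end (or the Ctrl+F-style scan mentioned above) would sit on top of something like this, surfacing the flagged answers so the professor can spin one into the next question.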

Are there any vendors out there working on something like this? Or have you tested the effectiveness of something like this?

Vendors: Can you help us create a voice-driven interface to offer the Socratic method to everyone to see if and how it would work? (Like a Mentimeter type of product on steroids…er, rather, using an AI-driven backend.)

Teachers, trainers, pastors, presenters could also benefit from something like this — as it could engage numerous people at once.

#Participation #Engagement #Assessment #Reasoning #CriticalThinking #CommunicationSkills #ThinkingOnOnesFeet #OnlineLearning #Face-to-Face #BlendedLearning #HybridLearning

Could such a method be used in language-related classes as well? In online-based tutoring?

 

Could AI-based techs be used to develop a “table of contents” for the key points within lectures, lessons, training sessions, sermons, & podcasts? [Christian]

From DSC:
As we move into 2021, the blistering pace of emerging technologies will likely continue. Technologies such as:

  • Artificial Intelligence (AI) — including technologies related to voice recognition
  • Blockchain
  • Augmented Reality (AR)/Mixed Reality (MR)/Virtual Reality (VR) and/or other forms of Extended Reality (XR)
  • Robotics
  • Machine-to-Machine Communications (M2M) / The Internet of Things (IoT)
  • Drones
  • …and other things

These technologies will likely make their way into how we do many things (for better or for worse).

Along the positive lines of this topic, I’ve been reflecting upon how we might be able to use AI in our learning experiences.

For example, when teaching in face-to-face classrooms — and when a lecture recording app like Panopto is being used — could teachers/professors/trainers audibly “insert” main points along the way? Similar to what we do with Siri, Alexa, and other personal assistants (“Hey Siri, _____” or “Alexa, _____”).

Like an audible version of HTML -- using the spoken word to insert the main points of a presentation or lecture

(Image purchased from iStockphoto)

.

Pretend a lecture, lesson, or a training session is moving right along. Then the professor, teacher, or trainer says:

  • “Hey Smart Classroom, Begin Main Point.”
  • Then speaks one of the main points.
  • Then says, “Hey Smart Classroom, End Main Point.”

Like a verbal version of an HTML tag.

After the recording is done, the AI could locate and call out those “main points” — and create a table of contents for that lecture, lesson, training session, or presentation.

(Alternatively, one could insert a chime/bell/some other sound that the AI scans through later to build the table of contents.)
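As a rough sketch of that post-processing step — assuming the recording app can hand back a timestamped transcript — the AI side could be as simple as the following. The marker phrases, data shapes, and example content are all illustrative:

```python
# Sketch: build a table of contents from a timestamped transcript in which
# the speaker said "begin main point" / "end main point" around key ideas.
# The transcript format and marker phrases are assumptions for illustration.

BEGIN = "begin main point"
END = "end main point"

def build_toc(transcript):
    """transcript: list of (seconds, text) segments, in order.
    Returns [(start_seconds, main_point_text), ...]."""
    toc, capturing, start, buffer = [], False, 0, []
    for seconds, text in transcript:
        lowered = text.lower()
        if BEGIN in lowered:
            capturing, start, buffer = True, seconds, []
        elif END in lowered and capturing:
            toc.append((start, " ".join(buffer)))
            capturing = False
        elif capturing:
            buffer.append(text)
    return toc

transcript = [
    (0, "Welcome to class."),
    (95, "Hey smart classroom, begin main point."),
    (97, "Contracts require offer, acceptance, and consideration."),
    (104, "Hey smart classroom, end main point."),
]
print(build_toc(transcript))
# [(95, 'Contracts require offer, acceptance, and consideration.')]
```

The chime/bell alternative would work the same way, except the scan would look for an audio signature instead of a marker phrase in the transcript.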

In the digital realm — say when recording something via Zoom, Cisco Webex, Teams, or another application — the same thing could apply. 

Wouldn’t this be great for quickly scanning podcasts for the main points? Or for quickly scanning presentations and webinars for the main points?

Anyway, interesting times lie ahead!

 

 
 

Cisco to Acquire Best-in-Class Audience Interaction Company, Slido — from blogs.cisco.com

Excerpt:

At Cisco, our goal is to deliver Webex experiences that are 10X better than in-person interactions and an important part of that is making these experiences inclusive and equal for all. We are making sure everyone is included and part of the conversation, whether working from their dining table or in an office building.

Today, I’m pleased to announce Cisco’s intent to acquire privately-held Slido s.r.o., a technology company that provides a best-in-class audience interaction platform. Slido technology enables higher levels of user engagement — before, during and after meetings and events. The Slido technology will be part of the Cisco Webex platform and enhance Cisco’s ability to offer new levels of inclusive audience engagement across both in-person and virtual experiences.

Soon, meeting owners will be able to:

    • Create engaging and dynamic participant experiences with dynamic Q&As and polls using graphic visual representations to express the results clearly.
    • Get real-time critical insights and understanding before, during and after meetings and events, from all-hands and townhalls to conferences and training sessions.
    • Obtain inclusive feedback so every voice is heard.
    • Give presenters the confidence that they are connecting in a meaningful way with their audience.

Also see:

Slido to be acquired by Cisco to help transform virtual meetings — from blog.sli.do

 

THE Journal 2020 New Product Award Winners

For THE Journal’s first-ever New Product Award program, judges selected winners in 30 categories spanning all aspects of technology innovations in K–12 education, from the classroom to the server room and beyond. We are proud to honor these winners for their outstanding contributions to the institution of education, in particular at this time of upheaval in the way education is being delivered to the nation’s 50 million students.

 

From DSC:
Our oldest daughter showed me a “Bitmoji Classroom” that her mentor teacher — Emily Clay — uses as her virtual classroom. Below are some snapshots of the Google Slides that Emily developed based on the work of:

  • Kayla Young (@bitmoji.kayla)
  • MaryBeth Thomas 
  • Ms. Smith 
  • Karen Koch
  • The First Grade Creative — by C. Verddugo

My hat’s off to all of these folks whose work laid the foundations for this creative, fun, engaging, easy-to-follow virtual classroom for a special education preschool classroom — complete with ties to videoconferencing functionalities from Zoom. Emily’s students could click on items all over the place — they could explore, pursue their interests/curiosities/passions. So the snapshots below don’t offer the great interactivity that the real deal does.

Nice work Emily & Company! I like how you provided more choice, more control to your students — while keeping them engaged! 

A snapshot of a Bitmoji Classroom created by Emily Clay

 

A snapshot of a Bitmoji Classroom created by Emily Clay

From DSC:
I also like the idea of presenting this type of slide (immediately below; students’ names have been blurred for privacy’s sake) prior to entering a videoconferencing session where you are going to break the students out into groups. Perhaps that didn’t happen in Emily’s class — I’m not sure — but in other settings it would make sense to share one’s screen right before opening the breakout rooms and show that type of slide, to let students know who will be in their particular breakout group.

The students in the different breakout sessions could then collaboratively work on Google Docs, Sheets, or Slides and you could watch their progress in real-time!

A snapshot of a Bitmoji Classroom created by Emily Clay

 

A snapshot of a Bitmoji Classroom created by Emily Clay

 

A snapshot of a Bitmoji Classroom created by Emily Clay

 

A snapshot of a Bitmoji Classroom created by Emily Clay

 

A snapshot of a Bitmoji Classroom created by Emily Clay

Also see:

 

Maths mastery through stop-motion animation — from innovatemyschool.com by Rachel Cully

Do you want your learners to be resilient, confident mathematicians with secure conceptual understanding and a love of Maths? Well, come with me to a land of stories and watch the magic unfold.

Maths mastery through stop-motion animation -- by Rachel Cully

Also see:

 
 

Temperament-Inclusive Pedagogy: Helping Introverted and Extraverted Students Thrive in a Changing Educational Landscape — from onlinelearningconsortium.org by Mary R. Fry

Excerpt (emphasis DSC):

So how do we take these different approaches to learning into account and foster a classroom environment that is more inclusive of the needs of both extraverts and introverts? Let’s first distinguish between how extraverts and introverts most prefer to learn, and then discuss ways to meet the needs of both. Extraverts tend to learn through active and social engagement with the material (group work, interactive learning experiences, performing and discussing). Verbalizing typically helps extraverts to think through their ideas and to foster new ones. They often think quickly on their feet and welcome working in large groups. It can be challenging for extraverts to generate ideas in isolation (talking through ideas is often needed) and thus working on solitary projects and writing can be challenging.

In contrast, introverts thrive with solitary/independent work and typically need this time to sort through what they are learning before they can formulate their thoughts and articulate their perspectives. Introverted learners often dislike group work (or at least the group sizes and structures that are often used in the classroom (more on this in a moment)) and find their voice drowned out in synchronous discussions as they don’t typically think as fast as their extroverted counterparts and don’t often speak until they feel they have something carefully thought out to share. Introverted learners are often quite content, and can remain attentive, through longer lectures and presentations and prefer engaging with the material in a more interactive way only after a pause or break.

From DSC:
Could/would a next-generation learning platform that has some Artificial Intelligence (AI) features baked into it — working in conjunction with a cloud-based learner profile — be of assistance here?

That is, maybe a learner could self-select the type of learner they are: introverted or extroverted. Or perhaps they could use a sliding scale to mix learning activities up to a certain degree. Or, if one wasn’t sure of their preferences, they could ask the AI-backed system to scan for how much time they spent doing learning activities X, Y, and Z versus learning activities A, B, and C…then the AI could offer up activities that meet the learner’s preferences.
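A minimal sketch of that “scan how time was spent” idea is below. The activity categories and the 60% threshold are made-up assumptions for illustration, not a validated measure of introversion or extraversion:

```python
# Sketch: tally minutes a learner spent on solo vs. group activities and
# suggest a leaning. Categories and thresholds are illustrative, not a
# validated measure of temperament.

SOLO = {"reading", "writing", "independent_practice"}
GROUP = {"discussion", "group_project", "live_session"}

def suggest_leaning(activity_log):
    """activity_log: list of (activity_name, minutes) entries."""
    solo = sum(m for a, m in activity_log if a in SOLO)
    group = sum(m for a, m in activity_log if a in GROUP)
    total = solo + group
    if total == 0:
        return "unknown"
    share = solo / total
    if share >= 0.6:
        return "leans independent"
    if share <= 0.4:
        return "leans collaborative"
    return "mixed"

log = [("reading", 120), ("discussion", 30), ("writing", 60)]
print(suggest_leaning(log))  # leans independent
```

In a real next-generation learning platform this tally would presumably live in the cloud-based learner profile, and the learner — not the system — would decide whether to act on the suggestion.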

(By the way, I love the idea of the “think-ink-pair-share” — to address both extroverted and introverted learners. This can be done digitally/virtually as well as in a face-to-face setting.)

All of this would further assist in helping build an enjoyment of learning. And wouldn’t that be nice? Now that we all need to learn for 40, 50, 60, 70, or even 80 years of our lives?

The 60-Year Curriculum: A Strategic Response to a Crisis

 
© 2024 | Daniel Christian