From NPR:

We closed the fifth annual Student Podcast Challenge — more than 2,900 entries!

So today, I wanted to share something that I’m also personally proud of – an elaborate resources page for student podcasting that our team published earlier this year. My big boss Steve Drummond named it “Sound Advice: The NPR guide to student podcasting.” And, again, this isn’t just for Student Podcast Challenge participants. We have guides from NPR and more for anyone interested in starting a podcast!

Here’s a sampler of some of my favorite resources:

  • Using sound: Teachers, here’s a lovely video you can play for your class! Or for any visual learners, this is a fun watch! In this video, veteran NPR correspondent Don Gonyea walks you through how to build your own recording studio – a pillow fort! (And yes, this is an actual trick we use at NPR!)
  • Voice coaching: Speaking into a microphone is hard, even for our radio veterans. In this video, NPR voice coach Jessica Hansen and our training team share a few vocal exercises that will help you sound more natural in front of a mic! I personally watched this video before recording my first radio story, so I’d highly recommend it for everyone!
  • Life Kit episode on podcasting: In this episode from NPR’s Life Kit, Lauren Migaki, our very own NPR Ed senior producer, brings us tips from podcast producers across NPR, working on all your favorite shows, including Code Switch, Planet Money and more! It’s an awesome listen for a class or on your own!
 

A museum without screens: The Media Museum of Sound and Vision in Hilversum — from inavateonthenet.net

Excerpt:

Re-opened to the public last month after five years of planning and two-and-a-half years of renovations, The Media Museum of Sound and Vision in Hilversum in the Netherlands is an immersive experience exploring modern media. It’s become a museum that continuously adapts to the actions of its visitors in order to reflect the ever-changing face of media culture.

How we consume media is revealed in five zones in the building: Share, Inform, Sell, Tell and Play. The Media Museum includes more than 50 interactives, with hundreds of hours of AV material and objects from history. The experience uses facial recognition and the user’s own smartphone to make it a personalised museum journey for everyone.

 

A portion of the Media Museum in Hilversum, the Netherlands
Photo from Mike Bink

From DSC:
Wow! There is some serious AV work and creativity in the Media Museum of Sound and Vision!

 

What might the ramifications be for text-to-everything? [Christian]

From DSC:

  • We can now type in text to get graphics and artwork.
  • We can now type in text to get videos.
  • There are several tools to give us transcripts of what was said during a presentation.
  • We can search videos for spoken words and/or for words listed within slides within a presentation.

Allie Miller’s posting on LinkedIn (see below) pointed these things out as well — along with several other things.



This raises some ideas/questions for me:

  • What might the ramifications be in our learning ecosystems for these types of functionalities? What affordances are forthcoming? For example, a teacher, professor, or trainer could quickly produce several types of media from the same presentation.
  • What’s said in a videoconference or a webinar can already be captured, translated, and transcribed.
  • Or what’s said in a virtual courtroom, or in a telehealth-based appointment. Or perhaps, what we currently think of as a smart/connected TV will give us these functionalities as well.
  • How might this type of thing impact storytelling?
  • Will this help someone who prefers to soak in information via the spoken word, or via a podcast, or via a video?
  • What does this mean for Augmented Reality (AR), Mixed Reality (MR), and/or Virtual Reality (VR) types of devices?
  • Will this kind of thing be standard in the next version of the Internet (Web3)?
  • Will this help people with special needs — and way beyond accessibility-related needs?
  • Will data be next (instead of typing in text)?

Hmmm… interesting times ahead.

 

What if smart TVs’ new killer app was a next-generation learning-related platform? [Christian]

TV makers are looking beyond streaming to stay relevant — from protocol.com by Janko Roettgers and Nick Statt

A smart TV's main menu listing what's available -- application wise

Excerpts:

The search for TV’s next killer app
TV makers have some reason to celebrate these days: Streaming has officially surpassed cable and broadcast as the most popular form of TV consumption; smart TVs are increasingly replacing external streaming devices; and the makers of these TVs have largely figured out how to turn those one-time purchases into recurring revenue streams, thanks to ad-supported services.

What TV makers need is a new killer app. Consumer electronics companies have for some time toyed with the idea of using TV for all kinds of additional purposes, including gaming, smart home functionality and fitness. Ad-supported video took priority over those use cases over the past few years, but now, TV brands need new ways to differentiate their devices.

Turning the TV into the most useful screen in the house holds a lot of promise for the industry. To truly embrace this trend, TV makers might have to take some bold bets and be willing to push the envelope on what’s possible in the living room.

 


From DSC:
What if smart TVs’ new killer app was a next-generation learning-related platform? Could smart TVs deliver more blended/hybrid learning? Hyflex-based learning?

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV


Or what if smart TVs had to do with delivering telehealth-based apps? Or telelegal/virtual courts-based apps?


 

7 best photography lighting kits for camera geeks — by Atharva Gosavi
Set up a paradise for capturing perfect shots.

 

Exploring Virtual Reality [VR] learning experiences in the classroom — from blog.neolms.com by Rachelle Dene Poth

Excerpt:

With the start of a new year, it is always a great time to explore new ideas or try some new methods that may be a bit different from what we have traditionally done. I always think it is a great opportunity to stretch ourselves professionally, especially after a break or during the spring months.

Finding ways to boost student engagement is important, and what I have found is that by using tools like Augmented Reality (AR) and Virtual Reality (VR), we can immerse students in unique and personalized learning experiences. The use of augmented and virtual reality has increased in K-12 and Higher Ed, especially during the past two years, as educators have sought new ways to facilitate learning and give students the chance to connect more with the content. The use of these technologies is increasing in the workplace, as well.

With all of these technologies, we now have endless opportunities to take learning beyond what has been a confined classroom “space” and access the entire world with the right devices.

 

How Next Gen TV Can Help Close The Digital Divide — by Erik Ofgang
A new prototype utilizes Next Gen TV and QR codes to allow two-way communication between teachers and students via broadcast.

Excerpts:

Efforts to close the digital divide have ramped up during the pandemic, yet despite creative solutions from district, town, and state officials across the country, between 9 and 12 million students still lack adequate internet access.

However, a new application developed by The National Association of Broadcasters (NAB) could help close this gap by utilizing cutting-edge broadcast TV technology to allow students to receive and respond to work assigned by their teachers.

What Is Next Gen TV and This Application?

Next Gen TV, also known as ATSC 3.0, is a new standard for broadcasting that is currently being launched at broadcast stations throughout the U.S. It is based on internet protocols and allows for targeted broadcasts to be sent as well as more robust datacasting (sending data via broadcasting). Schools can use datacasting to send tests, reading materials, or other assignments that take the form of Word documents, Excel sheets, and much more. Students can also complete tests and save the work on their own devices.

Also see:

Educational Equity With NextGen TV


 

What doors does this type of real-time translation feature open up for learning? [Christian]

From DSC:
For that matter, what does it open up for #JusticeTech? #Legaltech? #A2J? #Telehealth?

 

Learning from the living class room

 
 
 

Learning from the Living [Class] Room: Adobe — via Behance — is already doing several pieces of this vision.

From DSC:
Talk about streams of content! Whew!

Streams of content

I received an email from Adobe that was entitled, “This week on Adobe Live: Graphic Design.”  (I subscribe to their Adobe Creative Cloud.) Inside the email, I saw and clicked on the following:

Below are some of the screenshots I took of this incredible service! Wow!

 

Adobe -- via Behance -- offers some serious streams of content

 


 


From DSC:
So Adobe — via Behance — is already doing several pieces of the “Learning from the Living [Class] Room” vision. I knew of Behance…but I didn’t realize the magnitude of what they’ve been working on and what they’re currently delivering. Very sharp indeed!

Churches are doing this as well — one device has the presenter/preacher on it (such as a larger “TV”), while a second device is used to communicate with each other in real-time.


 

 

Could AI-based techs be used to develop a “table of contents” for the key points within lectures, lessons, training sessions, sermons, & podcasts? [Christian]

From DSC:
As we move into 2021, the blistering pace of emerging technologies will likely continue. Technologies such as:

  • Artificial Intelligence (AI) — including technologies related to voice recognition
  • Blockchain
  • Augmented Reality (AR)/Mixed Reality (MR)/Virtual Reality (VR) and/or other forms of Extended Reality (XR)
  • Robotics
  • Machine-to-Machine Communications (M2M) / The Internet of Things (IoT)
  • Drones
  • …and other technologies will likely make their way into how we do many things (for better or for worse).

Along the positive lines of this topic, I’ve been reflecting upon how we might be able to use AI in our learning experiences.

For example, when teaching in face-to-face-based classrooms — and when a lecture recording app like Panopto is being used — could teachers/professors/trainers audibly “insert” main points along the way? Similar to what we do with Siri, Alexa, and other personal assistants (“Hey Siri, _____” or “Alexa, _____”).

Like an audible version of HTML -- using the spoken word to insert the main points of a presentation or lecture

(Image purchased from iStockphoto)


Pretend a lecture, lesson, or a training session is moving right along. Then the professor, teacher, or trainer says:

  • “Hey Smart Classroom, Begin Main Point.”
  • Then speaks one of the main points.
  • Then says, “Hey Smart Classroom, End Main Point.”

Like a verbal version of an HTML tag.

After the recording is done, the AI could locate and call out those “main points” — and create a table of contents for that lecture, lesson, training session, or presentation.

(Alternatively, one could insert a chime/bell/some other sound that the AI scans through later to build the table of contents.)
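To make the idea concrete, here’s a minimal sketch in Python of how such an AI/script might scan a timestamped transcript for those spoken markers and assemble the table of contents. The marker phrases, the `Segment` transcript format, and the function names are all hypothetical — a real tool like Panopto or Zoom would supply its own transcript structure:

```python
import re
from dataclasses import dataclass

# Hypothetical wake phrases; the actual wording would depend on the tool.
BEGIN_MARKER = "hey smart classroom begin main point"
END_MARKER = "hey smart classroom end main point"

@dataclass
class Segment:
    start: float  # seconds into the recording
    text: str     # what was transcribed for this segment

def normalize(text: str) -> str:
    """Lowercase and strip punctuation so marker phrases match loosely."""
    return re.sub(r"[^a-z ]", "", text.lower()).strip()

def build_toc(segments: list[Segment]) -> list[tuple[float, str]]:
    """Return (timestamp, main point) pairs found between begin/end markers."""
    toc: list[tuple[float, str]] = []
    collecting = None  # list of segment texts while inside a marker pair
    start_time = 0.0
    for seg in segments:
        phrase = normalize(seg.text)
        if phrase == BEGIN_MARKER:
            collecting, start_time = [], seg.start
        elif phrase == END_MARKER and collecting is not None:
            toc.append((start_time, " ".join(collecting)))
            collecting = None
        elif collecting is not None:
            collecting.append(seg.text)
    return toc
```

The chime/bell alternative would work the same way, except the scan would look for the sound’s audio signature rather than a transcribed phrase.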

In the digital realm — say when recording something via Zoom, Cisco Webex, Teams, or another application — the same thing could apply. 

Wouldn’t this be great for quickly scanning podcasts for the main points? Or for quickly scanning presentations and webinars for the main points?

Anyway, interesting times lie ahead!

 

 

[Re: online-based learning] The Ford Model T from 1910 didn’t start out looking like a Maserati Gran Turismo from 2021! [Christian]

From DSC:
Per Wikipedia, this is a 1910 Model T that was photographed in Salt Lake City:

The Ford Model T didn't start out looking like a Maserati from 2021!

 

This is what online/virtual learning looks like further down the road. Our journey has just begun.

From DSC:
The Ford Model T didn’t start out looking like a Maserati Gran Turismo from 2021! Inventions take time to develop, to be improved, and to open the door to further innovations and experiments.

Thinking of this in terms of online-based learning, please don’t think we’ve reached the end of the road for online-based learning. 

The truth is, we’ve barely begun our journey.

 


Two last thoughts here


1) It took *teams* of people to get us to the point of producing a Maserati like this. It will take *teams* of people to produce the Maserati of online-based learning.

2) In terms of online-based learning, it’s hard to say how close to the Maserati that we have come because I/we don’t know how far things will go. But this I do know: We have come a looooonnnnnggggg ways from the late 1990s! If that’s what happened in the last 20 years — with many denying the value of online-based learning — what might the next 5, 10, or 20 years look like when further interest, needs, investments, etc. are added? Then add to all of that the momentum from emerging technologies like 5G, Augmented Reality, Mixed Reality, Virtual Reality, Artificial Intelligence, bots, algorithms, and more!


From DSC:
To drive the point home, here’s an addendum on late 9/29/20:

Mercedes-Benz Shares Video of Avatar Electric Car Prototype

 

How might tools like Microsoft’s new Whiteboard be used in online-based learning? In “learning pods?” [Christian]

The new Microsoft Whiteboard -- how might this be used for online-based learning? Learning pods?

The new Microsoft Whiteboard -- how might this be used for online-based learning? Learning pods?

Questions/reflections from DSC:

  • How might this be used for online-based learning?
  • For “learning pods” and homeschoolers out there? 
  • Will assistants such as the Webex Assistant for Meetings (WAM) be integrated into such tools (i.e., would such tools provide translation, transcripts, closed captioning, and more)?
  • How might this type of tool be used in telehealth? Telelegal? In online-based courtrooms? In presentations?

#onlinelearning #collaboration #education #secondscreen #edtech #presentations #AI #telehealth #telelegal #emergingtechnologies

 

Learning experience designs of the future!!! [Christian]

From DSC:
The article below got me to thinking about designing learning experiences and what our learning experiences might be like in the future — especially after we start pouring much more of our innovative thinking, creativity, funding, entrepreneurship, and new R&D into technology-supported/enabled learning experiences.


LMS vs. LXP: How and why they are different — from blog.commlabindia.com by Payal Dixit
LXPs are a rising trend in the L&D market. But will they replace LMSs soon? What do they offer more than an LMS? Learn more about LMS vs. LXP in this blog.

Excerpt (emphasis DSC):

Building on the foundation of the LMS, the LXP curates and aggregates content, creates learning paths, and provides personalized learning resources.

Here are some of the key capabilities of LXPs. They:

  • Offer content in a Netflix-like interface, with suggestions and AI recommendations
  • Can host any form of content – blogs, videos, eLearning courses, and audio podcasts to name a few
  • Offer automated learning paths that lead to logical outcomes
  • Support true uncensored social learning opportunities

So, this is about the LXP and what it offers; let’s now delve into the characteristics that differentiate it from the good old LMS.


From DSC:
Entities throughout the learning spectrum are going through many changes right now (i.e., people and organizations throughout K-12, higher education, vocational schools, and corporate training/L&D). If the first round of the Coronavirus continues to impact us, and then a second round comes later this year/early next year, I can easily see massive investments and interest in learning-related innovations. It will be in too many peoples’ and organizations’ interests not to.

I highlighted the bulleted points above because they are some of the components/features of the Learning from the Living [Class] Room vision that I’ve been working on.

Below are some technologies, visuals, and ideas to supplement my reflections. They might stir the imagination of someone out there who, like me, desires to make a contribution — and who wants to make learning more accessible, personalized, fun, and engaging. Hopefully, future generations will be able to have more choice, more control over their learning — throughout their lifetimes — as they pursue their passions.

Learning from the living class room

In the future, we may be using MR to walk around data and to better visualize data


AR and VR -- the future of healthcare

 

 
© 2024 | Daniel Christian