Student Preference for Online Learning Up 220% Since Pre-Pandemic — from campustechnology.com by Rhea Kelly

Excerpt:

According to a recent Educause survey, the number of students expressing preferences for courses that are mostly or completely online has increased 220% since the onset of the pandemic, from 9% in 2020 (before March 11) to 29% in 2022. And while many students still prefer learning mostly or completely face-to-face, that share has dropped precipitously from 65% in 2020 to 41% this year.

“These data point to student demand for online instructional elements, even for fully face-to-face courses,” Educause stated.
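The "220%" figure follows directly from the two survey percentages quoted above. A quick back-of-the-envelope check (illustrative only; the Educause survey is the authoritative source):

```python
# Percent change in the share of students preferring mostly/completely
# online courses, using the Educause figures quoted above.
before, after = 9, 29  # percent of students: 2020 (pre-March 11) vs. 2022

pct_increase = (after - before) / before * 100
print(f"{pct_increase:.0f}% increase")  # prints "222% increase" (~220%)
```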

Also relevant/see:

  • A Surge in Young Undergrads, Fully Online — from insidehighered.com by Susan D’Agostino
    Tens of thousands of 18- to 24-year-olds are now enrolling at Western Governors, Southern New Hampshire and other national online institutions. Does this represent a change in student behavior?
 

How Long Should a Branching Scenario Be? — from christytuckerlearning.com by Christy Tucker
How long should a branching scenario be? Is 45 minutes too long? Is there an ideal length for a branching scenario?

Excerpt:

Most of the time, the branching scenarios and simulations I build are around 10 minutes long. Overall, I usually end up at 5-15 minutes for branching scenarios, with interactive video scenarios being at the longer end.

From DSC:
This makes sense to me, as (up to) 6 minutes turned out to be an ideal length for videos.

Excerpt from Optimal Video Length for Student Engagement — from blog.edx.org

The optimal video length is 6 minutes or shorter — students watched most of the way through these short videos. In fact, the average engagement time of any video maxes out at 6 minutes, regardless of its length. And engagement times decrease as videos lengthen: For instance, on average students spent around 3 minutes on videos that are longer than 12 minutes, which means that they engaged with less than a quarter of the content. Finally, certificate-earning students engaged more with videos, presumably because they had greater motivation to learn the material. (These findings appeared in a recent Wall Street Journal article, An Early Report Card on Massive Open Online Courses and its accompanying infographic.)

The take-home message for instructors is that, to maximize student engagement, they should work with instructional designers and video producers to break up their lectures into small, bite-sized pieces.

 

What might the ramifications be for text-to-everything? [Christian]

From DSC:

  • We can now type in text to get graphics and artwork.
  • We can now type in text to get videos.
  • There are several tools to give us transcripts of what was said during a presentation.
  • We can search videos for spoken words and/or for words listed within slides within a presentation.

Allie Miller’s post on LinkedIn (see below) made these points as well, along with several others.



This raises some ideas/questions for me:

  • What might the ramifications be in our learning ecosystems for these types of functionalities? What affordances are forthcoming? For example, a teacher, professor, or trainer could quickly produce several types of media from the same presentation.
  • What’s said in a videoconference or a webinar can already be captured, translated, and transcribed.
  • Or what’s said in a virtual courtroom, or in a telehealth-based appointment. Or perhaps, what we currently think of as a smart/connected TV will give us these functionalities as well.
  • How might this type of thing impact storytelling?
  • Will this help someone who prefers to soak in information via the spoken word, or via a podcast, or via a video?
  • What does this mean for Augmented Reality (AR), Mixed Reality (MR), and/or Virtual Reality (VR) types of devices?
  • Will this kind of thing be standard in the next version of the Internet (Web3)?
  • Will this help people with special needs — and way beyond accessibility-related needs?
  • Will data be the next input (i.e., data-to-everything instead of text-to-everything)?

Hmmm…interesting times ahead.

 

Artificial Intelligence in Video Production — from provideocoalition.com by Iain Anderson

Introduction

Artificial Intelligence has promised so much for so long, and in many ways, it’s been less than perfect. Siri doesn’t quite understand what you’re asking; Google Translate can’t yet seamlessly bridge the gap between languages, and navigation software still doesn’t always offer an accurate route. But it’s getting pretty good. While a variety of AI-based technologies have been improving steadily over the past several years, it’s the recent giant strides made in image generation that’s potentially groundbreaking for post production professionals. Here, I’ll take you through the ideas behind the tech, along with specific examples of how modern, smart technology will change your post workflow tomorrow, or could today.

Also relevant/see:

 

OpenAI Says DALL-E Is Generating Over 2 Million Images a Day—and That’s Just Table Stakes — from singularityhub.com by Jason Dorrier

Excerpt:

The venerable stock image site, Getty, boasts a catalog of 80 million images. Shutterstock, a rival of Getty, offers 415 million images. It took a few decades to build up these prodigious libraries.

Now, it seems we’ll have to redefine prodigious. In a blog post last week, OpenAI said its machine learning algorithm, DALL-E 2, is generating over two million images a day. At that pace, its output would equal Getty and Shutterstock combined in eight months. The algorithm is producing almost as many images daily as the entire collection of free image site Unsplash.

And that was before OpenAI opened DALL-E 2 to everyone.
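The "eight months" claim above is simple arithmetic on the three numbers quoted. A rough check (a sketch using the article's own figures, assuming a constant 2M images/day and 30-day months):

```python
# How long until DALL-E 2's output equals Getty + Shutterstock combined,
# per the catalog sizes and daily rate quoted above.
getty, shutterstock = 80_000_000, 415_000_000
images_per_day = 2_000_000

days = (getty + shutterstock) / images_per_day  # 247.5 days
months = days / 30                              # ~8.25 months
print(f"~{months:.0f} months")                  # prints "~8 months"
```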

 


From DSC:
Further on down that Tweet is this example image — wow!

A photo of a quaint flower shop storefront with a pastel green and clean white facade and open door and big window



On the video side of things, also relevant/see:

Meta’s new text-to-video AI generator is like DALL-E for video — from theverge.com by James Vincent
Just type a description and the AI generates matching footage

A sample video generated by Meta’s new AI text-to-video model, Make-A-Video. The text prompt used to create the video was “a teddy bear painting a portrait.” Image: Meta


From DSC:
Hmmm…I wonder…how might these emerging technologies impact copyrights, intellectual property, and other legal areas?


 

Lessons Learned from Six Years of Podcasting — from derekbruff.org by Derek Bruff

Excerpt:

Last month on Twitter, I shared some of the many, many things I’ve learned from podcast guests over the last six years. I encourage you to check out that thread and listen to a few of my favorite episodes. Here on the blog, I’d like to share a few more general reflections about what I learned through producing Leading Lines.

Finally, one lesson that the podcast reinforced for me is that faculty and other instructors want to hear stories. Sure, a peer-reviewed journal article on the impact of some teaching practice is useful, but some of those can focus too much on assessment and not enough on the practice itself. Hearing a colleague talk about their teaching, the choices they’ve made, why they made those choices, what effects those choices have had on student learning… that can be both inspirational and intensely practical for those seeking to improve their teaching. A big part of Leading Lines was finding instructors with compelling stories and then letting them shine during our interviews.

Speaking of digital audio, also relevant/see:

With Audiobooks Launching in the U.S. Today, Spotify Is the Home for All the Audio You Love — from newsroom.spotify.com

Excerpt:

Adding an entirely new content format to our service is no small feat. But we’ve done it before with podcasts, and we’re excited to now do the same with audiobooks.

Just as we did with podcasting, this will introduce a new format to an audience that has never before consumed it, unlocking a whole new segment of potential listeners. This also helps us support even more kinds of creators and connect them with fans that will love their art—which makes this even more exciting.


Addendum on 9/30/22:


 

Moving from program effectiveness to organizational implications — from chieflearningofficer.com by Rachel Walter

Excerpt:

To summarize, begin by ensuring that you are able to add business value. Do this by designing solutions specific to the known business problem to achieve relevance through adding value. Build credibility through these successes and expand your network and business acumen. Use the expanding business knowledge to begin gathering information about leading and lagging indicators of business success. Build some hypotheses and start determining where to find data related to your hypotheses.

More than looking at data points, look for trends across the data and communicate these trends to build upon them. It’s critical to talk about your findings and communicate what you are seeing. By continuing to drive business value, you can help others stop looking at data that does not truly matter in favor of data that directly affects the organization’s goals.

Also, from the corporate learning ecosystem:

Creating Better Video For Learning, Part 1 — from elearningindustry.com by Patti Shank

Summary: 

This is the first article in a series about what evidence (research) says about creating better video for learning. It discusses the attributes of media and technologies for digital or blended instruction, selecting content and social interactions, and the strengths and challenges of video.

 

To Improve Outcomes for Students, We Must Improve Support for Faculty — from campustechnology.com by Dr. David Wiley
The doctoral programs that prepare faculty for their positions often fail to train them on effective teaching practices. We owe it to our students to provide faculty with the professional development they need to help learners realize their full potential.

Excerpts:

Why do we allow so much student potential to go unrealized? Why are well-researched, highly effective teaching practices not used more widely?

The doctoral programs that are supposed to prepare them to become faculty in physics, philosophy, and other disciplines don’t require them to take a single course in effective teaching practices. 

The entire faculty preparation enterprise seems to be caught in a loop, unintentionally but consistently passing on an unawareness that some teaching practices are significantly more effective than others. How do we break this cycle and help students realize their full potential as learners?

From DSC:
First of all, I greatly appreciate the work of Dr. David Wiley. His career has been dedicated to teaching and learning, open educational resources, and more. I also appreciate and agree with what David is saying here — i.e., that professors need to be taught how to teach, as well as what we currently know about how people learn.

For years now, I’ve been (unpleasantly) amazed that we hire and pay our professors primarily for their research capabilities — vs. their teaching competence. At the same time, we continually increase the cost of tuition, books, and other fees. Students have the right to let their feet do the walking. As the alternatives to traditional institutions of higher education increase, I’m quite sure that we’ll see that happen more and more.

While I think that training faculty members in effective teaching practices is highly beneficial, I also think that TEAM-BASED content creation and delivery will produce the best learning experiences we can provide. I say this because multiple disciplines and specialists are involved, such as:

  • Subject Matter Experts (i.e., faculty members)
  • Instructional Designers
  • Graphic Designers
  • Web Designers
  • Learning Scientists; Cognitive Learning Researchers
  • Audio/Video Specialists and Learning Space Designers/Architects
  • CMS/LMS Administrators
  • Programmers
  • Multimedia Artists who are skilled in working with digital audio and digital video
  • Accessibility Specialists
  • Librarians
  • Illustrators and Animators
  • and more

The point here is that one person can’t do it all — especially now that the expectation is that courses should be offered in a hybrid format or in an online-based format. For a solid example of the power of team-based content creation/delivery, see this posting.

One last thought/question here, though. Once a professor is teaching, are they open to working with and learning from the Instructional Designers, Learning Scientists, and/or others from the Teaching & Learning Centers that do exist on their campus? Or do they, like many faculty members, think that such people are irrelevant because they aren’t faculty members themselves? Oftentimes, faculty members look to each other and don’t really care what support is offered (unless they need help with some of the technology).


Also relevant/see:


 

What if smart TVs’ new killer app was a next-generation learning-related platform? [Christian]

TV makers are looking beyond streaming to stay relevant — from protocol.com by Janko Roettgers and Nick Statt

A smart TV's main menu listing what's available -- application wise

Excerpts:

The search for TV’s next killer app
TV makers have some reason to celebrate these days: Streaming has officially surpassed cable and broadcast as the most popular form of TV consumption; smart TVs are increasingly replacing external streaming devices; and the makers of these TVs have largely figured out how to turn those one-time purchases into recurring revenue streams, thanks to ad-supported services.

What TV makers need is a new killer app. Consumer electronics companies have for some time toyed with the idea of using TV for all kinds of additional purposes, including gaming, smart home functionality and fitness. Ad-supported video took priority over those use cases over the past few years, but now, TV brands need new ways to differentiate their devices.

Turning the TV into the most useful screen in the house holds a lot of promise for the industry. To truly embrace this trend, TV makers might have to take some bold bets and be willing to push the envelope on what’s possible in the living room.

 


From DSC:
What if smart TVs’ new killer app was a next-generation learning-related platform? Could smart TVs deliver more blended/hybrid learning? HyFlex-based learning?

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV


Or what if smart TVs had to do with delivering telehealth-based apps? Or telelegal/virtual courts-based apps?


 

Top Tools for Learning 2022 [Jane Hart]


 

Top tools for learning 2022 — from toptools4learning.com by Jane Hart

Excerpt:

In fact, it has become clear that whilst 2021 was the year of experimentation – with an explosion of tools being used as people tried out new things, 2022 has been the year of consolidation – with people reverting to their trusty old favourites. In fact, many of the tools that were knocked off their perches in 2021, have now recovered their lost ground this year.


Also somewhat relevant/see:


 

15 technical skills employers look for in 2022 — from wikijob.co.uk by Nikki Dale

Excerpts:

A technical skill is the ability to carry out a task associated with technical roles such as IT, engineering, mechanics, science or finance. A typical technical skill set might include programming, the analysis of complex figures or the use of specific tools.

Technical skills are sometimes referred to as ‘hard skills’ because you can learn how to do them and, in some cases, get qualified or at least certified.

Some technical skills employers are looking for include:

 
 

Matthew Ball on the metaverse: We’ve never seen a shift this enormous — from protocol.com by Janko Roettgers
The leading metaverse theorist shares his thoughts on the sudden rise of the concept, its utility for the enterprise and what we still get wrong about the metaverse.

Excerpts:

What are the biggest misconceptions about the metaverse?
First, the idea that the metaverse is immersive virtual reality, such as an Oculus or Meta Quest. That’s an access device. It would be akin to saying the mobile internet is a smartphone.

We should think of the metaverse as perhaps changing the devices we use, the experiences, business models, protocols and behaviors that we enjoy online. But we’ll keep using smartphones, keyboards. We don’t need to do all video conferences or all calls in 3D. It’s supplements and complements, doesn’t replace everything.

Also relevant/see:

A former Amazon exec thinks Disney will win the metaverse — from protocol.com

Excerpt:

This month, Ball is publishing his book, “The Metaverse: And How It Will Revolutionize Everything.” The work explains in detail what the metaverse is all about and which shifts in tech, business and culture need to fall into place for it to come into existence.

How will the metaverse change Hollywood? In his book, Ball argues that people tend to underestimate the changes new technologies will have on media and entertainment.

  • Instead of just seeing a movie play out in 360 degrees around us, we’ll want to be part of the movie and play a more active role.
  • One way to achieve that is through games, which have long blurred the lines between storytelling and interactivity. But Ball also predicts there will be a wide range of adjacent content experiences, from virtual Tinder dates in the “Star Wars” universe to Peloton rides through your favorite movie sets.

Addendum on 7/24/22:

Neurodiversity, Inclusion And The Metaverse — from workdesign.com by Derek McCallum

Excerpt:

Innovation in virtual and augmented reality platforms and the vast opportunities connected to the metaverse are driving innovation in nearly every industry. In the workplace, future-focused companies are increasingly exploring ways to use this nascent technology to offer workers more choices and better support for neurodiverse employees.

It would be nearly impossible to list all the challenges and opportunities associated with this technology in a single article, so I’ll keep things focused on an area that is top-of-mind right now as many of us start to make our way back into the office—the workplace. The truth is, while we can use our expertise and experience to anticipate outcomes, no one truly knows what the metaverse will become and what the wide-ranging effects will be. At the moment, the possibilities are exciting and bring to mind more questions than answers. As a principal and hands-on designer in a large, diverse practice, my hope is that we will be able to collectively harness the inherent opportunities of the metaverse to support richer, more accessible human experiences across all aspects of the built environment, and that includes the workplace.


 

Set Against a Backdrop of World Events, Tim Okamura’s Bold Portraits Emanate Commanding Energy — from thisiscolossal.com by Grace Ebert and Tim Okamura


“Fire Fighter” (2021), oil on canvas, 60 x 76 inches


A Stunning Double Rainbow Frames a Lightning Bolt as It Strikes the Mountainous Virginia Horizon — from thisiscolossal.com

 

‘Hologram patients’ and mixed reality headsets help train UK medical students in world first — from uk.news.yahoo.com

Excerpts:

Medical students in Cambridge, England, are experiencing a new way of “hands-on learning” – featuring the use of holographic patients.

Through a mixed reality training system called HoloScenarios, students at Addenbrooke’s Hospital, part of the Cambridge University Hospitals NHS Foundation Trust, are now being trained via immersive holographic patient scenarios in a world first.

The new technology is aimed at providing a more affordable alternative to traditional immersive medical simulation training involving patient actors, which can demand a lot of resources.

Developers also hope the technology will help improve access to medical training worldwide.

 
© 2024 | Daniel Christian