Could AR and/or VR enable a massive 3D-based type of “Voicethread”? [Christian]

From DSC:
What if we could quickly submit items for a group to discuss, annotate, and respond to — using whichever media format is available/preferable for a person — like a massive 3D-based Voicethread? What if this type of discussion could be contributed to and accessed via Augmented Reality (AR) and/or via Virtual Reality (VR) types of devices?

It could be a new 3D format that a person could essentially blow up to the size of a billboard. Think “Honey, I Shrunk the Kids” type of stuff.

Input devices might include:

  • Augmented Reality (AR) glasses
  • Virtual Reality (VR) headsets/glasses
  • Scanners
  • Smartphones
  • Tablets
  • Desktops and laptops
  • SmartTVs
  • Other types of input devices

For example, a person could take a picture of a document or something else and then save that image in a new, vector-based file format. I say vector-based so that the image could be enlarged to the size of a billboard without losing any resolution (i.e., it wouldn’t become grainy; the image would remain crystal clear regardless of how big it gets). I’m thinking here along the lines of “Honey, I Shrunk the Kids!”
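A minimal sketch of why a vector format can scale like that: a vector file stores drawing instructions (shapes, paths, text) rather than a fixed grid of pixels, so rendering at billboard size simply re-executes the same instructions at new dimensions. The SVG snippet below is invented for illustration:

```python
# Minimal illustration of why a vector format scales without becoming
# grainy: the file stores drawing instructions, not pixel data, so a
# renderer can redraw it at any size with no resampling.

def make_svg(width: int, height: int) -> str:
    """Return the same drawing at an arbitrary output size.

    The viewBox fixes the coordinate system; width/height only tell
    the renderer how large to draw it.
    """
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" '
        f'width="{width}" height="{height}" viewBox="0 0 100 100">'
        '<circle cx="50" cy="50" r="40" fill="navy"/>'
        '<text x="50" y="55" text-anchor="middle" fill="white">Hi</text>'
        '</svg>'
    )

thumbnail = make_svg(100, 100)        # icon-sized
billboard = make_svg(14000, 14000)    # billboard-sized

# The drawing instructions are identical; only the output size differs.
assert thumbnail.replace('width="100" height="100"', '') == \
       billboard.replace('width="14000" height="14000"', '')
```

A raster photo, by contrast, would have to be resampled to grow, which is where the graininess comes from — so a capture pipeline like the one described would need to vectorize the photo first.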

Other thoughts here:

  • The files could be accessible online for attendees of classes or for audiences of presentations/webinars
  • The files could be displayed on the walls of learning/presentation spaces so they could be marked up
  • One could manipulate the 3D image when using a virtual/immersive environment
  • Users should be able to annotate those images and save their annotations and notes

A question for phase II:
Could this concept also be used if virtual courts take off?

Hmmmm…just thinking out loud.

 

Tools for Building Branching Scenarios — from christytuckerlearning.com by Christy Tucker
When would you use Twine, Storyline, Rise, or other tools for building branching scenarios? It depends on the project and goals.

Excerpt:

When would you use Twine instead of Storyline or other tools for building branching scenarios? An attendee at one of my recent presentations asked me why I’d bother creating something in Twine rather than just storyboarding directly in Storyline, especially if I was using character images. Whether I would use Twine, Storyline, Rise, or something else depends on the project and the goals.

 

10 Best Accessibility Tools For Designers — from hongkiat.com by Hongkiat Lim

Excerpt:

Today is the world of inclusive technology – websites, apps, and tech gadgets that are made for people with different kinds of abilities and inabilities. So when you’re designing a website, you include features that make your design accessible to as many people as possible. And this is where accessibility tools come into play.

Instead of creating everything from scratch, here’s a list of cool accessibility tools for designers. From creating color combinations according to WCAG standards to adding different reading modes to your website, these tools are must-haves for every designer. Take a look at the list to know about each tool in detail.

 

ARHT Media Inc.
Access The Power Of HoloPresence | Hologram Technology | Holographic Displays | Hologram Events

Excerpt:

ARHT Media mounted a holographic display at the event in Vancouver and had Sunlife’s executive captured and transmitted live as a hologram to the event from our Toronto studio. He was able to see the audience and interact with them in real time as if he was attending the event and present in the room.

ARHT Media Inc. launches the Holopod | A holograph of the head of global sales appears on stage.

From DSC:

  • Will holographic displays change what we mean by web-based collaboration?
  • Will this be a part of the future learning ecosystems inside of higher education? Inside of the corporate training world? Inside the world of events and webinars?
  • How will this type of emerging technology impact our communications? Levels of engagement?
  • Will this type of thing impact telehealth? Telelegal?
  • How will this impact storytelling? Media? Drama/acting? Games?
  • Will the price come down to where online and blended learning will use this type of thing?
  • Will streams of content be offered by holographic displays?

 

 

Learning from the Living [Class] Room: Adobe — via Behance — is already doing several pieces of this vision.

From DSC:
Talk about streams of content! Whew!

Streams of content

I received an email from Adobe entitled “This week on Adobe Live: Graphic Design.” (I subscribe to Adobe Creative Cloud.) Inside the email, I saw and clicked on the following:

Below are some of the screenshots I took of this incredible service! Wow!

 

Adobe -- via Behance -- offers some serious streams of content

 


 


From DSC:
So Adobe — via Behance — is already doing several pieces of the “Learning from the Living [Class] Room” vision. I knew of Behance…but I didn’t realize the magnitude of what they’ve been working on and what they’re currently delivering. Very sharp indeed!

Churches are doing this as well — one device shows the presenter/preacher (such as a larger “TV”), while attendees use a second device to communicate with each other in real time.


 

 

When the Animated Bunny in the TV Show Listens for Kids’ Answers — and Answers Back — from edsurge.com by Rebecca Koenig

Excerpt:

Yet when this rabbit asks the audience, say, how to make a substance in a bottle less goopy, she’s actually listening for their answers. Or rather, an artificially intelligent tool is listening. And based on what it hears from a viewer, it tailors how the rabbit replies.

“Elinor can understand the child’s response and then make a contingent response to that,” says Mark Warschauer, professor of education at the University of California at Irvine and director of its Digital Learning Lab.

AI is coming to early childhood education. Researchers like Warschauer are studying whether and how conversational agent technology—the kind that powers smart speakers such as Alexa and Siri—can enhance the learning benefits young kids receive from hearing stories read aloud and from watching videos.

From DSC:
Looking at the above excerpt…what does this mean for elearning developers, learning engineers, learning experience designers, instructional designers, trainers, and more? It seems that, for such folks, learning how to use several new tools is on the horizon.

 

Could AI-based techs be used to develop a “table of contents” for the key points within lectures, lessons, training sessions, sermons, & podcasts? [Christian]

From DSC:
As we move into 2021, the blistering pace of emerging technologies will likely continue. Technologies such as:

  • Artificial Intelligence (AI) — including technologies related to voice recognition
  • Blockchain
  • Augmented Reality (AR)/Mixed Reality (MR)/Virtual Reality (VR) and/or other forms of Extended Reality (XR)
  • Robotics
  • Machine-to-Machine Communications (M2M) / The Internet of Things (IoT)
  • Drones
  • …and more

These and other technologies will likely make their way into how we do many things (for better or for worse).

Along the positive lines of this topic, I’ve been reflecting upon how we might be able to use AI in our learning experiences.

For example, when teaching in face-to-face classrooms — and when a lecture recording app like Panopto is being used — could teachers/professors/trainers audibly “insert” main points along the way? Similar to what we do with Siri, Alexa, and other personal assistants (“Hey Siri, ____” or “Alexa, ____”).

Like an audible version of HTML -- using the spoken word to insert the main points of a presentation or lecture

(Image purchased from iStockphoto)


Pretend a lecture, lesson, or a training session is moving right along. Then the professor, teacher, or trainer says:

  • “Hey Smart Classroom, Begin Main Point.”
  • Then speaks one of the main points.
  • Then says, “Hey Smart Classroom, End Main Point.”

Like a verbal version of an HTML tag.

After the recording is done, the AI could locate and call out those “main points” — and create a table of contents for that lecture, lesson, training session, or presentation.

(Alternatively, one could insert a chime/bell/some other sound that the AI scans through later to build the table of contents.)
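Once a time-stamped transcript exists, the post-processing step could be quite simple. Here is a rough sketch of scanning such a transcript for the spoken markers and building a table of contents — the marker phrases, the `build_toc` function, and the sample transcript are all hypothetical, invented for illustration:

```python
import re

def build_toc(transcript):
    """Scan a time-stamped transcript for spoken marker phrases and
    return a table of contents of (seconds, main point) pairs.

    `transcript` is a list of (seconds, text) pairs, the kind of data
    many lecture-capture tools can export. Everything spoken between
    the "begin" and "end" markers is treated as one main point.
    """
    toc, current = [], None
    for seconds, text in transcript:
        if re.search(r"begin main point", text, re.IGNORECASE):
            current = (seconds, [])            # open a new main point
        elif re.search(r"end main point", text, re.IGNORECASE):
            if current is not None:
                toc.append((current[0], " ".join(current[1])))
                current = None                 # close the main point
        elif current is not None:
            current[1].append(text)            # collect the point's text
    return toc

# Hypothetical transcript of a short lecture segment:
transcript = [
    (0,   "Welcome everyone, let's get started."),
    (120, "Hey Smart Classroom, begin main point."),
    (125, "Photosynthesis converts light into chemical energy."),
    (140, "Hey Smart Classroom, end main point."),
]
print(build_toc(transcript))
```

The chime/bell alternative mentioned above would work the same way, just matching on an audio signature instead of a text pattern.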

In the digital realm — say when recording something via Zoom, Cisco Webex, Teams, or another application — the same thing could apply. 

Wouldn’t this be great for quickly scanning podcasts for the main points? Or for quickly scanning presentations and webinars for the main points?

Anyway, interesting times lie ahead!

 

 
 

Many students complain that online-based learning doesn’t engage them. Well, check this idea out! [Christian]


From DSC…by the way, another title for this blog could have been:

WIN-WIN situations all around! The Theatre Departments out there could collaborate with other depts/disciplines to develop highly engaging, digitally-based learning experiences! 


The future of drama and the theatre — as well as opera, symphonies, and more — will likely include a significant virtual/digital component. While it’s too early to say that theatre needs to reinvent itself and move “the stage” completely online, below is an idea that creates a variety of WIN-WIN situations for actors, actresses, stage designers, digital audio/video editors, fine artists, graphic designers, programmers, writers, journalists, web designers, and many others — including the relevant faculty members!

A new world of creative, engaging, active learning could open up if those involved with the Theatre Department could work collaboratively with students/faculty members from other disciplines. And in the end, the learning experiences and content developed would be highly engaging — and perhaps even profitable for the institutions themselves!

A WIN-WIN situation all around! The Theatre Department could collaborate with other depts/disciplines to develop highly engaging learning experiences!

[DC: I only slightly edited the above image from the Theatre Department at WMU]

 

Though the integration of acting with online-based learning materials is not a new idea, this post encourages a far more significant interdisciplinary collaboration between the Theatre Department and other departments/disciplines.

Consider a “Dealing with Bias in Journalism” type of topic, per a class in the Digital Media and Journalism Major.

  • Students from the Theatre Department work collaboratively with students from the most appropriate class(es) in the Communications Department to write the script, per the faculty members’ 30,000-foot instructions (not 1,000-foot-level/detailed instructions)
  • Writing the script would entail skills involved with research, collaboration, persuasion, creativity, communication, writing, and more
  • The Theatre students would ultimately act out the script — backed up by those learning about sound design, stage design, lighting design, costume design, etc.
  • Example scene: A woman is sitting at the kitchen table, eating breakfast and reading a posting — aloud — from a website that contains some serious bias that offends her. She threatens to cancel her subscription, contact the editor, and more. She calls out to her partner to explain why she’s so mad about the article.
  • Perhaps there could be two or more before/after scenes, given some changes in the way the article was written.
  • Once the scenes were shot, the digital video editors, programmers, web designers, and more could take that material and work with the faculty members to integrate those materials into an engaging, interactive, branching type of learning experience. 
  • From there, the finished product would be deployed by the relevant faculty members.

Scenes from WMU's Theatre Department

[DC: Above images from the Theatre Department at WMU]

 

Colleges and universities could share content with each other and/or charge others for their products/content/learning experiences. In the future, I could easily see a marketplace for buying and selling such engaging content. This could create a needed new source of revenue — especially given that those large auditoriums and theaters are likely not bringing in as much revenue as they typically would.

Colleges and universities could also try to reach out to local acting groups to get them involved and continue to create feeders into the world of work.

Other tags/categories could include:

  • MOOCs
  • Learning from the Living [Class] Room
  • Multimedia / digital literacy — tools from Adobe, Apple, and others.
  • Passions, participation, engagement, attention.
  • XR: Creating immersive, Virtual Reality (VR)-based experiences
  • Learning Experience Design
  • Interaction Design
  • Interface Design
  • …and more

Also see:

What improv taught me about failure: As a teacher and academic — from scholarlyteacher.com by Katharine Hubbard

what improv taught me about failure -as a teacher and academic

In improv, the only way to “fail” is to overthink and not have fun, which reframed what failure was on a grand scale and made me start looking at academia through the same lens. What I learned about failure through improv comes back to those same two core concepts: have fun and stop overthinking.

Students are more engaged when the professor is having fun with the materials (Keller, Hoy, Goetz, & Frenzel, 2016), and teaching is more enjoyable when we are having fun ourselves.

 

From edsurge.com today:

THOROUGHLY MODERN MEDIA: This spring, a college theater course about women’s voting rights aimed to produce a new play about the suffrage struggle. When the pandemic scuttled those plans, professors devised a new way to share suffragist stories by creating an interactive, online performance set in a virtual Victorian mansion. And their students were not the only ones exploring women’s voting rights as the country marks the 100th anniversary of the Nineteenth Amendment.

…which linked to:

The Pandemic Made Their Women’s Suffrage Play Impossible. But the Show Went on— Virtually — from edsurge.com by Rebecca Koenig

Excerpts:

Then the pandemic hit. Students left Radford and Virginia Tech. Live theater was canceled.

But the class wasn’t.

“Neither of us ever said, ‘Forget it,’” Hood says. “Our students, they all wanted to know, ‘What are we doing?’ We came to them with this insane idea.”

They would create an interactive, online production staged in a virtual Victorian mansion.

“Stage performance is different than film or audio. If you just have audio, you only have your voice. Clarity, landing sentences, really paying attention to the structure of a sentence, becomes important,” Nelson says. “Students got a broader sense of the skills and approaches to different mediums—a crash course.”

 

From DSC:
Talk about opportunities for interdisciplinary learning/projects!!!  Playwrights, directors, actors/actresses, set designers, graphic designers, fine artists, web designers and developers, interactivity/interface designers, audio designers, video editors, 3D animators, and more!!!

 

The performance website, “Women and the Vote,” premiered on May 18, 2020

 

Learning experience designs of the future!!! [Christian]

From DSC:
The article below got me to thinking about designing learning experiences and what our learning experiences might be like in the future — especially after we start pouring much more of our innovative thinking, creativity, funding, entrepreneurship, and new R&D into technology-supported/enabled learning experiences.


LMS vs. LXP: How and why they are different — from blog.commlabindia.com by Payal Dixit
LXPs are a rising trend in the L&D market. But will they replace LMSs soon? What do they offer more than an LMS? Learn more about LMS vs. LXP in this blog.

Excerpt (emphasis DSC):

Building on the foundation of the LMS, the LXP curates and aggregates content, creates learning paths, and provides personalized learning resources.

Here are some of the key capabilities of LXPs. They:

  • Offer content in a Netflix-like interface, with suggestions and AI recommendations
  • Can host any form of content – blogs, videos, eLearning courses, and audio podcasts to name a few
  • Offer automated learning paths that lead to logical outcomes
  • Support true uncensored social learning opportunities

So, this is about the LXP and what it offers; let’s now delve into the characteristics that differentiate it from the good old LMS.


From DSC:
Entities throughout the learning spectrum are going through many changes right now (i.e., people and organizations throughout K-12, higher education, vocational schools, and corporate training/L&D). If the first round of the Coronavirus continues to impact us, and then a second round comes later this year/early next year, I can easily see massive investments and interest in learning-related innovations. It will be in too many peoples’ and organizations’ interests not to.

I highlighted the bulleted points above because they are some of the components/features of the Learning from the Living [Class] Room vision that I’ve been working on.

Below are some technologies, visuals, and ideas to supplement my reflections. They might stir the imagination of someone out there who, like me, desires to make a contribution — and who wants to make learning more accessible, personalized, fun, and engaging. Hopefully, future generations will be able to have more choice, more control over their learning — throughout their lifetimes — as they pursue their passions.

Learning from the living class room

In the future, we may be using MR to walk around data and to better visualize data


AR and VR -- the future of healthcare

 

 

This magical interface will let you copy and paste the real world into your computer — from fastcompany.com by Mark Wilson
Wowza.

Excerpt:

But a new smartphone and desktop app coming from Cyril Diagne, an artist-in-residence at Google, greatly simplifies the process. Called AR Copy Paste, it can literally take a photo of a plant, mug, person, newspaper—whatever—on your phone, then, through a magical bit of UX, drop that object right onto your canvas in Photoshop on your main computer.

“It’s not so hard to open the scope of its potential applications. First of all, it’s not limited to objects, and it works equally well with printed materials like books, old photo albums, and brochures,” says Diagne. “Likewise, the principle is not limited to Photoshop but can be applied to any image, document, or video-editing software.”

 

 

https://arcopypaste.app/

 
 

10 ways COVID-19 could change office design — from weforum.org by Harry Kretchmer

“Think road markings, but for offices. From squash-court-style lines in lobbies to standing spots in lifts, and from circles around desks to lanes in corridors, the floors and walls of our offices are likely to be covered in visual instructions.”

 

From DSC:
After reading the above article and quote, I wondered…rather than marking up the floors and walls of spaces, perhaps Augmented Reality (AR) will provide such visual instructions for navigating office spaces in the future. Then I wondered about other spaces such as:

  • Learning spaces
  • Gyms
  • Retail outlets 
  • Grocery stores
  • Restaurants
  • Small businesses
  • Other common/public spaces
 

Concept3D introduces wheelchair wayfinding feature to support campus accessibility — from concept3d.echoscomm.com with thanks to Delaney Lanker for this resource
System makes wheelchair-friendly campus routes easy to find and follow

Excerpt:

Concept3D, a leader in creating immersive online experiences with 3D modeling, interactive maps and virtual tour software, today announced the launch of a new wheelchair wayfinding feature that adds a new level of accessibility to the company’s interactive map and tour platform.

With the new wheelchair accessible route functionality, Concept3D clients are able to offer a separate set of wayfinding routes specifically designed to identify the most efficient and easiest routes.

Concept3D’s wayfinding system uses a weighted algorithm to determine the most efficient route between start and end points, and the new system was enhanced to factor in routing variables like stairs, curb cuts, steep inclines, and other areas that may impact accessibility.
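Concept3D hasn’t published its algorithm, but a weighted routing system like the one described can be sketched with a classic shortest-path search (Dijkstra’s algorithm), where edges with stairs are simply excluded from wheelchair routes and steep inclines carry an extra cost. The campus graph, penalty values, and function name below are all invented for illustration:

```python
import heapq
import math

def shortest_route(graph, start, end, wheelchair=False):
    """Dijkstra's shortest path over a campus graph whose edges carry
    accessibility attributes.

    Edge format: graph[node] = [(neighbor, meters, has_stairs, incline_pct), ...]
    """
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == end:
            break
        if d > dist.get(node, math.inf):
            continue  # stale heap entry
        for nbr, meters, has_stairs, incline in graph.get(node, []):
            cost = meters
            if wheelchair:
                if has_stairs:
                    continue       # stairs are impassable for this mode
                if incline > 5:
                    cost *= 3      # steep incline: heavy penalty
            nd = d + cost
            if nd < dist.get(nbr, math.inf):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    if end not in dist:
        return None  # no accessible route exists
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Hypothetical campus: the direct path has stairs, the detour is step-free.
campus = {
    "Library":  [("Quad", 50, True, 0), ("RampPath", 70, False, 3)],
    "RampPath": [("Quad", 40, False, 2)],
    "Quad":     [],
}
print(shortest_route(campus, "Library", "Quad"))                   # direct, via stairs
print(shortest_route(campus, "Library", "Quad", wheelchair=True))  # step-free detour
```

The same weighting idea extends to the other variables mentioned (curb cuts, surface type, door widths) by adjusting each edge’s cost or passability.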

Also see:

Wayfinding :: Wheelchair Accessible Routes — from concept3d.com

https://www.concept3d.com/blog/higher-ed/wayfinding-wheelchair-accessible-routes

 
© 2021 | Daniel Christian