Many students complain that online-based learning doesn’t engage them. Well, check this idea out! [Christian]


From DSC…by the way, another title for this blog could have been:

WIN-WIN situations all around! The Theatre Departments out there could collaborate with other depts/disciplines to develop highly engaging, digitally-based learning experiences! 


The future of drama and the theatre — as well as opera, symphonies, and more — will likely include a significant virtual/digital component. While it’s too early to say that theatre needs to completely reinvent itself and move “the stage” entirely online, below is an idea that creates a variety of WIN-WIN situations for actors, actresses, stage designers, digital audio/video editors, fine artists, graphic designers, programmers, writers, journalists, web designers, and many others — including the relevant faculty members!

A new world of creative, engaging, active learning could open up if those involved with the Theatre Department could work collaboratively with students/faculty members from other disciplines. And in the end, the learning experiences and content developed would be highly engaging — and perhaps even profitable for the institutions themselves!

A WIN-WIN situation all around! The Theatre Department could collaborate with other depts/disciplines to develop highly engaging learning experiences!

[DC: I only slightly edited the above image from the Theatre Department at WMU]

 

Though the integration of acting with online-based learning materials is not a new idea, this post encourages a far more significant interdisciplinary collaboration between the Theatre Department and other departments/disciplines.

Consider a “Dealing with Bias in Journalism” type of topic, per a class in the Digital Media and Journalism Major.

  • Students from the Theatre Department work collaboratively with students from the most appropriate class(es) in the Communications Department to write the script, per the faculty members’ 30,000-foot instructions (not 1,000-foot-level/detailed instructions)
  • Writing the script would entail skills involved with research, collaboration, persuasion, creativity, communication, writing, and more
  • The Theatre students would ultimately act out the script — backed up by those learning about sound design, stage design, lighting design, costume design, etc.
  • Example scene: A woman is sitting at the kitchen table, eating breakfast and reading a posting — aloud — from a website that contains some serious bias that offends her. She threatens to cancel her subscription, contact the editor, and more. She calls out to her partner, explaining why she’s so mad about the article. 
  • Perhaps there could be two or more before/after scenes, given some changes in the way the article was written.
  • Once the scenes were shot, the digital video editors, programmers, web designers, and more could take that material and work with the faculty members to integrate those materials into an engaging, interactive, branching type of learning experience. 
  • From there, the finished product would be deployed by the relevant faculty members.
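The branching piece of the workflow above can be sketched with a very simple data structure. This is purely illustrative; the node names, file names, and structure below are invented for this example, not taken from any real course:

```python
# A minimal sketch of a branching learning scenario: each node holds a
# filmed clip, an optional prompt, and the choices that lead onward.

SCENARIO = {
    "intro": {
        "video": "kitchen_scene_biased_article.mp4",
        "prompt": "What should the editor have done differently?",
        "choices": {
            "Use neutral sourcing": "revised_scene",
            "Keep the original framing": "reader_cancels",
        },
    },
    "revised_scene": {
        "video": "kitchen_scene_revised_article.mp4",
        "prompt": None,
        "choices": {},
    },
    "reader_cancels": {
        "video": "reader_cancels_subscription.mp4",
        "prompt": None,
        "choices": {},
    },
}

def play(scenario, start, pick):
    """Walk the branching scenario, using `pick` to choose at each prompt.
    Returns the ordered list of video clips a learner would see."""
    path, node_id = [], start
    while node_id is not None:
        node = scenario[node_id]
        path.append(node["video"])
        node_id = pick(node) if node["choices"] else None
    return path

# A learner who picks the first listed choice at every branch:
first_choice = lambda node: next(iter(node["choices"].values()))
print(play(SCENARIO, "intro", first_choice))
```

Authoring tools such as Twine or H5P’s branching-scenario content type provide this kind of structure out of the box; the point here is simply how little structure is needed to wire filmed scenes into an interactive experience.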

Scenes from WMU's Theatre Department

[DC: Above images from the Theatre Department at WMU]

 

Colleges and universities could share content with each other and/or charge others for their products/content/learning experiences. In the future, I could easily see a marketplace for buying and selling such engaging content. This could create a needed new source of revenue — especially given that those large auditoriums and theaters are likely not bringing in as much revenue as they typically would. 

Colleges and universities could also try to reach out to local acting groups to get them involved and continue to create feeders into the world of work.

Other tags/categories could include:

  • MOOCs
  • Learning from the Living[Class]Room
  • Multimedia / digital literacy — tools from Adobe, Apple, and others.
  • Passions, participation, engagement, attention.
  • XR: Creating immersive, Virtual Reality (VR)-based experiences
  • Learning Experience Design
  • Interaction Design
  • Interface Design
  • …and more

Also see:

What improv taught me about failure: As a teacher and academic — from scholarlyteacher.com by Katharine Hubbard


In improv, the only way to “fail” is to overthink and not have fun, which reframed what failure was on a grand scale and made me start looking at academia through the same lens. What I learned about failure through improv comes back to those same two core concepts: have fun and stop overthinking.

Students are more engaged when the professor is having fun with the materials (Keller, Hoy, Goetz, & Frenzel, 2016), and teaching is more enjoyable when we are having fun ourselves.

 

From edsurge.com today:

THOROUGHLY MODERN MEDIA: This spring, a college theater course about women’s voting rights aimed to produce a new play about the suffrage struggle. When the pandemic scuttled those plans, professors devised a new way to share suffragist stories by creating an interactive, online performance set in a virtual Victorian mansion. And their students were not the only ones exploring women’s voting rights as the country marks the 100th anniversary of the Nineteenth Amendment.

…which linked to:

The Pandemic Made Their Women’s Suffrage Play Impossible. But the Show Went on— Virtually — from edsurge.com by Rebecca Koenig

Excerpts:

Then the pandemic hit. Students left Radford and Virginia Tech. Live theater was canceled.

But the class wasn’t.

“Neither of us ever said, ‘Forget it,’” Hood says. “Our students, they all wanted to know, ‘What are we doing?’ We came to them with this insane idea.”

They would create an interactive, online production staged in a virtual Victorian mansion.

“Stage performance is different than film or audio. If you just have audio, you only have your voice. Clarity, landing sentences, really paying attention to the structure of a sentence, becomes important,” Nelson says. “Students got a broader sense of the skills and approaches to different mediums—a crash course.”

 

From DSC:
Talk about opportunities for interdisciplinary learning/projects!!!  Playwrights, directors, actors/actresses, set designers, graphic designers, fine artists, web designers and developers, interactivity/interface designers, audio designers, video editors, 3D animators, and more!!!

 

The performance website, “Women and the Vote,” premiered on May 18, 2020.

 

Learning experience designs of the future!!! [Christian]

From DSC:
The article below got me to thinking about designing learning experiences and what our learning experiences might be like in the future — especially after we start pouring much more of our innovative thinking, creativity, funding, entrepreneurship, and new R&D into technology-supported/enabled learning experiences.


LMS vs. LXP: How and why they are different — from blog.commlabindia.com by Payal Dixit
LXPs are a rising trend in the L&D market. But will they replace LMSs soon? What do they offer more than an LMS? Learn more about LMS vs. LXP in this blog.

Excerpt (emphasis DSC):

Building on the foundation of the LMS, the LXP curates and aggregates content, creates learning paths, and provides personalized learning resources.

Here are some of the key capabilities of LXPs. They:

  • Offer content in a Netflix-like interface, with suggestions and AI recommendations
  • Can host any form of content – blogs, videos, eLearning courses, and audio podcasts to name a few
  • Offer automated learning paths that lead to logical outcomes
  • Support true uncensored social learning opportunities

So, this is about the LXP and what it offers; let’s now delve into the characteristics that differentiate it from the good old LMS.


From DSC:
Entities throughout the learning spectrum are going through many changes right now (i.e., people and organizations throughout K-12, higher education, vocational schools, and corporate training/L&D). If the first round of the Coronavirus continues to impact us, and then a second round comes later this year/early next year, I can easily see massive investments and interest in learning-related innovations. It will be in too many people’s and organizations’ interests not to.

I highlighted the bulleted points above because they are some of the components/features of the Learning from the Living [Class] Room vision that I’ve been working on.

Below are some technologies, visuals, and ideas to supplement my reflections. They might stir the imagination of someone out there who, like me, desires to make a contribution — and who wants to make learning more accessible, personalized, fun, and engaging. Hopefully, future generations will be able to have more choice, more control over their learning — throughout their lifetimes — as they pursue their passions.

Learning from the living class room

In the future, we may be using MR to walk around data and to better visualize data


AR and VR -- the future of healthcare

 

 

This magical interface will let you copy and paste the real world into your computer — from fastcompany.com by Mark Wilson
Wowza.

Excerpt:

But a new smartphone and desktop app coming from Cyril Diagne, an artist-in-residence at Google, greatly simplifies the process. Called AR Copy Paste, it can literally take a photo of a plant, mug, person, newspaper—whatever—on your phone, then, through a magical bit of UX, drop that object right onto your canvas in Photoshop on your main computer.

“It’s not so hard to open the scope of its potential applications. First of all, it’s not limited to objects, and it works equally well with printed materials like books, old photo albums, and brochures,” says Diagne. “Likewise, the principle is not limited to Photoshop but can be applied to any image, document, or video-editing software.”

 

 

https://arcopypaste.app/

 
 

10 ways COVID-19 could change office design — from weforum.org by Harry Kretchmer

“Think road markings, but for offices. From squash-court-style lines in lobbies to standing spots in lifts, and from circles around desks to lanes in corridors, the floors and walls of our offices are likely to be covered in visual instructions.”

 

From DSC:
After reading the above article and quote, I wondered: rather than marking up the floors and walls of spaces, perhaps Augmented Reality (AR) will provide such visual instructions for navigating office spaces in the future. Then I wondered about other spaces, such as:

  • Learning spaces
  • Gyms
  • Retail outlets 
  • Grocery stores
  • Restaurants
  • Small businesses
  • Other common/public spaces
 

Concept3D introduces wheelchair wayfinding feature to support campus accessibility — from concept3d.echoscomm.com with thanks to Delaney Lanker for this resource
System makes wheelchair friendly campus routes easy to find and follow

Excerpt:

Concept3D, a leader in creating immersive online experiences with 3D modeling, interactive maps and virtual tour software, today announced the launch of a new wheelchair wayfinding feature that adds a new level of accessibility to the company’s interactive map and tour platform.

With the new wheelchair accessible route functionality, Concept3D clients are able to offer a separate set of wayfinding routes specifically designed to identify the most efficient and easiest routes.

Concept3D’s wayfinding system uses a weighted algorithm to determine the most efficient route between start and end points, and the new system was enhanced to factor in routing variables like stairs, curb cuts, steep inclines, and other areas that may impact accessibility.
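Concept3D hasn’t published its implementation details, but the general idea of a weighted, accessibility-aware route search can be sketched with a standard shortest-path algorithm. The edge flags and the weight multiplier below are assumptions made purely for illustration:

```python
import heapq

def shortest_route(graph, start, end, wheelchair=False):
    """Dijkstra over a campus path graph. `graph` maps each node to a
    list of (neighbor, cost, flags) tuples. In wheelchair mode, edges
    flagged as stairs are skipped entirely, and steep edges are
    penalized (discouraged but not forbidden)."""
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == end:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, cost, flags in graph.get(node, []):
            if wheelchair and "stairs" in flags:
                continue                  # impassable by wheelchair
            if wheelchair and "steep" in flags:
                cost *= 3                 # illustrative penalty factor
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    if end not in dist:
        return None  # no accessible route exists
    path, n = [end], end
    while n != start:
        n = prev[n]
        path.append(n)
    return list(reversed(path))
```

With a tiny two-route graph (a short stairway versus a longer ramp), the same function returns the stairs for an unrestricted walker and the ramp in wheelchair mode.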

Also see:

Wayfinding :: Wheelchair Accessible Routes — from concept3d.com

https://www.concept3d.com/blog/higher-ed/wayfinding-wheelchair-accessible-routes

 

Below are some thoughts from Michal Borkowski, CEO and Co-Founder of Brainly, regarding some emerging edtech-related trends for 2020.

2020 is coming at us fast, and it’s bringing a haul of exciting EdTech trends along with it. A new decade means new learning opportunities created to cater to the individual rather than a collective hive. There are more than one or two ways of learning — by not embracing all of the ways to teach, we risk leaving students behind in subjects they may need extra help in.

Michal Borkowski, CEO and Co-Founder of Brainly – the world’s largest online learning platform with 150 million monthly users in 35 countries – has his finger on the pulse of global education trends. He was selected to speak this year at Disrupt Berlin, the world’s leading venue for debuting revolutionary startups and technologies, and he has some insightful predictions on the emerging trends 2020 will bring in EdTech.

  1. Customized learning via AI
    AI systems with customizable settings will allow students to learn based on their personal strengths and weaknesses. This stylized learning takes into account that not every student absorbs information in the same way. In turn, it helps teachers understand what each individual student needs, spend more time teaching new material, and receive higher classroom results.
  2. Responsible technological integration
    Students today are more fluent in technology than older generations. Integrating tech through digital resources, textbooks, game-style lessons, and interactive learning are efficient ways to captivate students and teach them responsible usage of technology.
  3. Expansive peer-to-peer learning
    Allowing students access to a platform where they can view other students’ interpretations of the material — where one particular perspective may help the information click — is invaluable. These learning platforms break down barriers, encourage active learning anywhere, and cultivate a sense of community between students all over the world.
  4. From STEM to STEAM
    Science, technology, engineering, and math curriculums have been the major educational focus of the decade, but 2020 will see more integration of classical liberal arts into educational modules, turning STEM into STEAM. Incorporating the arts into a tech-based curriculum enables students to create important connections to the world and allows them to have a well-rounded education.
  5. Options in learning environments
    Who says learning has to take place in a classroom? Advancements in EdTech have provided new and exciting avenues where educators can experiment. Grade and high school level teachers are experimenting with webinars, online tutorials, and other forms of tech-based instruction to connect to students in environments where they are more inclined to learn.

2020 is the year that education forms itself around each student’s individual needs rather than leaving kids behind who don’t benefit from traditional instruction.

 

XR for Teaching and Learning — from educause

Key Findings

  • XR technologies are being used to achieve learning goals across domains.
  • Effective pedagogical uses of XR technologies fall into one of three large categories:
    1. Supporting skills-based and competency-based teaching and learning, such as nursing education, where students gain practice by repeating tasks.
    2. Expanding the range of activities with which a learner can gain hands-on experience—for example, by enabling the user to interact with electrons and electromagnetic fields. In this way, XR enables some subjects traditionally taught as abstract knowledge, using flat media such as illustrations or videos, to be taught as skills-based.
    3. Experimenting by providing new functionality and enabling new forms of interaction. For example, by using simulations of materials or tools not easily available in the physical world, learners can explore the bounds of what is possible both in their discipline and with the XR technology itself.
  • Integration of XR into curricula faces two major challenges: time and skills.
  • The adoption of XR in teaching has two major requirements: the technology must fit into instructors’ existing practices, and the cost cannot be significantly higher than that of the alternatives already in use.
  • The effectiveness of XR technologies for achieving learning goals is influenced by several factors: fidelity, ease of use, novelty, time-on-task, and the spirit of experimentation.

XR for Teaching and Learning

 

Amazon’s new Fire TV Blaster works with Echo to control your TV, soundbar, cable box and more — from techcrunch.com by Sarah Perez

Excerpt:

Amazon already offers Alexa voice control to TV owners through its Fire TV devices, by way of a voice remote or by pairing an Echo device with a Fire TV, for hands-free voice commands. Now, it’s introducing a new device, the Fire TV Blaster, which extends that same hands-free voice control to your TV itself and other TV devices — like your soundbar, cable box, or A/V Receiver.

That means you’ll be able to say things like “Alexa, turn off the TV,” or “Alexa, switch to HDMI 1 on TV.” You can also control the volume and the playback.

And if you have other TV devices, you can control them hands-free as well, by saying things like “Alexa, turn up the soundbar volume,” or “Alexa tune to ESPN on cable.”

 

From DSC:
How might such Natural Language Processing (NLP) / voice recognition technologies impact future learning experiences and learning spaces? Will we enjoy more hands-free navigation around some cloud-based learning-related content? Will faculty members be able to use their voice to do the things that a Crestron Media Controller does today?

Parenthetically…when will we be able to speak to our routers (to shut off the kids’ internet connections/devices for example) and/or speak to our thermostats (to set the temperature(s) quickly for the remainder of the day)?

 

 

Turn Your Home Into A Mixed Reality Record Store With Spotify On Magic Leap — from vrscout.com by Kyle Melnick

“We see a time in the not too distant future when spatial computing will extend into the wider world of podcasts, audiobooks and storytelling,” continues Magic Leap in their post. “And this is just the beginning. The launch of Spotify marks an evolution in the way you can see, hear and experience the bands and artists that you love. It’s an exciting time for music. For musicians. For developers. And for music-lovers the world over.”

 

Six Examples of Augmented Reality in Performance Support — from learningsolutionsmag.com by Jeff Batt

Excerpt:

What do you think of when you hear the words “performance support”? We may all have our interpretation, but for me, performance support gives the learner instruction on how to perform actions while on the job and in the moment they need it. That has AR written all over it.

AR in performance support puts learning content in the context of what the learner is seeing. It enhances the real world, and it guides the learner through steps they need to take in the real world and even allows them to explore content they cannot easily access otherwise.

That has enormous potential for the learning and development space and gets me super excited about using AR. In this article, I’ll explore some possible scenarios for using AR in your performance support materials.

 
 

From DSC:
I have often reflected on differentiation or what some call personalized learning and/or customized learning. How does a busy teacher, instructor, professor, or trainer achieve this, realistically?

It’s very difficult and time-consuming to do for sure. But it also requires a team of specialists to achieve such a holy grail of learning — as one person can’t know it all. That is, one educator doesn’t have the necessary time, skills, or knowledge to address so many different learning needs and levels!

  • Think of different cognitive capabilities — from students that have special learning needs and challenges to gifted students
  • Or learners that have different physical capabilities or restrictions
  • Or learners that have different backgrounds and/or levels of prior knowledge
  • Etc., etc., etc.

Educators and trainers have so many things on their plates that it’s very difficult to come up with _X_ lesson plans/agendas/personalized approaches, etc. On the other side of the table, how do students from a vast array of backgrounds and cognitive skill levels get the main points of a chapter or piece of text? How can they self-select the level of difficulty and/or start at a “basics” level and work their way up to harder/more detailed levels if they can cognitively handle that level of detail/complexity? Conversely, how do I as a learner get the boiled-down version of a piece of text?

Well… just as with the flipped classroom approach, I’d like to suggest that we flip things a bit and enlist teams of specialists at the publishers to fulfill this need. Move things to the content creation end — not so much at the delivery end of things. Publishers’ teams could play a significant, hugely helpful role in providing customized learning to learners.

Some of the ways that this could happen:

Use an HTML-like language when writing a textbook, such as:

<MainPoint> The text for the main point here. </MainPoint>

<SubPoint1>The text for the subpoint 1 here.</SubPoint1>

<DetailsSubPoint1>More detailed information for subpoint 1 here.</DetailsSubPoint1>

<SubPoint2>The text for the subpoint 2 here.</SubPoint2>

<DetailsSubPoint2>More detailed information for subpoint 2 here.</DetailsSubPoint2>

<SubPoint3>The text for the subpoint 3 here.</SubPoint3>

<DetailsSubPoint3>More detailed information for subpoint 3 here.</DetailsSubPoint3>

<SummaryOfMainPoints>A list of the main points that a learner should walk away with.</SummaryOfMainPoints>

<BasicsOfMainPoints>Here is a listing of the main points, but put in alternative words and more basic ways of expressing those main points. </BasicsOfMainPoints>

<Conclusion> The text for the concluding comments here.</Conclusion>

 

<BasicsOfMainPoints> could also be called <AlternativeExplanations>
Bottom line: This tag would present the main points in very straightforward terms.

Another tag could address how this topic/chapter is relevant:

<RealWorldApplication>This short paragraph should illustrate real-world examples of this particular topic. Why does this topic matter? How is it relevant?</RealWorldApplication>

 

On the students’ end, they could use an app that works with such tags to allow a learner to quickly see/review the different layers. That is:

  • Show me just the main points
  • Then add on the sub points
  • Then fill in the details
    OR
  • Just give me the basics via alternative ways of expressing these things. I won’t remember all the details. Put things in easy-to-understand wording/ideas.
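As a rough sketch of what such an app might do under the hood, the layer tags could simply be filtered on demand. The tag names follow the examples above; a production textbook format would want a real schema and parser rather than regular expressions:

```python
import re

def extract_layers(tagged_text, layers):
    """Return the text of each requested tag, in document order."""
    pattern = re.compile(
        r"<(%s)>(.*?)</\1>" % "|".join(layers), re.DOTALL)
    return [m.group(2).strip() for m in pattern.finditer(tagged_text)]

# A toy chapter fragment using the tags proposed above:
chapter = """
<MainPoint>Bias erodes reader trust.</MainPoint>
<SubPoint1>Word choice signals a viewpoint.</SubPoint1>
<DetailsSubPoint1>Loaded verbs and selective quotes both skew a story.</DetailsSubPoint1>
<BasicsOfMainPoints>If an article takes sides, readers stop trusting it.</BasicsOfMainPoints>
"""

# "Show me just the main points":
print(extract_layers(chapter, ["MainPoint"]))
# "Then add on the sub points":
print(extract_layers(chapter, ["MainPoint", "SubPoint1"]))
```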

 

It’s like the layers of a Microsoft HoloLens app of the human anatomy:

 

Or it’s like different layers of a chapter of a “textbook” — so a learner could quickly collapse/expand the text as needed:

 

This approach could be helpful at all kinds of learning levels. For example, it could be very helpful for law school students to obtain outlines for cases or for chapters of information. Similarly, it could be helpful for dental or medical school students to get the main points as well as detailed information.

Also, as Artificial Intelligence (AI) grows, the system could check a learner’s cloud-based learner profile to see their reading level or prior knowledge, any IEP’s on file, their learning preferences (audio, video, animations, etc.), etc. to further provide a personalized/customized learning experience. 
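To illustrate (with invented profile field names), the learner-profile check might boil down to a simple mapping from profile attributes to the layers described above:

```python
# Illustrative only: how a cloud-based learner profile might select
# which tagged layers to show. The profile keys are hypothetical.

def layers_for(profile):
    """Pick which tagged layers of a chapter to present to this learner."""
    if profile.get("reading_level", "standard") == "basic":
        return ["BasicsOfMainPoints", "SummaryOfMainPoints"]
    layers = ["MainPoint", "SubPoint1", "SubPoint2", "SubPoint3"]
    if profile.get("prior_knowledge") == "high":
        layers += ["DetailsSubPoint1", "DetailsSubPoint2", "DetailsSubPoint3"]
    return layers

print(layers_for({"reading_level": "basic"}))
print(layers_for({"prior_knowledge": "high"}))
```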

To recap:

  • “Textbooks” continue to be created by teams of specialists, but add specialists with knowledge of students with special needs as well as for gifted students. For example, a team could have experts within the field of Special Education to help create one of the overlays/or filters/lenses — i.e., to reword things. If the text was talking about how to hit a backhand or a forehand, the alternative text layer could be summed up to say that tennis is a sport…and that a sport is something people play. On the other end of the spectrum, the text could dive deeply into the various grips a person could use to hit a forehand or backhand.
  • This puts the power of offering differentiation at the point of content creation/development (differentiation could also be provided for at the delivery end, but again, time and expertise are likely not going to be there)
  • Publishers create “overlays” or various layers that can be turned on or off by the learners
  • Can see whole chapters or can see main ideas, topic sentences, and/or details. Like HTML tags for web pages.
  • Can instantly collapse chapters to main ideas/outlines.

 

 

Microsoft’s AI-powered Sketch2Code builds websites and apps from drawings — from alphr.com by Bobby Hellard
Released on GitHub, Microsoft’s AI-powered developer tool can shave hours off web and app building

Excerpt:

Microsoft has developed an AI-powered web design tool capable of turning sketches of websites into functional HTML code.

Called Sketch2Code, Microsoft AI’s senior product manager Tara Shankar Jana explained that the tool aims to “empower every developer and every organisation to do more with AI”. It was born out of the “intrinsic” problem of sending a picture of a wireframe or app designs from whiteboard or paper to a designer to create HTML prototypes.

 

 

 

 

 


© 2020 | Daniel Christian