Understanding the Overlap Between UDL and Digital Accessibility — from boia.org

Excerpt:

Implementing UDL with a Focus on Accessibility
UDL is a proven methodology that benefits all students, but when instructors embrace universal design, they need to consider how their decisions will affect students with disabilities.

Some key considerations to keep in mind:

  • Instructional materials should not require a certain type of sensory perception.
  • A presentation that includes images should have accurate alternative text (also called alt text) for those images.
  • Transcripts and captions should be provided for all audio content.
  • Color alone should not be used to convey information, since some students may not perceive color (or have different cultural understandings of colors).
  • Student presentations should also follow accessibility guidelines. This increases the student’s workload, but it’s an excellent opportunity to teach the importance of accessibility.
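The alt-text point in the list above can be checked mechanically. Below is a minimal, illustrative sketch (not from the boia.org article) of a Python checker, using only the standard library, that flags `img` tags lacking a non-empty `alt` attribute; the sample HTML and class name are hypothetical:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collect the src of every <img> tag that lacks a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            alt = attr_map.get("alt")
            if not alt or not alt.strip():
                # No alt attribute, or an empty/whitespace-only one
                self.missing.append(attr_map.get("src", "?"))

checker = AltTextChecker()
checker.feed('<img src="chart.png" alt="Bar chart of scores"><img src="logo.png">')
print(checker.missing)  # → ['logo.png']
```

A checker like this catches missing alt text but not inaccurate alt text, which still requires human review.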
 

Resources for Computer Science Education Week (December 5-11, 2022) — with thanks to Mark Adams for these resources

Per Mark, here are a few resources that are intended to show students how computers can become part of their outside interests as well as their future careers.


Educating Engineers

Maryville University

Fullstack Academy

Also see:

Computer Science Education Week is December 5-11, 2022

 

MADE Podcast on Branching Scenarios — from christytuckerlearning.com by Christy Tucker
My interview for the MADE podcast on branching scenarios: when to use them, challenges, tools, planning, and getting started.

Excerpt:

MADE is the Media and Design in Education team for the University of Toronto. Inga Breede from this educational technology group recently interviewed me for their podcast. We talked about scenario-based learning and specifically about branching scenarios.

What we discussed
We covered several topics in our 20-minute conversation.

  • When should branching scenarios be used in learning experiences?
  • What are some of the challenges and limitations that designers typically come across when they’re building a branching scenario?
  • What are the key components to consider in the planning stage?
  • What are my favorite tools to use to build branching scenarios?
  • For learning designers who are interested in scenario building, where can they begin their journey of discovery?
 

TL;DR: Women prefer text contributions over talk in remote classes — from highereddive.com by Laura Spitalniak (BTW, TL;DR is short for “too long; didn’t read”)

Dive Brief (emphasis DSC):

  • Female students show a stronger preference for contributing to remote classes via text chat than their male counterparts, according to peer-reviewed research published in PLOS One, an open-access journal.
  • Researchers also found all students were more likely to use the chat function to support or amplify their peers’ comments than to diminish them.
  • Given these findings, the researchers suggested incorporating text chats into class discussions could boost female participation in large introductory science classrooms, where women are less likely to participate than men.
 

10 Must Read Books for Learning Designers — from linkedin.com by Amit Garg

Excerpt:

From the 45+ #books that I’ve read in the last 2 years, here are my top 10 recommendations for #learningdesigners or anyone in #learninganddevelopment

Speaking of recommended books (but from a more technical perspective this time), also see:

10 must-read tech books for 2023 — from enterprisersproject.com by Katie Sanders (Editorial Team)
Get new thinking on the technologies of tomorrow – from AI to cloud and edge – and the related challenges for leaders


 

This Copyright Lawsuit Could Shape the Future of Generative AI — from wired.com by Will Knight
Algorithms that create art, text, and code are spreading fast—but legal challenges could throw a wrench in the works.

Excerpts:

A class-action lawsuit filed in a federal court in California this month takes aim at GitHub Copilot, a powerful tool that automatically writes working code when a programmer starts typing. The coder behind the suit argues that GitHub is infringing copyright because it does not provide attribution when Copilot reproduces open-source code covered by a license requiring it.

Programmers have, of course, always studied, learned from, and copied each other’s code. But not everyone is sure it is fair for AI to do the same, especially if AI can then churn out tons of valuable code itself, without respecting the source material’s license requirements. “As a technologist, I’m a huge fan of AI,” Butterick says. “I’m looking forward to all the possibilities of these tools. But they have to be fair to everybody.”

Whatever the outcome of the Copilot case, Villa says it could shape the destiny of other areas of generative AI. If the outcome of the Copilot case hinges on how similar AI-generated code is to its training material, there could be implications for systems that reproduce images or music that matches the style of material in their training data. 

Also legal-related, see:


Also related to AI and art/creativity from Wired.com, see:


 
 

How AI will change Education: Part I | Transcend Newsletter #59 — from transcend.substack.com by Alberto Arenaza; with thanks to GSV’s Big 10 for this resource

Excerpt:

You’ve likely been reading for the last few minutes my arguments for why AI is going to change education. You may agree with some points, disagree with others…

Only, those were not my words.

An AI has written every single word in this essay up until here.

The only thing I wrote myself was the first sentence: Artificial Intelligence is going to revolutionize education. The images too, everything was generated by AI.

 

Using Virtual Reality for Career Training — from techlearning.com by Erik Ofgang
The Boys & Girls Clubs of Indiana have had success using virtual reality simulations to teach students about career opportunities.

A woman wearing a virtual reality headset occupies one half of the screen; the other half shows the virtual tools she is controlling.

Excerpts:

Virtual reality can help boost CTE programs and teach students about potential careers in fields they may know nothing about, says Lana Taylor from the Indiana Alliance of Boys & Girls Clubs of America.

One of those other resources has been a partnership with Transfer VR to provide students access to headsets to participate in career simulations that can give them a tactile sense of what working in certain careers might be like.

“Not all kids are meant to go to college, not all kids want to do it,” Taylor says. “So it’s important to give them some exposure to different careers and workforce paths that maybe they hadn’t thought of before.” 


AI interviews in VR prepare students for real jobseeking — from inavateonthenet.net

 

Get Ready to Relearn How to Use the Internet — from bloomberg.com by Tyler Cowen; with thanks to Sam DeBrule for this resource
Everyone knows that an AI revolution is coming, but no one seems to realize how profoundly it will change their day-to-day life.

Excerpts:

This year has brought a lot of innovation in artificial intelligence, which I have tried to keep up with, but too many people still do not appreciate the import of what is to come. I commonly hear comments such as, “Those are cool images, graphic designers will work with that,” or, “GPT-3 is cool, it will be easier to cheat on term papers.” And then they end by saying: “But it won’t change my life.”

This view is likely to be proven wrong — and soon, as AI is about to revolutionize our entire information architecture. You will have to learn how to use the internet all over again.

Change is coming. Consider Twitter, which I use each morning to gather information about the world. Less than two years from now, maybe I will speak into my computer, outline my topics of interest, and somebody’s version of AI will spit back to me a kind of Twitter remix, in a readable format and tailored to my needs.

The AI also will be not only responsive but active. Maybe it will tell me, “Today you really do need to read about Russia and changes in the UK government.” Or I might say, “More serendipity today, please,” and that wish would be granted.

Of course all this is just one man’s opinion. If you disagree, in a few years you will be able to ask the new AI engines what they think.

Some other recent items from Sam DeBrule include:

Natural Language Assessment: A New Framework to Promote Education — from ai.googleblog.com by Kedem Snir and Gal Elidan

Excerpt:

In this blog, we introduce an important natural language understanding (NLU) capability called Natural Language Assessment (NLA), and discuss how it can be helpful in the context of education. While typical NLU tasks focus on the user’s intent, NLA allows for the assessment of an answer from multiple perspectives. In situations where a user wants to know how good their answer is, NLA can offer an analysis of how close the answer is to what is expected. In situations where there may not be a “correct” answer, NLA can offer subtle insights that include topicality, relevance, verbosity, and beyond. We formulate the scope of NLA, present a practical model for carrying out topicality NLA, and showcase how NLA has been used to help job seekers practice answering interview questions with Google’s new interview prep tool, Interview Warmup.
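The excerpt describes assessing answers along dimensions such as topicality. As a rough, hypothetical illustration of what a bare-bones topicality score could look like (Google’s actual NLA models are neural and far more sophisticated; this bag-of-words cosine similarity is only my sketch of the underlying idea):

```python
import math
from collections import Counter

def topicality(answer: str, reference: str) -> float:
    """Toy topicality proxy: cosine similarity between bag-of-words vectors."""
    a = Counter(answer.lower().split())
    b = Counter(reference.lower().split())
    # Dot product over the shared vocabulary
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

score = topicality("the water cycle involves evaporation",
                   "evaporation drives the water cycle")
print(round(score, 2))  # → 0.8
```

A real NLA system would also judge relevance and verbosity, and would handle paraphrase and word order, which a bag-of-words model cannot.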

How AI could help translate extreme weather alerts — from axios.com by Ayurella Horn-Muller

Excerpt:

A startup that provides AI-powered translation is working with the National Weather Service to improve language translations of extreme weather alerts across the U.S.

Using GPT-3 to augment human intelligence — from escapingflatland.substack.com by Henrik Karlsson

Excerpt:

When I’ve been doing this with GPT-3, a 175 billion parameter language model, it has been uncanny how much it reminds me of blogging. When I’m writing this, from March through August 2022, large language models are not yet as good at responding to my prompts as the readers of my blog. But their capacity is improving fast and the prices are dropping.

Soon everyone can have an alien intelligence in their inbox.

 

How Long Should a Branching Scenario Be? — from christytuckerlearning.com by Christy Tucker
How long should a branching scenario be? Is 45 minutes too long? Is there an ideal length for a branching scenario?

Excerpt:

Most of the time, the branching scenarios and simulations I build are around 10 minutes long. Overall, I usually end up at 5-15 minutes for branching scenarios, with interactive video scenarios being at the longer end.

From DSC:
This makes sense to me, as (up to) 6 minutes turned out to be an ideal length for videos.

Excerpt from Optimal Video Length for Student Engagement — from blog.edx.org

The optimal video length is 6 minutes or shorter — students watched most of the way through these short videos. In fact, the average engagement time of any video maxes out at 6 minutes, regardless of its length. And engagement times decrease as videos lengthen: For instance, on average students spent around 3 minutes on videos that are longer than 12 minutes, which means that they engaged with less than a quarter of the content. Finally, certificate-earning students engaged more with videos, presumably because they had greater motivation to learn the material. (These findings appeared in a recent Wall Street Journal article, An Early Report Card on Massive Open Online Courses and its accompanying infographic.)
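The “less than a quarter” figure follows directly from the numbers in the excerpt; a one-line check (the variable names are mine):

```python
# Quick check of the edX arithmetic: roughly 3 minutes of average engagement
# on videos longer than 12 minutes means students engaged with less than a
# quarter of the content.
avg_engagement_min = 3.0   # average engagement time reported in the study
min_video_len_min = 12.0   # shortest video in the "longer than 12 minutes" bucket
fraction = avg_engagement_min / min_video_len_min
print(fraction)  # → 0.25 at the 12-minute boundary; strictly less for longer videos
```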

The take-home message for instructors is that, to maximize student engagement, they should work with instructional designers and video producers to break up their lectures into small, bite-sized pieces.

 

What might the ramifications be for text-to-everything? [Christian]

From DSC:

  • We can now type in text to get graphics and artwork.
  • We can now type in text to get videos.
  • There are several tools to give us transcripts of what was said during a presentation.
  • We can search videos for spoken words and/or for words listed within slides within a presentation.

Allie Miller’s posting on LinkedIn (see below) pointed these things out as well — along with several other things.



This raises some ideas/questions for me:

  • What might the ramifications be in our learning ecosystems for these types of functionalities? What affordances are forthcoming? For example, a teacher, professor, or trainer could quickly produce several types of media from the same presentation.
  • What’s said in a videoconference or a webinar can already be captured, translated, and transcribed.
  • Or what’s said in a virtual courtroom, or in a telehealth-based appointment. Or perhaps, what we currently think of as a smart/connected TV will give us these functionalities as well.
  • How might this type of thing impact storytelling?
  • Will this help someone who prefers to soak in information via the spoken word, or via a podcast, or via a video?
  • What does this mean for Augmented Reality (AR), Mixed Reality (MR), and/or Virtual Reality (VR) types of devices?
  • Will this kind of thing be standard in the next version of the Internet (Web3)?
  • Will this help people with special needs — and way beyond accessibility-related needs?
  • Will data be next (instead of typing in text)?

Hmmm…interesting times ahead.

 

3D Scanner Lets You Capture The Real World In VR — from vrscout.com by Kyle Melnick

Excerpt:

VR is about to get a whole lot more real.

Imagine having the power to capture your real-world environment as a hyper-realistic 3D model from the palm of your hand. Well, wonder no more, as peel 3d, a developer of professional-grade 3D scanners, today announced the launch of peel 3 and peel 3.CAD, two new easy-to-use 3D scanners capable of generating high-quality 3D scans for a wide variety of digital mediums, including VR and augmented reality (AR).

 

NASA & Google Partner To Create An AR Solar System — from vrscout.com by Kyle Melnick

Excerpt:

[On 9/14/22], Google Arts & Culture announced that it has partnered with NASA to further extend its virtual offerings with a new online exhibit featuring a collection of new-and-improved 3D models of our universe brought to life using AR technology.

These 3D models are for more than just entertainment, however. The virtual solar system exhibit features historical annotations that, when selected, display valuable information. Earth’s moon, for example, features landing sites for Apollo 11 and China’s Chang’e-4.

 

What if smart TVs’ new killer app was a next-generation learning-related platform? [Christian]

TV makers are looking beyond streaming to stay relevant — from protocol.com by Janko Roettgers and Nick Statt

A smart TV’s main menu listing what’s available, application-wise

Excerpts:

The search for TV’s next killer app
TV makers have some reason to celebrate these days: Streaming has officially surpassed cable and broadcast as the most popular form of TV consumption; smart TVs are increasingly replacing external streaming devices; and the makers of these TVs have largely figured out how to turn those one-time purchases into recurring revenue streams, thanks to ad-supported services.

What TV makers need is a new killer app. Consumer electronics companies have for some time toyed with the idea of using TV for all kinds of additional purposes, including gaming, smart home functionality and fitness. Ad-supported video took priority over those use cases over the past few years, but now, TV brands need new ways to differentiate their devices.

Turning the TV into the most useful screen in the house holds a lot of promise for the industry. To truly embrace this trend, TV makers might have to take some bold bets and be willing to push the envelope on what’s possible in the living room.

 


From DSC:
What if smart TVs’ new killer app was a next-generation learning-related platform? Could smart TVs deliver more blended/hybrid learning? Hyflex-based learning?

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV


Or what if smart TVs had to do with delivering telehealth-based apps? Or telelegal/virtual courts-based apps?


 

From DSC:
I need to learn a lot more about the benefits and the threats/downsides of blockchain-based technologies. Here are two different takes on whether blockchain should be implemented or not — though the second one may be a prime example of what the first article warns about (a scam, or hype meant to drive investment):

1) ‘Blockchain is bunk’: Crypto critics find their voice — from protocol.com by Benjamin Pimentel
John Stark, founding chief of the SEC’s Office of Internet Enforcement, is joining other experts in a major gathering of crypto skeptics.

Excerpts:

More than 20 years later, Stark is speaking out against what he considers a new wave of fraud. But this time he’s also taking aim at the technology that he says the scammers are using: cryptocurrencies and blockchain.

There are so many aspects to it, whether you’re talking about bitcoin and the greater fool theory, or the externalities of ransomware and drug dealing and human sex trafficking, or the financial systemic risk created by cryptocurrency or the real bluster, hype and nonsensical belief in blockchain. There’s so many reasons to be skeptical of cryptocurrency.

Seven or eight years ago, I was willing to entertain the thought that this might be something someday. But I’m just done with that. Because there came a point in my research, my writing and my experience, where I just felt like it’s really shameless.

From my perspective, I think the magnificence of this conference is that it’s the first in history to really present these experts who are going to come together for the first time in a way that presents every angle. Because it’s a multifaceted situation. There are hundreds of cryptocurrency conferences, and they are all these lovefests where everyone just sits around and talks about how great it is, because they’re all getting rich from it.

I don’t mean to sound cynical, but that’s the truth. That’s the reality. So it’s a bit of an antidote for that illness, which plagues the space right now.

 

2) The Biggest Change to our Financial System in 50 Years is Happening in November… — from medium.com by Richard Knight
International Payments are moving to the blockchain (ISO 20022)

Excerpt:

Many cryptocurrency investors are looking to reap massive returns as the 50-year-old international payments system moves onto the blockchain beginning in November 2022.

This is part of what is known as ISO 20022, a single standardization approach to be used by all financial standards initiatives. The new standardization is set to officially begin in November 2022 and be fully implemented by November 2025.

There are many cryptocurrencies that will be integrated into this new financial system, referred to as ISO 20022-compliant cryptocurrencies, and there is much speculation that these cryptocurrencies will soar in price once the standard is implemented.

 


Also relevant/see:


 
© 2022 | Daniel Christian