Understanding the Overlap Between UDL and Digital Accessibility — from boia.org

Excerpt:

Implementing UDL with a Focus on Accessibility
UDL is a proven methodology that benefits all students, but when instructors embrace universal design, they need to consider how their decisions will affect students with disabilities.

Some key considerations to keep in mind:

  • Instructional materials should not require a certain type of sensory perception.
  • A presentation that includes images should have accurate alternative text (also called alt text) for those images.
  • Transcripts and captions should be provided for all audio content.
  • Color alone should not be used to convey information, since some students may not perceive color (or have different cultural understandings of colors).
  • Student presentations should also follow accessibility guidelines. This increases the student’s workload, but it’s an excellent opportunity to teach the importance of accessibility.
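From DSC: Some of the checks above can even be partially automated. As a minimal illustration (not a replacement for a full accessibility audit with a tool such as WAVE or axe), a short script using Python's standard-library HTML parser can flag images that lack alt text:

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collect <img> tags that lack a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            # An absent or empty alt attribute means a screen reader
            # has nothing to announce for this image.
            if not attrs.get("alt"):
                self.missing.append(attrs.get("src", "(no src)"))

def images_missing_alt(html):
    checker = MissingAltChecker()
    checker.feed(html)
    return checker.missing

page = '<img src="chart.png"><img src="logo.png" alt="Company logo">'
print(images_missing_alt(page))  # ['chart.png']
```

Note that `alt=""` is legitimate for purely decorative images, so a flagged image is a prompt for human review, not an automatic error.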
 

From DSC:
I received an email the other day re: a TytoCare Exam Kit. It said (with some emphasis added by me):

With a TytoCare Exam Kit connected to Spectrum Health’s 24/7 Virtual Urgent Care, you and your family can have peace of mind and a quick, accurate diagnosis and treatment plan whenever you need it without having to leave your home.

Your TytoCare Exam Kit will allow your provider to listen to your lungs, look inside your ears or throat, check your temperature, and more during a virtual visit.

Why TytoCare?

    • Convenience – With a TytoCare Exam Kit and our 24/7/365 On-Demand Virtual Urgent Care there is no drive, no waiting room, no waiting for an appointment.
    • Peace of Mind – Stop debating about whether symptoms are serious enough to do something about them.
    • Savings – Without the cost of gas or taking off work, you get the reliable exams and diagnosis you need. With a Virtual Urgent Care visit you’ll never pay more than $50. That’s cheaper than an in-person urgent care visit, but the same level of care.

From DSC:
It made me reflect on what #telehealth has morphed into these days. Then it made me wonder (again), what #telelegal might become in the next few years…? Hmmm. I hope the legal field can learn from the healthcare industry. It could likely bring more access to justice (#A2J), increased productivity (for several of the parties involved), as well as convenience, peace of mind, and cost savings.


 

 

Your iPhone Has 26 New Accessibility Tools You Shouldn’t Ignore — from ios.gadgethacks.com by Jovana Naumovski

Excerpt (emphasis DSC):

Magnifier has a new Door Detection option on iOS 16, which helps blind and low-vision users locate entryways when they arrive at their destination. The tool can tell you how far away the door is, if the door is open or closed, how to open it (push it, turn the knob, pull the handle, etc.), what any signs say (like room numbers), what any symbols mean (like people icons for restrooms), and more.

From DSC:
By the way, this kind of feature would be great to work in tandem with devices such as the Double Robotics Telepresence Robot — i.e., using Machine-to-Machine (M2M) communications to let the robot and automatic doors communicate with each other so that remote students can “get around on campus.”

 

It would be great to have M2M communications with mobile robots to get through doors and to open elevator doors as well
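From DSC: To make that idea slightly more concrete, here is a purely hypothetical sketch in Python. The message fields, the DoorController class, and the robot/door IDs are all invented for illustration — no real M2M protocol or vendor API (whatever Double Robotics or a door-hardware vendor actually exposes) is assumed:

```python
import json

class DoorController:
    """Hypothetical campus door that accepts open requests from
    authorized mobile robots. All names and fields are illustrative;
    no real M2M standard or message schema is implied."""
    def __init__(self, door_id, authorized_robots):
        self.door_id = door_id
        self.authorized = set(authorized_robots)
        self.is_open = False

    def handle_message(self, raw):
        # The robot sends a small JSON payload; the door checks the
        # sender against its allow-list before actuating.
        msg = json.loads(raw)
        if msg.get("action") == "open" and msg.get("robot_id") in self.authorized:
            self.is_open = True
            return json.dumps({"door_id": self.door_id, "status": "opening"})
        return json.dumps({"door_id": self.door_id, "status": "denied"})

door = DoorController("LIB-2F-ELEV", authorized_robots={"telepresence-07"})
request = json.dumps({"robot_id": "telepresence-07", "action": "open"})
print(door.handle_message(request))
```

In practice the interesting part would be everything this sketch leaves out — discovery, authentication, and safety interlocks — but the basic request/acknowledge exchange is the shape such a handshake would take.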

 


Along the lines of accessibility-related items, also relevant/see:

Microsoft introduces sign language for Teams — from inavateonthenet.net

Excerpt:

Microsoft has announced a sign language view for Teams to help signers and others who use sign language. The information on screen will be prioritised on centre stage, in a consistent location, throughout every meeting.

When sign language view is enabled, the prioritised video streams automatically appear at the right aspect ratio and at the highest available quality. Like pinning and captioning, sign language view is personal to each user and will not impact what others see in the meeting.


 

 
 

Virtual or in-person: The next generation of trial lawyers must be prepared for anything — from reuters.com by Stratton Horres and Karen L. Bashor

A view of the jury box (front), where jurors would sit and look towards the judge's chair (C), the witness stand (R), and the stenographer's desk (L) in court room 422 of the New York Supreme Court

Excerpt:

In this article, we will examine several key ways in which COVID-19 has changed trial proceedings, strategy and preparation and how mentoring programs can make a difference.

COVID-19 has shaken up the jury trial experience for both new and experienced attorneys. For those whose only trials have been conducted during COVID-19 restrictions and for everyone easing back into the in-person trials, these are key elements to keep in mind practicing forward. Firm mentoring programs should be considered to prepare the future generation of trial lawyers for both live and virtual trials.

From DSC:
I think law firms will need to expand the number of disciplines coming to their strategic tables. That is, as more disciplines are required to successfully practice law in the 21st century, more folks with technical backgrounds and/or abilities will be needed. Web front-end and back-end developers, User Experience Designers, Instructional Designers, Audio/Visual Specialists, and others come to mind. Such people can help develop the necessary spaces, skills, training, and mentoring programs mentioned in this article. As within our learning ecosystems, the efficient and powerful use of teams of specialists will deliver the best products and services.

 

How Long Should a Branching Scenario Be? — from christytuckerlearning.com by Christy Tucker
How long should a branching scenario be? Is 45 minutes too long? Is there an ideal length for a branching scenario?

Excerpt:

Most of the time, the branching scenarios and simulations I build are around 10 minutes long. Overall, I usually end up at 5-15 minutes for branching scenarios, with interactive video scenarios being at the longer end.

From DSC:
This makes sense to me, as (up to) 6 minutes turned out to be an ideal length for videos.

Excerpt from Optimal Video Length for Student Engagement — from blog.edx.org

The optimal video length is 6 minutes or shorter — students watched most of the way through these short videos. In fact, the average engagement time of any video maxes out at 6 minutes, regardless of its length. And engagement times decrease as videos lengthen: For instance, on average students spent around 3 minutes on videos that are longer than 12 minutes, which means that they engaged with less than a quarter of the content. Finally, certificate-earning students engaged more with videos, presumably because they had greater motivation to learn the material. (These findings appeared in a recent Wall Street Journal article, An Early Report Card on Massive Open Online Courses and its accompanying infographic.)

The take-home message for instructors is that, to maximize student engagement, they should work with instructional designers and video producers to break up their lectures into small, bite-sized pieces.
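From DSC: That "bite-sized pieces" advice can be sketched as a simple packing problem. Assuming a lecture already has natural section breaks, a greedy pass (a hypothetical helper, in Python) groups consecutive sections into clips that each stay at or under the roughly 6-minute engagement ceiling the edX data suggests:

```python
def chunk_segments(section_lengths, max_minutes=6.0):
    """Greedily pack consecutive lecture sections into clips,
    each no longer than max_minutes. A single section longer than
    the cap becomes its own clip (and should be re-edited)."""
    clips, current, total = [], [], 0.0
    for length in section_lengths:
        # Start a new clip if adding this section would exceed the cap.
        if current and total + length > max_minutes:
            clips.append(current)
            current, total = [], 0.0
        current.append(length)
        total += length
    if current:
        clips.append(current)
    return clips

# A 20-minute lecture with natural breaks at these section lengths:
print(chunk_segments([3.0, 2.5, 4.0, 1.5, 5.0, 4.0]))
# [[3.0, 2.5], [4.0, 1.5], [5.0], [4.0]]
```

The point is not the algorithm — it is that the cut points should follow the lecture's own conceptual breaks, with the 6-minute figure acting as a ceiling rather than a target.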

 

How Older Adults Access Resources Online — from blog.getsetup.io

Top Insights:

  • It’s clear that how older adults are using technology has changed. COVID has seen more and more older adults using a wide range of devices, which means there is no one-size-fits-all approach to this audience.
  • In the United States, desktop devices are still the most common form of media consumption for virtual learning and health.
  • But, mobile devices are still the dominant device for passive content consumption.
  • Consumption by different US states varies based on the quality of internet infrastructure and availability of newer devices.
  • In India and Australia, mobile devices outperform desktops for virtual learning.
  • Developing browser-first solutions for engagement is key to reaching a wider audience.
  • Applications and websites that aim to make the user experience as seamless as possible across multiple devices have a greater chance of being used and picked up more effectively by older adults of a variety of ages.
  • The variations in device types make it very challenging to build LIVE streaming technology that can scale across platforms.
  • Chrome is the dominant browser among the 55+ group, allowing sophisticated video-streaming applications to be built that were not possible with Internet Explorer.
  • While Zoom became the de facto standard for video-based sessions, older adult learners were 11x more likely to attend class in our browser Lounge than enter the Zoom classes.

Also relevant/see:

 

DSC: What?!?! How might this new type of “parallel reality” impact smart classrooms, conference rooms, and board rooms? And/or our living rooms? Will it help deliver more personalized learning experiences within a classroom?


 

What might the ramifications be for text-to-everything? [Christian]

From DSC:

  • We can now type in text to get graphics and artwork.
  • We can now type in text to get videos.
  • There are several tools to give us transcripts of what was said during a presentation.
  • We can search videos for spoken words and/or for words listed within slides within a presentation.

Allie Miller’s posting on LinkedIn (see below) pointed these things out as well — along with several other things.



This raises some ideas/questions for me:

  • What might the ramifications be in our learning ecosystems for these types of functionalities? What affordances are forthcoming? For example, a teacher, professor, or trainer could quickly produce several types of media from the same presentation.
  • What’s said in a videoconference or a webinar can already be captured, translated, and transcribed.
  • Or what’s said in a virtual courtroom, or in a telehealth-based appointment. Or perhaps, what we currently think of as a smart/connected TV will give us these functionalities as well.
  • How might this type of thing impact storytelling?
  • Will this help someone who prefers to soak in information via the spoken word, or via a podcast, or via a video?
  • What does this mean for Augmented Reality (AR), Mixed Reality (MR), and/or Virtual Reality (VR) types of devices?
  • Will this kind of thing be standard in the next version of the Internet (Web3)?
  • Will this help people with special needs — and way beyond accessibility-related needs?
  • Will data be next (instead of typing in text)?

Hmmm… interesting times ahead.

 

Coding Isn’t a Necessary Leadership Skill — But Digital Literacy Is — from hbr.org by Sophia Matveeva

Summary (emphasis DSC):

While most leaders now know that tech is a vital part of business, many are wondering what they really need to know about technology to succeed in the digital age. Coding bootcamps may appeal to some, but for many leaders, learning to code is simply not the best investment. It takes a long time to become a proficient coder, and it still doesn’t give you a holistic overview of how digital technologies get made. The good news is that most leaders don’t need to learn to code. Instead, they need to learn how to work with people who code. This means becoming a digital collaborator and learning how to work with developers, data scientists, user experience designers, and product managers — not completely retraining. The author presents four ways for non-technical leaders to become digital collaborators.

 

‘Hologram patients’ and mixed reality headsets help train UK medical students in world first — from uk.news.yahoo.com

Excerpts:

Medical students in Cambridge, England are experiencing a new way of “hands-on learning” – featuring the use of holographic patients.

Through a mixed reality training system called HoloScenarios, students at Addenbrooke’s Hospital, part of the Cambridge University Hospitals NHS Foundation Trust, are now being trained via immersive holographic patient scenarios in a world first.

The new technology is aimed at providing a more affordable alternative to traditional immersive medical simulation training involving patient actors, which can demand a lot of resources.

Developers also hope the technology will help improve access to medical training worldwide.

 

Ransomware is already out of control. AI-powered ransomware could be ‘terrifying.’ — from protocol.com by Kyle Alspach
Hiring AI experts to automate ransomware could be the next step for well-endowed ransomware groups that are seeking to scale up their attacks.

Excerpt:

In the perpetual battle between cybercriminals and defenders, the latter have always had one largely unchallenged advantage: The use of AI and machine learning allows them to automate a lot of what they do, especially around detecting and responding to attacks. This leg-up hasn’t been nearly enough to keep ransomware at bay, but it has still been far more than what cybercriminals have ever been able to muster in terms of AI and automation.

That’s because deploying AI-powered ransomware would require AI expertise. And the ransomware gangs don’t have it. At least not yet.

But given the wealth accumulated by a number of ransomware gangs in recent years, it may not be long before attackers do bring aboard AI experts of their own, prominent cybersecurity authority Mikko Hyppönen said.

Also re: AI, see:

Nuance partners with The Academy to launch The AI Collaborative — from artificialintelligence-news.com by Ryan Daws

Excerpt:

Nuance has partnered with The Health Management Academy (The Academy) to launch The AI Collaborative, an industry group focused on advancing healthcare using artificial intelligence and machine learning.

Nuance became a household name for creating the speech recognition engine behind Siri. In recent years, the company has put a strong focus on AI solutions for healthcare and is now a full-service partner of 77 percent of US hospitals and is trusted by over 500,000 physicians daily.

Inflection AI, led by LinkedIn and DeepMind co-founders, raises $225M to transform computer-human interactions — from techcrunch.com by Kyle Wiggers

Excerpts:

Inflection AI, the machine learning startup headed by LinkedIn co-founder Reid Hoffman and founding DeepMind member Mustafa Suleyman, has secured $225 million in equity financing, according to a filing with the U.S. Securities and Exchange Commission.

“[Programming languages, mice, and other interfaces] are ways we simplify our ideas and reduce their complexity and in some ways their creativity and their uniqueness in order to get a machine to do something,” Suleyman told the publication. “It feels like we’re on the cusp of being able to generate language to pretty much human-level performance. It opens up a whole new suite of things that we can do in the product space.”

 

Inside Microsoft’s new Inclusive Tech Lab — from engadget.com by C. Low; with thanks to Nick Floro on Twitter for some of these resources
“An embassy for people with disabilities.”

Increasing our Focus on Inclusive Technology — from mblogs.microsoft.com by Dave Dame

Excerpt:

In recent years, tied to Microsoft’s mission of empowering every person and organization on the planet to achieve more, teams from across Microsoft have launched several products and features to make technology more inclusive and accessible. [On May 10, 2022], as part of the 12th annual Microsoft Ability Summit, we celebrate a new and expanded Inclusive Tech Lab, powerful new software features, and are unveiling Microsoft adaptive accessories designed to give people with disabilities greater access to technology.

Microsoft’s Latest Hardware Is More Accessible and Customizable — from wired.com by Brenda Stolyar
The wireless system—a mouse, a button, and a hub—is designed to increase productivity for those with limited mobility.

Excerpt:

Microsoft is expanding its lineup of accessibility hardware. During its annual Ability Summit—an event dedicated to disability inclusion and accessibility—the company showed attendees some new PC hardware it has developed for users with limited mobility. Available later this year, the wireless system will consist of an adaptive mouse, a programmable button, and a hub to handle the connection to a Windows PC. Users set up the devices to trigger various keystrokes, shortcuts, and sequences. These new input devices can be used with existing accessories, and they can be further customized with 3D-printed add-ons. There are no price details yet.

Along these lines, also see:

  • 14 Equity Considerations for Ed Tech — from campustechnology.com by Reed Dickson
    Is the education technology in your online course equitable and inclusive of all learners? Here are key equity questions to ask when considering the pedagogical experience of an e-learning tool.
 

What Educators Need to Know About Assistive Tech Tools: Q&A with Texthelp CEO — from thejournal.com by Kristal Kuykendall and Texthelp CEO Martin McKay

Excerpts (emphasis DSC):

THE Journal: What are some examples of the types of assistive technology tools now available for K–12 schools?
McKay: There are a broad range of disabilities, and accordingly, a broad range of learning and access difficulties that assistive technology can help with. Just considering students with dyslexia — since that is the largest group among students who can benefit from assistive tech tools — the main problems they have are around reading comprehension and writing. Assistive technology can provide text-to-speech, talking dictionaries, picture dictionaries, and text simplification tools to help with comprehension.

It’s important that these tools work everywhere — not just in their word processor. Assistive technology must work in their learning management systems and in their online assessment environments, so that the student can use the assistive tech tools not only in class, but at home as they work on their homework, and perhaps most importantly on test day when they are using a secure assessment environment.

 

We need to use more tools — that go beyond screen sharing — where we can collaborate regardless of where we’re at. [Christian]

From DSC:
Seeing the functionality in Freehand — it makes me once again think that we need to use more tools where faculty/staff/students can collaborate with each other REGARDLESS of where they’re coming in to partake in a learning experience (i.e., remotely or physically/locally). This is also true for trainers and employees, teachers and students, as well as in virtual tutoring types of situations. We need tools that offer functionalities that go beyond screen sharing in order to collaborate, design, present, discuss, and create things.  (more…)

 
© 2022 | Daniel Christian