Reimagining the Future of Accessible Education with AI (Part 1) — from blogs.microsoft.com by Heather Dowdy

Reimagining the Future of Accessible Education with AI (Part 2) — from blogs.microsoft.com by Heather Dowdy
[In February 2021], the Microsoft AI for Accessibility program [called] for project proposals that advance AI-powered innovations in education to empower people with disabilities. Through a two-part series, we are highlighting projects we are supporting.

And an excerpt from Brad Smith’s (4/28/21) posting:

That’s why today we’re announcing the next phase of our accessibility journey, a new technology-led five-year commitment to create and open doors to bigger opportunities for people with disabilities. This new initiative will bring together every corner of Microsoft’s business with a focus on three priorities: Spurring the development of more accessible technology across our industry and the economy; using this technology to create opportunities for more people with disabilities to enter the workforce; and building a workplace that is more inclusive for people with disabilities.

 

Shhhh, they’re listening: Inside the coming voice-profiling revolution — from fastcompany.com by Joseph Turow
Marketers are on the verge of using AI-powered technology to make decisions about who you are and what you want based purely on the sound of your voice.

Excerpt:

When conducting research for my forthcoming book, The Voice Catchers: How Marketers Listen In to Exploit Your Feelings, Your Privacy, and Your Wallet, I went through over 1,000 trade magazine and news articles on the companies connected to various forms of voice profiling. I examined hundreds of pages of U.S. and EU laws applying to biometric surveillance. I analyzed dozens of patents. And because so much about this industry is evolving, I spoke to 43 people who are working to shape it.

It soon became clear to me that we’re in the early stages of a voice-profiling revolution that companies see as integral to the future of marketing.

From DSC:
Hhhhmmm….

 

Improving Digital Inclusion & Accessibility for Those With Learning Disabilities — by Meredith Kreisa
Learning disabilities must be taken into account during the digital design process to ensure digital inclusion and accessibility for the community. This comprehensive guide outlines common learning disabilities, associated difficulties, accessibility barriers and best practices, and more.

“Learning shouldn’t be something only those without disabilities get to do,” explains Seren Davies, a full stack software engineer and accessibility advocate who is dyslexic. “It should be for everyone. By thinking about digital accessibility, we are making sure that everyone who wants to learn can.”

“Learning disability” is a broad term used to describe several specific diagnoses. Dyslexia, dyscalculia, dysgraphia, nonverbal learning disorder, oral/written language disorder, and specific reading comprehension deficit are among the most prevalent.

 
An image of a barrier being torn down -- revealing a human mind behind it. This signifies the need to tear down any existing barriers that might hinder someone's learning experience.

 

Chrome now instantly captions audio and video on the web — from theverge.com by Ian Carlos Campbell
The accessibility feature was previously exclusive to some Pixel and Samsung Galaxy phones

Excerpt:

Google is expanding its real-time caption feature, Live Captions, from Pixel phones to anyone using a Chrome browser, as first spotted by XDA Developers. Live Captions uses machine learning to spontaneously create captions for videos or audio where none existed before, making the web that much more accessible for anyone who’s deaf or hard of hearing.

Chrome’s Live Captions worked on YouTube videos, Twitch streams, podcast players, and even music streaming services like SoundCloud in early tests run by a few of us here at The Verge. Google also says Live Captions will work with audio and video files stored on your hard drive if they’re opened in Chrome. However, Live Captions in Chrome only work in English, which is also the case on mobile.

 

Chrome now instantly captions audio and video on the web -- this is a screen capture showing the words being said in a digital audio-based file

 

Clicking this image will take you to the 2021 Tech Trends Report -- from the Future Today Institute

14th Annual Edition | 2021 Tech Trends Report — from the Future Today Institute

Our 2021 Tech Trends Report is designed to help you confront deep uncertainty, adapt and thrive. For this year’s edition, the magnitude of new signals required us to create 12 separate volumes, and each report focuses on a cluster of related trends. In total, we’ve analyzed nearly 500 technology and science trends across multiple industry sectors. In each volume, we discuss the disruptive forces, opportunities and strategies that will drive your organization in the near future.

Now, more than ever, your organization should examine the potential near and long-term impact of tech trends. You must factor the trends in this report into your strategic thinking for the coming year, and adjust your planning, operations and business models accordingly. But we hope you will make time for creative exploration. From chaos, a new world will come.

Some example items noted in this report:

  • Natural language processing is an area experiencing high interest, investment, and growth.
  • No-code or low-code systems are unlocking new use cases for businesses.
  • Amazon Web Services, Azure, and Google Cloud’s low-code and no-code offerings will trickle down to everyday people, allowing them to create their own artificial intelligence applications and deploy them as easily as they could a website.
  • The race is on to capture AI cloudshare—and to become the most trusted provider of AI on remote servers.
  • COVID-19 accelerated the use of AI in drug discovery last year. The first trial of an AI-discovered drug is underway in Japan.
 

Navigating website ADA compliance: ‘If you have videos that are not captioned, you’re a sitting duck’ — from abajournal.com by Matt Reynolds

Excerpts:

“If you have videos that are not captioned, you’re a sitting duck,” Goren said. “If you’re not encoding your pictures so that the blind person using a screen reader can understand what the picture is describing, that is a problem.”

Drop-down boxes on websites are “horrible for accessibility,” the attorney added, and it can be difficult for people with disabilities to navigate CAPTCHA (Completely Automated Public Turing test) technology to verify they are human.

“Trying to get people with voice dictation or even screen readers to figure out how to certify that they’re not a robot can be very complicated,” Goren said.

Also see:

Relevant Laws

Information re: Lawsuits

 

GPT-3: We’re at the very beginning of a new app ecosystem — from venturebeat.com by Dattaraj Rao

From DSC: NLP=Natural Language Processing (i.e., think voice-driven interfaces/interactivity).

Excerpt:

Despite the hype, questions persist as to whether GPT-3 will be the bedrock upon which an NLP application ecosystem will rest or if newer, stronger NLP models will knock it off its throne. As enterprises begin to imagine and engineer NLP applications, here’s what they should know about GPT-3 and its potential ecosystem.

 

The Top 5 Technologies for Innovation Leaders in Electronics and IT
Digital biomarkers, edge computing, and AI-enabled sensors are among the top technologies transforming the electronics landscape, according to Lux Research

BOSTON, MA, MARCH 4, 2021 – Digital transformation is one of the hottest topics in every industry, and as consumers are eagerly adopting increasing amounts of digital tech, electronics, and IT players have a unique opportunity to impact more industries than ever before. To help guide innovation in this booming space, Lux Research released its annual report, “Foresight 2021: Top Emerging Technologies to Watch.”

Lux’s annual report analyzes the digital transformation space, reviewing what topics emerged and which technologies gained traction during 2020. Its expert analysis of the hottest innovation topics and best tech startups found that the top five technologies electronics and IT innovation leaders should look to in the next decade are:

  1. AI-Enabled Sensors – Merging hardware and software to collect and validate critical data will be a major part of use cases from consumer wearables to medical devices to industrial IoT.
  2. Digital Biomarkers – Using data analytics to detect disease through changes in streams of data is a potent path for electronics companies to grab a piece of the healthcare pie.
  3. Natural Language Processing – Natural language processing (NLP) allows electronics and IT players to extend into new services and industry segments, either by using it to leverage their own data or by providing it as a service.
  4. Edge Computing – Limitations in bandwidth and latency are pushing critical computation away from the cloud and out to the edge, with rapidly improving hardware and software enablers.
  5. Synthetic Data – AI needs vast amounts of training data, and when real data is scarce, synthetic data can be a solution. It also boosts data diversity and privacy.

From DSC:
Some things to keep on your radar…

 

Look at the choice and control possibilities mentioned in the following excerpt from Immersive Reader in Canvas: Improve Reading Comprehension for All Students

When building courses and creating course content in Canvas, Immersive Reader lets users:

  • Change font size, text spacing, and background color
  • Split up words into syllables
  • Highlight verbs, nouns, adjectives, and sub-clauses
  • Choose between two fonts optimised to help with reading
  • Read text aloud
  • Change the speed of reading
  • Highlight sets of one, three, or five lines for greater focus
  • Select a word to see a related picture and hear the word read aloud as many times as necessary
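To make the “split up words into syllables” bullet concrete, here is a toy sketch of how such a feature might work. This is not Microsoft’s actual algorithm — just a naive heuristic that breaks a word after each vowel group:

```python
import re

# Toy sketch (NOT Microsoft's actual Immersive Reader algorithm): approximate
# a syllable split by grouping each run of consonants with the vowel group
# that follows it; trailing consonants attach to the final group.
def rough_syllables(word):
    """Return a naive syllable split; falls back to the whole word."""
    parts = re.findall(r"[^aeiouy]*[aeiouy]+(?:[^aeiouy]*$)?", word.lower())
    return parts or [word.lower()]
```

For example, `rough_syllables("reading")` yields `["rea", "ding"]`. A real implementation would use a pronunciation dictionary rather than a vowel-group heuristic, but the sketch shows the kind of text transform the feature performs.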

Also see:

All about the Immersive Reader — from education.microsoft.com

The Microsoft Immersive Reader is a free tool, built into Word, OneNote, Outlook, Office Lens, Microsoft Teams, Forms, Flipgrid, Minecraft Education Edition and the Edge browser, that implements proven techniques to improve reading and writing for people regardless of their age or ability.

 

When the Animated Bunny in the TV Show Listens for Kids’ Answers — and Answers Back — from edsurge.com by Rebecca Koenig

Excerpt:

Yet when this rabbit asks the audience, say, how to make a substance in a bottle less goopy, she’s actually listening for their answers. Or rather, an artificially intelligent tool is listening. And based on what it hears from a viewer, it tailors how the rabbit replies.

“Elinor can understand the child’s response and then make a contingent response to that,” says Mark Warschauer, professor of education at the University of California at Irvine and director of its Digital Learning Lab.

AI is coming to early childhood education. Researchers like Warschauer are studying whether and how conversational agent technology—the kind that powers smart speakers such as Alexa and Siri—can enhance the learning benefits young kids receive from hearing stories read aloud and from watching videos.

From DSC:
Looking at the above excerpt…what does this mean for elearning developers, learning engineers, learning experience designers, instructional designers, trainers, and more? It seems that, for such folks, several new tools to learn are showing up on the horizon.

 

From DSC:
Videoconferencing vendors out there:

  • Have you done any focus group tests — especially within education — with audio-based or digital video-based versions of emoticons?
    .
  • So instead of clicking on an emoticon as feedback, one could choose from sound effects or movie clips!
    .

To the videoconferencing vendors out there -- could you give us what DJs have access to?

I’m thinking here of the kinds of sound effects a DJ might have at their disposal. For example, someone tells a bad joke and you hear the drummer in the background:

Or a team loses the spelling-bee word, and hears:

Or a professor wants to get the class’s attention as they start their 6pm class:

I realize this could backfire big time…so it would have to be an optional feature that a teacher, professor, trainer, pastor, or a presenter could turn on and off. (Could be fun for podcasters too!)
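As a rough sketch of the idea, the feature could be little more than a mapping from reactions to sound clips, gated by the presenter’s on/off switch. Everything here — the reaction names, the file paths, the function — is hypothetical, not any real vendor’s API:

```python
# Hypothetical soundboard sketch -- none of these names exist in any real
# videoconferencing product. Each reaction maps to a clip, and the host's
# on/off switch gates playback so the feature can be disabled entirely.
SOUND_REACTIONS = {
    "rimshot": "sounds/rimshot.mp3",            # after a bad joke
    "sad_trombone": "sounds/sad_trombone.mp3",  # after a missed answer
    "school_bell": "sounds/school_bell.mp3",    # to start the 6pm class
}

def pick_reaction(name, host_enabled=True):
    """Return the clip to play, or None if unknown or the host turned it off."""
    if not host_enabled:
        return None
    return SOUND_REACTIONS.get(name)
```

The `host_enabled` flag is the key design point: the whole feature defaults to the presenter’s control, addressing the “could backfire big time” concern above.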

It seems to me that this could take
engagement to a whole new level!

 

From DSC:
For me the Socratic method is still a question mark, in terms of effectiveness. (I suppose it depends on who is wielding the tool and how it’s being utilized/implemented.)

Typically, you have one student — often standing up and/or in the spotlight — who is being drilled on something. That student could be calm and collected, and their cognitive processing could actually get a boost from the adrenaline.

But there are other students who dread being called upon in such a public — sometimes competitive — setting. Their cognitive processing could shut down or become greatly diminished.

Also, the professor is working with one student at a time — hopefully the other students are trying to address each subsequent question, but some students may tune out once they know it’s not their turn in the spotlight.

So I was wondering…could the Socratic method be used with each student at the same time? Could a polling-like tool be used in real-time to guide the discussion?

For example, a professor could start out with a pre-created poll and ask the question of all students. Then they could glance through the responses and even scan for some keywords (using their voice to drive the system and/or using a Ctrl+F / Command+F type of thing).

Then in real-time / on-the-fly, could the professor use their voice to create another poll/question — again for each student to answer — based on one of the responses? Again, each student must answer the follow up question(s).
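The keyword scan described above could be as simple as tallying the recurring terms across all students’ answers. This is a hypothetical sketch — not a real feature of Mentimeter or any other polling vendor:

```python
import re
from collections import Counter

# Hypothetical sketch of the real-time keyword scan described above -- not any
# vendor's actual feature. Tally recurring terms across all students' poll
# responses so the instructor can pick a follow-up question on the fly.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "is", "are",
             "it", "that", "in", "because", "i", "we"}

def top_keywords(responses, n=3):
    """Return the n most common non-stopword terms across all responses."""
    words = []
    for answer in responses:
        words += [w for w in re.findall(r"[a-z']+", answer.lower())
                  if w not in STOPWORDS]
    return [word for word, _count in Counter(words).most_common(n)]
```

A professor glancing at the top few keywords could then dictate the next poll question around whichever term the class converged on (or missed entirely).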

Are there any vendors out there working on something like this? Or have you tested the effectiveness of something like this?

Vendors: Can you help us create a voice-driven interface to offer the Socratic method to everyone to see if and how it would work? (Like a Mentimeter type of product on steroids…er, rather, using an AI-driven backend.)

Teachers, trainers, pastors, presenters could also benefit from something like this — as it could engage numerous people at once.

#Participation #Engagement #Assessment #Reasoning #CriticalThinking #CommunicationSkills #ThinkingOnOnesFeet #OnlineLearning #Face-to-Face #BlendedLearning #HybridLearning

Could such a method be used in language-related classes as well? In online-based tutoring?

 

Could AI-based techs be used to develop a “table of contents” for the key points within lectures, lessons, training sessions, sermons, & podcasts? [Christian]

From DSC:
As we move into 2021, the blistering pace of emerging technologies will likely continue. Technologies such as:

  • Artificial Intelligence (AI) — including technologies related to voice recognition
  • Blockchain
  • Augmented Reality (AR)/Mixed Reality (MR)/Virtual Reality (VR) and/or other forms of Extended Reality (XR)
  • Robotics
  • Machine-to-Machine Communications (M2M) / The Internet of Things (IoT)
  • Drones
  • …and other things will likely make their way into how we do many things (for better or for worse).

Along the positive lines of this topic, I’ve been reflecting upon how we might be able to use AI in our learning experiences.

For example, when teaching in face-to-face classrooms — and when a lecture recording app like Panopto is being used — could teachers/professors/trainers audibly “insert” main points along the way? Similar to what we do with Siri, Alexa, and other personal assistants (“Hey Siri, _____” or “Alexa, _____”).

Like an audible version of HTML -- using the spoken word to insert the main points of a presentation or lecture

(Image purchased from iStockphoto)

.

Pretend a lecture, lesson, or a training session is moving right along. Then the professor, teacher, or trainer says:

  • “Hey Smart Classroom, Begin Main Point.”
  • Then speaks one of the main points.
  • Then says, “Hey Smart Classroom, End Main Point.”

Like a verbal version of an HTML tag.

After the recording is done, the AI could locate and call out those “main points” — and create a table of contents for that lecture, lesson, training session, or presentation.

(Alternatively, one could insert a chime/bell/some other sound that the AI scans through later to build the table of contents.)

In the digital realm — say when recording something via Zoom, Cisco Webex, Teams, or another application — the same thing could apply. 

Wouldn’t this be great for quickly scanning podcasts for the main points? Or for quickly scanning presentations and webinars for the main points?
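The idea above can be sketched in a few lines of code. “Smart Classroom” and this transcript format are hypothetical — no vendor offers this today — but given a timestamped transcript from any captioning service, building the table of contents is just a matter of collecting the spans between the spoken markers:

```python
# Rough sketch of the "verbal HTML tag" idea above -- the transcript format is
# an assumption, not any vendor's real API. Given (seconds, caption_text)
# pairs, collect the spans spoken between the "Begin Main Point" and
# "End Main Point" markers into a table of contents.
def build_toc(transcript):
    """transcript: list of (seconds, caption_text) pairs."""
    toc, current = [], None
    for seconds, text in transcript:
        lowered = text.lower()
        if "begin main point" in lowered:
            current = (seconds, [])           # open a new main point
        elif "end main point" in lowered:
            if current is not None:
                start, words = current
                toc.append((start, " ".join(words)))
                current = None
        elif current is not None:
            current[1].append(text)           # text inside a marked span
    return toc
```

Each entry pairs a timestamp with the spoken main point, so a viewer could click straight to that moment in the recording — and the chime/bell alternative would work the same way, just matching an audio fingerprint instead of a phrase.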

Anyway, interesting times lie ahead!

 

 

Marian Croak, the inventor of VoIP, explains what it takes to innovate

The woman who created the technology behind internet calls explains what it takes to innovate — from bigthink.com by Gayle Markovitz
She’s the reason you’re able to work and chat from home.

Excerpt (emphasis DSC):

If you’ve ever wondered how a Zoom call works, you might want to ask Marian Croak, Vice-President of Engineering at Google.

This is the woman who invented “Voice over Internet Protocol”: the technology that has enabled entire workforces to continue to communicate and families and friends to remain in touch throughout 2020’s lockdowns – and inevitably beyond.

What can kids teach tech innovators?
Wonder and naivete are powerful tools. Croak argues that children have rich imaginations – which is the fuel of invention. “You need to be childlike. A little naïve and not inhibited by what’s possible.”

Matlali’s work with disadvantaged teenagers brings her directly into this world, where she sees that “children are passionate but hopeful for the future. For them, everything is possible. You want kids to have the imagination and passion for them to achieve their dreams.”

Croak said her motivation for 2021 was to keep her own childlike curiosity going, forgetting about her personal circumstances and focusing on the “painpoints”.

Also see:

Marian’s entry out at Wikipedia.org where it says:

She joined AT&T at Bell Labs in 1982.[4] She advocated for switching from wired phone technology to internet protocol.[2][5][6] She holds over two hundred patents, including over one hundred in relation to Voice over IP.[7] She pioneered the use of phone network services to make it easy for the public to donate to crisis appeals.[8][9] When AT&T partnered with American Idol to use a text message voting system, 22% of viewers learned to text to take part in the show.[10][11] She filed the patent for text-based donations to charity in 2005.[10] This capability revolutionised how people can donate money to charitable organisations:[12] for example, after the 2010 Haiti earthquake at least $22 million was pledged in this fashion.[13] She led the Domain 2.0 Architecture and managed over 2,000 engineers.[14][15]

 

 

From DSC:
An interesting, more positive use of AI here:

Deepdub uses AI to dub movies in the voice of famous actors — from protocol.com by Janko Roettgers
Fresh out of stealth, the startup is using artificial intelligence to automate the localization process for global streaming.

Excerpt:

Tel Aviv-based startup Deepdub wants to help streaming services accelerate this kind of international rollout by using artificial intelligence for their localization needs. Deepdub, which came out of stealth on Wednesday, has built technology that can translate a voice track to a different language, all while staying true to the voice of the talent. This makes it possible to have someone like Morgan Freeman narrate a movie in French, Italian or Russian without losing what makes Freeman’s voice special and recognizable.

From DSC:
A much more negative use of AI here:

A much more negative use of AI here...

 

 
© 2021 | Daniel Christian