Shhhh, they’re listening: Inside the coming voice-profiling revolution — from fastcompany.com by Joseph Turow
Marketers are on the verge of using AI-powered technology to make decisions about who you are and what you want based purely on the sound of your voice.

Excerpt:

When conducting research for my forthcoming book, The Voice Catchers: How Marketers Listen In to Exploit Your Feelings, Your Privacy, and Your Wallet, I went through over 1,000 trade magazine and news articles on the companies connected to various forms of voice profiling. I examined hundreds of pages of U.S. and EU laws applying to biometric surveillance. I analyzed dozens of patents. And because so much about this industry is evolving, I spoke to 43 people who are working to shape it.

It soon became clear to me that we’re in the early stages of a voice-profiling revolution that companies see as integral to the future of marketing.

From DSC:
Hhhhmmm….

 

Improving Digital Inclusion & Accessibility for Those With Learning Disabilities — from inclusionhub.com by Meredith Kreisa
Learning disabilities must be taken into account during the digital design process to ensure digital inclusion and accessibility for the community. This comprehensive guide outlines common learning disabilities, associated difficulties, accessibility barriers and best practices, and more.

“Learning shouldn’t be something only those without disabilities get to do,” explains Seren Davies, a full stack software engineer and accessibility advocate who is dyslexic. “It should be for everyone. By thinking about digital accessibility, we are making sure that everyone who wants to learn can.”

“Learning disability” is a broad term used to describe several specific diagnoses. Dyslexia, dyscalculia, dysgraphia, nonverbal learning disorder, oral/written language disorder, and specific reading comprehension deficit are among the most prevalent.

An image of a barrier being torn down -- revealing a human mind behind it. This signifies the need to tear down any existing barriers that might hinder someone's learning experience.

 

Chrome now instantly captions audio and video on the web — from theverge.com by Ian Carlos Campbell
The accessibility feature was previously exclusive to some Pixel and Samsung Galaxy phones

Excerpt:

Google is expanding its real-time caption feature, Live Captions, from Pixel phones to anyone using a Chrome browser, as first spotted by XDA Developers. Live Captions uses machine learning to spontaneously create captions for videos or audio where none existed before, making the web that much more accessible for anyone who’s deaf or hard of hearing.

Chrome’s Live Captions worked on YouTube videos, Twitch streams, podcast players, and even music streaming services like SoundCloud in early tests run by a few of us here at The Verge. Google also says Live Captions will work with audio and video files stored on your hard drive if they’re opened in Chrome. However, Live Captions in Chrome only work in English, which is also the case on mobile.

 

Chrome now instantly captions audio and video on the web -- this is a screen capture showing the words being said in a digital audio-based file

 

Clicking this image will take you to the 2021 Tech Trends Report -- from the Future Today Institute

14th Annual Edition | 2021 Tech Trends Report — from the Future Today Institute

Our 2021 Tech Trends Report is designed to help you confront deep uncertainty, adapt and thrive. For this year’s edition, the magnitude of new signals required us to create 12 separate volumes, and each report focuses on a cluster of related trends. In total, we’ve analyzed nearly 500 technology and science trends across multiple industry sectors. In each volume, we discuss the disruptive forces, opportunities and strategies that will drive your organization in the near future.

Now, more than ever, your organization should examine the potential near and long-term impact of tech trends. You must factor the trends in this report into your strategic thinking for the coming year, and adjust your planning, operations and business models accordingly. But we hope you will make time for creative exploration. From chaos, a new world will come.

Some example items noted in this report:

  • Natural language processing is an area experiencing high interest, investment, and growth.
  • No-code or low-code systems are unlocking new use cases for businesses.
  • Amazon Web Services, Azure, and Google Cloud’s low-code and no-code offerings will trickle down to everyday people, allowing them to create their own artificial intelligence applications and deploy them as easily as they could a website.
  • The race is on to capture AI cloudshare—and to become the most trusted provider of AI on remote servers.
  • COVID-19 accelerated the use of AI in drug discovery last year. The first trial of an AI-discovered drug is underway in Japan.
 

Navigating website ADA compliance: ‘If you have videos that are not captioned, you’re a sitting duck’ — from abajournal.com by Matt Reynolds

Excerpts:

“If you have videos that are not captioned, you’re a sitting duck,” Goren said. “If you’re not encoding your pictures so that the blind person using a screen reader can understand what the picture is describing, that is a problem.”

Drop-down boxes on websites are “horrible for accessibility,” the attorney added, and it can be difficult for people with disabilities to navigate CAPTCHA (Completely Automated Public Turing test) technology to verify they are human.

“Trying to get people with voice dictation or even screen readers to figure out how to certify that they’re not a robot can be very complicated,” Goren said.

Also see:

Relevant Laws

Information re: Lawsuits

 

GPT-3: We’re at the very beginning of a new app ecosystem — from venturebeat.com by Dattaraj Rao

From DSC: NLP=Natural Language Processing (i.e., think voice-driven interfaces/interactivity).

Excerpt:

Despite the hype, questions persist as to whether GPT-3 will be the bedrock upon which an NLP application ecosystem will rest or if newer, stronger NLP models will knock it off its throne. As enterprises begin to imagine and engineer NLP applications, here’s what they should know about GPT-3 and its potential ecosystem.

 

The Top 5 Technologies for Innovation Leaders in Electronics and IT
Digital biomarkers, edge computing, and AI-enabled sensors are among the top technologies transforming the electronics landscape, according to Lux Research

BOSTON, MA, MARCH 4, 2021 – Digital transformation is one of the hottest topics in every industry, and as consumers eagerly adopt increasing amounts of digital tech, electronics and IT players have a unique opportunity to impact more industries than ever before. To help guide innovation in this booming space, Lux Research released its annual report, “Foresight 2021: Top Emerging Technologies to Watch.”

Lux’s annual report analyzes the digital transformation space, reviewing what topics emerged and which technologies gained traction during 2020. Its expert analysis of the hottest innovation topics and best tech startups found that the top five technologies electronics and IT innovation leaders should look to in the next decade are:

  1. AI-Enabled Sensors – Merging hardware and software to collect and validate critical data will be a major part of use cases from consumer wearables to medical devices to industrial IoT.
  2. Digital Biomarkers – Using data analytics to detect disease through changes in streams of data is a potent path for electronics companies to grab a piece of the healthcare pie.
  3. Natural Language Processing – Natural language processing (NLP) allows electronics and IT players to extend into new services and industry segments, either by using it to leverage their own data or by providing it as a service.
  4. Edge Computing – Limitations in bandwidth and latency are pushing critical computation away from the cloud and out to the edge, with rapidly improving hardware and software enablers.
  5. Synthetic Data – AI needs vast amounts of training data, and when real data is scarce, synthetic data can be a solution. It also boosts data diversity and privacy.

From DSC:
Some things to keep on your radar…

 

Look at the choice and control possibilities mentioned in the following excerpt from Immersive Reader in Canvas: Improve Reading Comprehension for All Students

When building courses and creating course content in Canvas, Immersive Reader lets users:

  • Change font size, text spacing, and background color
  • Split up words into syllables
  • Highlight verbs, nouns, adjectives, and sub-clauses
  • Choose between two fonts optimised to help with reading
  • Read text aloud
  • Change the speed of reading
  • Highlight sets of one, three, or five lines for greater focus
  • Select a word to see a related picture and hear the word read aloud as many times as necessary

Also see:

All about the Immersive Reader — from education.microsoft.com

The Microsoft Immersive Reader is a free tool, built into Word, OneNote, Outlook, Office Lens, Microsoft Teams, Forms, Flipgrid, Minecraft Education Edition and the Edge browser, that implements proven techniques to improve reading and writing for people regardless of their age or ability.

 

When the Animated Bunny in the TV Show Listens for Kids’ Answers — and Answers Back — from edsurge.com by Rebecca Koenig

Excerpt:

Yet when this rabbit asks the audience, say, how to make a substance in a bottle less goopy, she’s actually listening for their answers. Or rather, an artificially intelligent tool is listening. And based on what it hears from a viewer, it tailors how the rabbit replies.

“Elinor can understand the child’s response and then make a contingent response to that,” says Mark Warschauer, professor of education at the University of California at Irvine and director of its Digital Learning Lab.

AI is coming to early childhood education. Researchers like Warschauer are studying whether and how conversational agent technology—the kind that powers smart speakers such as Alexa and Siri—can enhance the learning benefits young kids receive from hearing stories read aloud and from watching videos.

From DSC:
Looking at the above excerpt…what does this mean for elearning developers, learning engineers, learning experience designers, instructional designers, trainers, and more? It seems that, for such folks, learning how to use several new tools is showing up on the horizon.

 

From DSC:
I was reviewing an edition of Dr. Barbara Honeycutt’s Lecture Breakers Weekly, where she wrote:

After an experiential activity, discussion, reading, or lecture, give students time to write the one idea they took away from the experience. What is their one takeaway? What’s the main idea they learned? What do they remember?

This can be written as a reflective blog post or journal entry, or students might post it on a discussion board so they can share their ideas with their colleagues. Or, they can create an audio clip (podcast), video, or drawing to explain their One Takeaway.

From DSC:
This made me think of tools like VoiceThread — where you can leave a voice/audio message, an audio/video-based message, a text-based entry/response, and/or attach other kinds of graphics and files.

That is, a multimedia-based exit ticket. It seems to me that this could work in online as well as blended learning environments.


Addendum on 2/7/21:

How to Edit Live Photos to Make Videos, GIFs & More! — from jonathanwylie.com


 

From DSC:
For me the Socratic method is still a question mark, in terms of effectiveness. (I suppose it depends on who is wielding the tool and how it’s being utilized/implemented.)

In a typical implementation, you have one student — often standing up and/or in the spotlight — who is being drilled on something. That student could be calm and collected, and their cognitive processing could actually get a boost from the adrenaline.

But there are other students who dread being called upon in such a public — sometimes competitive — setting. Their cognitive processing could shut down or become greatly diminished.

Also, the professor is working with one student at a time — hopefully the other students are trying to address each subsequent question, but some students may tune out once they know it’s not their turn in the spotlight.

So I was wondering…could the Socratic method be used with each student at the same time? Could a polling-like tool be used in real-time to guide the discussion?

For example, a professor could start out with a pre-created poll and ask the question of all students. Then they could glance through the responses and even scan for some keywords (using their voice to drive the system and/or using a Ctrl+F / Command+F type of thing).

Then in real-time / on-the-fly, could the professor use their voice to create another poll/question — again for each student to answer — based on one of the responses? Again, each student must answer the follow-up question(s).
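To make the “scan for some keywords” step concrete, here’s a rough sketch — purely hypothetical, with made-up stop words and made-up student responses, not tied to Mentimeter or any real polling product — of how free-text poll responses could be tallied so the professor can spot a theme for the next question:

```python
from collections import Counter
import re

# Illustrative stop-word list; a real tool would use a fuller one.
STOP_WORDS = {"the", "a", "an", "is", "it", "of", "to", "and", "that", "in"}

def top_themes(responses, n=3):
    """Tally notable words across all responses to surface candidate themes."""
    words = []
    for r in responses:
        words += [w for w in re.findall(r"[a-z']+", r.lower())
                  if w not in STOP_WORDS and len(w) > 3]
    return Counter(words).most_common(n)

# Made-up responses from a hypothetical law-school Socratic session.
responses = [
    "Precedent matters because courts follow earlier rulings",
    "Stare decisis keeps rulings predictable",
    "Earlier rulings bind lower courts",
]
print(top_themes(responses))
```

A glance at the output tells the professor that “rulings” and “courts” dominate the room’s thinking, which could seed the follow-up poll.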

Are there any vendors out there working on something like this? Or have you tested the effectiveness of something like this?

Vendors: Can you help us create a voice-driven interface to offer the Socratic method to everyone to see if and how it would work? (Like a Mentimeter type of product on steroids…er, rather, using an AI-driven backend.)

Teachers, trainers, pastors, presenters could also benefit from something like this — as it could engage numerous people at once.

#Participation #Engagement #Assessment #Reasoning #CriticalThinking #CommunicationSkills #ThinkingOnOnesFeet #OnlineLearning #Face-to-Face #BlendedLearning #HybridLearning

Could such a method be used in language-related classes as well? In online-based tutoring?

 

Could AI-based techs be used to develop a “table of contents” for the key points within lectures, lessons, training sessions, sermons, & podcasts? [Christian]

From DSC:
As we move into 2021, the blistering pace of emerging technologies will likely continue. Technologies such as:

  • Artificial Intelligence (AI) — including technologies related to voice recognition
  • Blockchain
  • Augmented Reality (AR)/Mixed Reality (MR)/Virtual Reality (VR) and/or other forms of Extended Reality (XR)
  • Robotics
  • Machine-to-Machine Communications (M2M) / The Internet of Things (IoT)
  • Drones
  • …and other things will likely make their way into how we do many things (for better or for worse).

Along the positive lines of this topic, I’ve been reflecting upon how we might be able to use AI in our learning experiences.

For example, when teaching in face-to-face-based classrooms — and when a lecture recording app like Panopto is being used — could teachers/professors/trainers audibly “insert” main points along the way? Similar to what we do with Siri, Alexa, and other personal assistants (“Hey Siri, _____” or “Alexa, _____”).

Like an audible version of HTML -- using the spoken word to insert the main points of a presentation or lecture

(Image purchased from iStockphoto)


Pretend a lecture, lesson, or a training session is moving right along. Then the professor, teacher, or trainer says:

  • “Hey Smart Classroom, Begin Main Point.”
  • Then speaks one of the main points.
  • Then says, “Hey Smart Classroom, End Main Point.”

Like a verbal version of an HTML tag.

After the recording is done, the AI could locate and call out those “main points” — and create a table of contents for that lecture, lesson, training session, or presentation.

(Alternatively, one could insert a chime/bell/some other sound that the AI scans through later to build the table of contents.)

In the digital realm — say when recording something via Zoom, Cisco Webex, Teams, or another application — the same thing could apply. 

Wouldn’t this be great for quickly scanning podcasts for the main points? Or for quickly scanning presentations and webinars for the main points?
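To show how the AI side of this might work after the recording is done, here’s a minimal sketch. Everything in it is an assumption for illustration: the marker phrases, the `build_toc` helper, and the timestamped transcript are all made up, and no real lecture-capture or speech-to-text API is referenced — it simply assumes a speech-to-text pass has already produced (seconds, text) pairs:

```python
import re

# The spoken "tags" — like a verbal version of an HTML tag.
BEGIN = re.compile(r"hey smart classroom,? begin main point", re.I)
END = re.compile(r"hey smart classroom,? end main point", re.I)

def build_toc(transcript):
    """Collect the text spoken between Begin/End markers, with start times."""
    toc, current, start = [], None, None
    for t, text in transcript:
        if BEGIN.search(text):
            current, start = [], t      # open a new main point
        elif END.search(text):
            if current is not None:
                toc.append((start, " ".join(current)))
            current = None              # close it
        elif current is not None:
            current.append(text)        # inside a main point
    return toc

# Made-up transcript of a hypothetical lecture.
transcript = [
    (12.0, "Hey Smart Classroom, begin main point."),
    (14.5, "Spaced practice beats cramming for long-term retention."),
    (19.0, "Hey Smart Classroom, end main point."),
    (300.0, "Hey Smart Classroom, begin main point."),
    (302.0, "Retrieval practice strengthens memory more than rereading."),
    (306.0, "Hey Smart Classroom, end main point."),
]
for start, point in build_toc(transcript):
    print(f"{int(start // 60):02d}:{int(start % 60):02d}  {point}")
```

The same post-processing would work whether the markers were spoken phrases or the chime/bell alternative, as long as the pass can detect them and note their timestamps.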

Anyway, interesting times lie ahead!

 

 

Teaching with Amazon Alexa — from Sylvia Martinez

Excerpt:

Alexa is a voice-activated, cloud-based virtual assistant, similar to Siri on Apple devices, or Google Assistant. Alexa is an umbrella name for the cloud-based functionality that responds to verbal commands. Alexa uses artificial intelligence to answer questions or control smart devices, and has a range of “skills” — small programs that you can add to increase Alexa’s capabilities.

Many teachers are experimenting with using smart devices like Alexa in the classroom. Like most other Amazon features and products, Alexa is primarily designed for home use, anticipating that users will be household members. So in thinking about Alexa in a classroom, keeping this in mind will help determine the best educational uses.

Alexa is most often accessed in three ways…

 

DC: You want to talk about learning ecosystems?!!? Check out the scopes included in this landscape from HolonIQ!

You want to talk about learning ecosystems?!!? Check this landscape out from HolonIQ!

Also see:

Education in 2030 -- a $10T market -- from HolonIQ.com

From DSC:
If this isn’t mind-blowing, I don’t know what is! Some serious morphing lies ahead of us!

 

Artificial Intelligence for Learning: How to use AI to Support Employee Development [Donald Clark]

So what is the book about? — from donaldclarkplanb.blogspot.com by Donald Clark; which discusses his book entitled, Artificial Intelligence for Learning: How to use AI to Support Employee Development

Excerpt:

AI changes everything. It changes how we work, shop, travel, entertain ourselves, socialize, deal with finance and healthcare. When online, AI mediates almost everything – Google, Google Scholar, YouTube, Facebook, Twitter, Instagram, TikTok, Amazon, Netflix. It would be bizarre to imagine that AI will have no role to play in learning – it already has.

Both informally and formally, AI is now embedded in many of the tools real learners use for online learning – we search for knowledge using AI (Google, Google Scholar), we search for practical knowledge using AI (YouTube), Duolingo for languages, and CPD is becoming common on social media, almost all mediated by AI. It is everywhere, just largely invisible. This book is partly about the role of AI in informal learning but it is largely about its existing and potential role in formal learning – in schools, Universities and the workplace. AI changes the world, so it changes why we learn, what we learn and how we learn.

Also see:

  • Abandon lectures: increase attendance, attitudes and attainment — from donaldclarkplanb.blogspot.com by Donald Clark
    Excerpt:
    The groups were taught a module in a physics course, in three one hour sessions in one week. In short; attendance increased, measured attitudes were better (students enjoyed the experience (90%) and thought that the whole course would be better if taught this way (77%)). More importantly students in the experimental group outperformed the control group, doing more than twice as well in assessment than the control group.
 
© 2021 | Daniel Christian