Microsoft President Warns of Orwell’s 1984 ‘Coming to Pass’ in 2024 — from interestingengineering.com by Chris Young
Microsoft’s Brad Smith warned we may be caught up in a losing race with artificial intelligence.

Excerpt (emphasis DSC):

The surveillance-state dystopia portrayed in George Orwell’s 1984 could “come to pass in 2024” if governments don’t do enough to protect the public against artificial intelligence (AI), Microsoft president Brad Smith warned in an interview for the BBC’s investigative documentary series Panorama.

During the interview, Smith warned of China’s increasing AI prowess and the fact that we may be caught up in a losing race with the technology itself.

“If we don’t enact the laws that will protect the public in the future, we are going to find the technology racing ahead, and it’s going to be very difficult to catch up,” Smith stated.

From DSC:
This is a major heads-up to all those in the legal/legislative realm — especially the American Bar Association (ABA) and the bar associations across the country! The ABA needs to realize it has to up its game and get in step with the incredibly fast pace of the twenty-first century. If that doesn’t occur, we and future generations will pay the price. Two thoughts come to mind regarding the ABA and the law schools out there:

Step 1: Allow 100% online-based JD programs all the time, from here on out.

Step 2: Encourage massive new program development within all law schools to help future lawyers, judges, legislative reps, & others build up more expertise in emerging technologies & their ramifications.

Google’s plan to make search more sentient — from vox.com by Rebecca Heilweil
Google announces new search features every year, but this time feels different.

Excerpt:

At the keynote speech of its I/O developer conference on Tuesday, Google revealed a suite of ways the company is moving forward with artificial intelligence. These advancements show Google increasingly trying to build AI-powered tools that seem more sentient and that are better at perceiving how humans actually communicate and think. They seem powerful, too.

Two of the biggest AI announcements from Google involve natural language processing and search.

Google also revealed a number of AI-powered improvements to its Maps platform that are designed to yield more helpful results and directions.

Google’s plans to bring AI to education make its dominance in classrooms more alarming — from fastcompany.com by Ben Williamson
The tech giant has expressed an ambition to transform education with artificial intelligence, raising fresh ethical questions.

Struggling to Get a Job? Artificial Intelligence Could Be the Reason Why — from newsweek.com by Lydia Veljanovski; with thanks to Sam DeBrule for the resource

Excerpt:

Except that isn’t always the case. In many instances, instead of your application being tossed aside by a HR professional, it is actually artificial intelligence that is the barrier to entry. While this isn’t a problem in itself—AI can reduce workflow by rapidly filtering applicants—the issue is that within these systems lies the possibility of bias.

It is illegal in the U.S. for employers to discriminate against a job applicant because of their race, color, sex, religion, disability, national origin, age (40 or older) or genetic information. However, these AI hiring tools are often inadvertently doing just that, and there are no federal laws in the U.S. to stop this from happening.

These Indian edtech companies are shaping the future of AI & robotics — from analyticsinsight.net by Apoorva Komarraju | May 25, 2021

Excerpt:

As edtech companies have taken a lead by digitizing education for the modern era, they have taken the stance to set up Atal Tinkering Labs in schools along with other services necessary for the budding ‘kidpreneurs’. With the availability of these services, students can experience 21st-century technologies like IoT, 3D printing, AI, and Robotics.

Researchers develop machine-learning model that accurately predicts diabetes, study says — from ctvnews.ca by Christy Somos

Excerpt:

TORONTO — Canadian researchers have developed a machine-learning model that accurately predicts diabetes in a population using routinely collected health data, a new study says.

The study, published in the JAMA Network Open journal, tested new machine-learning technology on routinely collected health data that examined the entire population of Ontario. The study was run by the ICES not-for-profit data research institute.

Using linked administrative health data from Ontario from 2006 to 2016, researchers created a validated algorithm by training the model on information taken from nearly 1.7 million patients.

Project Guideline: Enabling Those with Low Vision to Run Independently — from ai.googleblog.com by Xuan Yang; with thanks to Sam DeBrule for the resource

Excerpt:

For the 285 million people around the world living with blindness or low vision, exercising independently can be challenging. Earlier this year, we announced Project Guideline, an early-stage research project, developed in partnership with Guiding Eyes for the Blind, that uses machine learning to guide runners through a variety of environments that have been marked with a painted line. Using only a phone running Guideline technology and a pair of headphones, Guiding Eyes for the Blind CEO Thomas Panek was able to run independently for the first time in decades and complete an unassisted 5K in New York City’s Central Park.

Deepfake Maps Could Really Mess With Your Sense of the World — from wired.com by Will Knight
Researchers applied AI techniques to make portions of Seattle look more like Beijing. Such imagery could mislead governments or spread misinformation online.

In a paper published last month, researchers altered satellite images to show buildings in Seattle where there are none.

 

Thursday, 5/20/21, is Global Accessibility Awareness Day!!!

Global Accessibility Awareness Day is this Thursday, May 20, 2021
Help us celebrate the tenth Global Accessibility Awareness Day (GAAD)! The purpose of GAAD is to get everyone talking, thinking and learning about digital access and inclusion, and the more than One Billion people with disabilities/impairments.

Global Accessibility Awareness Day is Thursday, May 20, 2021

Also see:

Global Accessibility Awareness Day is Thursday, May 20, 2021

 

Shhhh, they’re listening: Inside the coming voice-profiling revolution — from fastcompany.com by Joseph Turow
Marketers are on the verge of using AI-powered technology to make decisions about who you are and what you want based purely on the sound of your voice.

Excerpt:

When conducting research for my forthcoming book, The Voice Catchers: How Marketers Listen In to Exploit Your Feelings, Your Privacy, and Your Wallet, I went through over 1,000 trade magazine and news articles on the companies connected to various forms of voice profiling. I examined hundreds of pages of U.S. and EU laws applying to biometric surveillance. I analyzed dozens of patents. And because so much about this industry is evolving, I spoke to 43 people who are working to shape it.

It soon became clear to me that we’re in the early stages of a voice-profiling revolution that companies see as integral to the future of marketing.

From DSC:
Hhhhmmm….

 

Improving Digital Inclusion & Accessibility for Those With Learning Disabilities — from inclusionhub.com by Meredith Kreisa
Learning disabilities must be taken into account during the digital design process to ensure digital inclusion and accessibility for the community. This comprehensive guide outlines common learning disabilities, associated difficulties, accessibility barriers and best practices, and more.

“Learning shouldn’t be something only those without disabilities get to do,” explains Seren Davies, a full stack software engineer and accessibility advocate who is dyslexic. “It should be for everyone. By thinking about digital accessibility, we are making sure that everyone who wants to learn can.”

“Learning disability” is a broad term used to describe several specific diagnoses. Dyslexia, dyscalculia, dysgraphia, nonverbal learning disorder, oral/written language disorder, and specific reading comprehension deficit are among the most prevalent.

An image of a barrier being torn down -- revealing a human mind behind it. This signifies the need to tear down any existing barriers that might hinder someone's learning experience.

 

Chrome now instantly captions audio and video on the web — from theverge.com by Ian Carlos Campbell
The accessibility feature was previously exclusive to some Pixel and Samsung Galaxy phones

Excerpt:

Google is expanding its real-time caption feature, Live Captions, from Pixel phones to anyone using a Chrome browser, as first spotted by XDA Developers. Live Captions uses machine learning to spontaneously create captions for videos or audio where none existed before, making the web that much more accessible for anyone who’s deaf or hard of hearing.

Chrome’s Live Captions worked on YouTube videos, Twitch streams, podcast players, and even music streaming services like SoundCloud in early tests run by a few of us here at The Verge. Google also says Live Captions will work with audio and video files stored on your hard drive if they’re opened in Chrome. However, Live Captions in Chrome only work in English, which is also the case on mobile.

 

Chrome now instantly captions audio and video on the web -- this is a screen capture showing the words being said in a digital audio-based file

 

Clicking this image will take you to the 2021 Tech Trends Report -- from the Future Today Institute

14th Annual Edition | 2021 Tech Trends Report — from the Future Today Institute

Our 2021 Tech Trends Report is designed to help you confront deep uncertainty, adapt and thrive. For this year’s edition, the magnitude of new signals required us to create 12 separate volumes, and each report focuses on a cluster of related trends. In total, we’ve analyzed nearly 500 technology and science trends across multiple industry sectors. In each volume, we discuss the disruptive forces, opportunities and strategies that will drive your organization in the near future.

Now, more than ever, your organization should examine the potential near and long-term impact of tech trends. You must factor the trends in this report into your strategic thinking for the coming year, and adjust your planning, operations and business models accordingly. But we hope you will make time for creative exploration. From chaos, a new world will come.

Some example items noted in this report:

  • Natural language processing is an area experiencing high interest, investment, and growth.
  • No-code or low-code systems are unlocking new use cases for businesses.
  • Amazon Web Services, Azure, and Google Cloud’s low-code and no-code offerings will trickle down to everyday people, allowing them to create their own artificial intelligence applications and deploy them as easily as they could a website.
  • The race is on to capture AI cloudshare—and to become the most trusted provider of AI on remote servers.
  • COVID-19 accelerated the use of AI in drug discovery last year. The first trial of an AI-discovered drug is underway in Japan.
 

Navigating website ADA compliance: ‘If you have videos that are not captioned, you’re a sitting duck’ — from abajournal.com by Matt Reynolds

Excerpts:

“If you have videos that are not captioned, you’re a sitting duck,” Goren said. “If you’re not encoding your pictures so that the blind person using a screen reader can understand what the picture is describing, that is a problem.”

Drop-down boxes on websites are “horrible for accessibility,” the attorney added, and it can be difficult for people with disabilities to navigate CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) technology to verify they are human.

“Trying to get people with voice dictation or even screen readers to figure out how to certify that they’re not a robot can be very complicated,” Goren said.

Also see:

Relevant Laws

Information re: Lawsuits

 

GPT-3: We’re at the very beginning of a new app ecosystem — from venturebeat.com by Dattaraj Rao

From DSC: NLP=Natural Language Processing (i.e., think voice-driven interfaces/interactivity).

Excerpt:

Despite the hype, questions persist as to whether GPT-3 will be the bedrock upon which an NLP application ecosystem will rest or if newer, stronger NLP models will knock it off its throne. As enterprises begin to imagine and engineer NLP applications, here’s what they should know about GPT-3 and its potential ecosystem.

 

The Top 5 Technologies for Innovation Leaders in Electronics and IT
Digital biomarkers, edge computing, and AI-enabled sensors are among the top technologies transforming the electronics landscape, according to Lux Research

BOSTON, MA, MARCH 4, 2021 – Digital transformation is one of the hottest topics in every industry, and as consumers eagerly adopt increasing amounts of digital tech, electronics and IT players have a unique opportunity to impact more industries than ever before. To help guide innovation in this booming space, Lux Research released its annual report, “Foresight 2021: Top Emerging Technologies to Watch.”

Lux’s annual report analyzes the digital transformation space, reviewing what topics emerged and which technologies gained traction during 2020. Its expert analysis of the hottest innovation topics and best tech startups found that the top five technologies electronics and IT innovation leaders should look to in the next decade are:

  1. AI-Enabled Sensors – Merging hardware and software to collect and validate critical data will be a major part of use cases from consumer wearables to medical devices to industrial IoT.
  2. Digital Biomarkers – Using data analytics to detect disease through changes in streams of data is a potent path for electronics companies to grab a piece of the healthcare pie.
  3. Natural Language Processing – Natural language processing (NLP) allows electronics and IT players to extend into new services and industry segments, either by using it to leverage their own data or by providing it as a service.
  4. Edge Computing – Limitations in bandwidth and latency are pushing critical computation away from the cloud and out to the edge, with rapidly improving hardware and software enablers.
  5. Synthetic Data – AI needs vast amounts of training data, and when real data is scarce, synthetic data can be a solution. It also boosts data diversity and privacy.

From DSC:
Some things to keep on your radar…

 

Look at the choice and control possibilities mentioned in the following excerpt from Immersive Reader in Canvas: Improve Reading Comprehension for All Students

When building courses and creating course content in Canvas, Immersive Reader lets users:

  • Change font size, text spacing, and background color
  • Split up words into syllables
  • Highlight verbs, nouns, adjectives, and sub-clauses
  • Choose between two fonts optimised to help with reading
  • Read text aloud
  • Change the speed of reading
  • Highlight sets of one, three, or five lines for greater focus
  • Select a word to see a related picture and hear the word read aloud as many times as necessary

Also see:

All about the Immersive Reader — from education.microsoft.com

The Microsoft Immersive Reader is a free tool, built into Word, OneNote, Outlook, Office Lens, Microsoft Teams, Forms, Flipgrid, Minecraft Education Edition and the Edge browser, that implements proven techniques to improve reading and writing for people regardless of their age or ability.

 

When the Animated Bunny in the TV Show Listens for Kids’ Answers — and Answers Back — from edsurge.com by Rebecca Koenig

Excerpt:

Yet when this rabbit asks the audience, say, how to make a substance in a bottle less goopy, she’s actually listening for their answers. Or rather, an artificially intelligent tool is listening. And based on what it hears from a viewer, it tailors how the rabbit replies.

“Elinor can understand the child’s response and then make a contingent response to that,” says Mark Warschauer, professor of education at the University of California at Irvine and director of its Digital Learning Lab.

AI is coming to early childhood education. Researchers like Warschauer are studying whether and how conversational agent technology—the kind that powers smart speakers such as Alexa and Siri—can enhance the learning benefits young kids receive from hearing stories read aloud and from watching videos.

From DSC:
Looking at the above excerpt…what does this mean for elearning developers, learning engineers, learning experience designers, instructional designers, trainers, and more? It seems that, for such folks, learning how to use several new tools is showing up on the horizon.

 

From DSC:
I was reviewing an edition of Dr. Barbara Honeycutt’s Lecture Breakers Weekly, where she wrote:

After an experiential activity, discussion, reading, or lecture, give students time to write the one idea they took away from the experience. What is their one takeaway? What’s the main idea they learned? What do they remember?

This can be written as a reflective blog post or journal entry, or students might post it on a discussion board so they can share their ideas with their colleagues. Or, they can create an audio clip (podcast), video, or drawing to explain their One Takeaway.

From DSC:
This made me think of tools like VoiceThread — where you can leave a voice/audio message, an audio/video-based message, a text-based entry/response, and/or attach other kinds of graphics and files.

That is, a multimedia-based exit ticket. It seems to me that this could work in online- as well as blended-based learning environments.


Addendum on 2/7/21:

How to Edit Live Photos to Make Videos, GIFs & More! — from jonathanwylie.com


 

From DSC:
For me the Socratic method is still a question mark, in terms of effectiveness. (I suppose it depends on who is wielding the tool and how it’s being utilized/implemented.)

On one hand, you have one student — often standing up and/or in the spotlight — who is being drilled on something. That student could be calm and collected, and their cognitive processing could actually get a boost from the adrenaline.

On the other hand, some students dread being called upon in such a public — sometimes competitive — setting. Their cognitive processing could shut down or become greatly diminished.

Also, the professor is working with one student at a time — hopefully the other students are trying to address each subsequent question, but some students may tune out once they know it’s not their turn in the spotlight.

So I was wondering…could the Socratic method be used with each student at the same time? Could a polling-like tool be used in real-time to guide the discussion?

For example, a professor could start out with a pre-created poll and ask the question of all students. Then they could glance through the responses and even scan for some keywords (using their voice to drive the system and/or using a Ctrl+F / Command+F type of thing).

Then in real-time / on-the-fly, could the professor use their voice to create another poll/question — again for each student to answer — based on one of the responses? Again, each student must answer the follow up question(s).
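To make the keyword-scanning step concrete, here is a minimal sketch in Python, assuming the polling tool can export each student’s free-text answer. The function name, data shapes, and sample answers below are my own illustrations, not any vendor’s actual API:

```python
from collections import Counter

def scan_responses(responses, keywords):
    """Surface poll responses that mention any of the instructor's keywords.

    `responses` maps student name -> free-text answer (one answer per
    student, as a Mentimeter-style poll might collect). Returns the
    matching responses plus a tally of how often each keyword appeared,
    so the instructor can choose a follow-up question on the fly.
    """
    hits = {}
    counts = Counter()
    for student, answer in responses.items():
        lowered = answer.lower()
        matched = [kw for kw in keywords if kw.lower() in lowered]
        if matched:
            hits[student] = answer
            counts.update(matched)
    return hits, counts
```

A real system would sit behind a voice interface and push the follow-up poll out automatically, but the core “Ctrl+F across everyone’s answers” step is this simple.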

Are there any vendors out there working on something like this? Or have you tested the effectiveness of something like this?

Vendors: Can you help us create a voice-driven interface to offer the Socratic method to everyone to see if and how it would work? (Like a Mentimeter type of product on steroids…er, rather, using an AI-driven backend.)

Teachers, trainers, pastors, presenters could also benefit from something like this — as it could engage numerous people at once.

#Participation #Engagement #Assessment #Reasoning #CriticalThinking #CommunicationSkills #ThinkingOnOnesFeet #OnlineLearning #Face-to-Face #BlendedLearning #HybridLearning

Could such a method be used in language-related classes as well? In online-based tutoring?

 

Could AI-based techs be used to develop a “table of contents” for the key points within lectures, lessons, training sessions, sermons, & podcasts? [Christian]

From DSC:
As we move into 2021, the blistering pace of emerging technologies will likely continue. Technologies such as:

  • Artificial Intelligence (AI) — including technologies related to voice recognition
  • Blockchain
  • Augmented Reality (AR)/Mixed Reality (MR)/Virtual Reality (VR) and/or other forms of Extended Reality (XR)
  • Robotics
  • Machine-to-Machine Communications (M2M) / The Internet of Things (IoT)
  • Drones
  • …and more

These technologies will likely make their way into how we do many things (for better or for worse).

Along the positive lines of this topic, I’ve been reflecting upon how we might be able to use AI in our learning experiences.

For example, when teaching in face-to-face-based classrooms — and when a lecture recording app like Panopto is being used — could teachers/professors/trainers audibly “insert” main points along the way? Similar to what we do with Siri, Alexa, and other personal assistants (“Hey Siri, _____” or “Alexa, _____”).

Like an audible version of HTML -- using the spoken word to insert the main points of a presentation or lecture

(Image purchased from iStockphoto)


Pretend a lecture, lesson, or a training session is moving right along. Then the professor, teacher, or trainer says:

  • “Hey Smart Classroom, Begin Main Point.”
  • Then speaks one of the main points.
  • Then says, “Hey Smart Classroom, End Main Point.”

Like a verbal version of an HTML tag.

After the recording is done, the AI could locate and call out those “main points” — and create a table of contents for that lecture, lesson, training session, or presentation.

(Alternatively, one could insert a chime/bell/some other sound that the AI scans through later to build the table of contents.)

In the digital realm — say when recording something via Zoom, Cisco Webex, Teams, or another application — the same thing could apply. 

Wouldn’t this be great for quickly scanning podcasts for the main points? Or for quickly scanning presentations and webinars for the main points?

Anyway, interesting times lie ahead!

 

Teaching with Amazon Alexa — from Sylvia Martinez

Excerpt:

Alexa is a voice-activated, cloud-based virtual assistant, similar to Siri on Apple devices, or Google Assistant. Alexa is an umbrella name for the cloud-based functionality that responds to verbal commands. Alexa uses artificial intelligence to answer questions or control smart devices, and has a range of “skills” — small programs that you can add to increase Alexa’s capabilities.

Many teachers are experimenting with using smart devices like Alexa in the classroom. Like most other Amazon features and products, Alexa is primarily designed for home use, anticipating that users will be household members. So in thinking about Alexa in a classroom, keeping this in mind will help determine the best educational uses.

Alexa is most often accessed in three ways…

 
© 2024 | Daniel Christian