From DSC:
I was watching a sermon the other day, and I’m always amazed when the pastor doesn’t need to read their notes (or hardly ever refers to them). And they can still do this in a much longer sermon too. Not me, man.

It got me wondering about the idea of having a teleprompter on our future Augmented Reality (AR) glasses and/or on our Virtual Reality (VR) headsets. Or perhaps such functionality will be provided on our mobile devices as well (i.e., our smartphones, tablets, laptops, and the like) via cloud-based applications.

One could see one’s presentation, sermon, main points for the meeting, what charges are being brought against the defendant, etc., and the system would know to scroll down as you spoke the words (via Natural Language Processing (NLP)). If you went off script, the system would stop scrolling and you might need to scroll down manually or just begin where you left off.
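From DSC: under the hood, that scroll-tracking behavior amounts to aligning a live speech-recognition transcript with the script. Here’s a minimal sketch of the alignment logic only (it assumes some external speech-to-text engine supplies the recognized words one at a time; the class and method names are my own hypothetical ones, not any real product’s API):

```python
from difflib import SequenceMatcher


def normalize(word):
    """Lowercase a word and strip punctuation so 'Hello,' matches 'hello'."""
    return "".join(ch for ch in word.lower() if ch.isalnum())


class TeleprompterTracker:
    """Advance a cursor through a script as recognized words arrive;
    hold the scroll position when the speaker goes off script."""

    def __init__(self, script, match_threshold=0.8):
        self.words = [normalize(w) for w in script.split()]
        self.position = 0  # index of the next expected word
        self.match_threshold = match_threshold

    def _matches(self, heard, expected):
        # Fuzzy match to tolerate minor speech-recognition errors.
        return SequenceMatcher(None, heard, expected).ratio() >= self.match_threshold

    def hear(self, spoken_word, lookahead=3):
        """Feed one recognized word. Returns True if the cursor advanced."""
        heard = normalize(spoken_word)
        # Allow small skips ahead (filler words, dropped words).
        for offset in range(lookahead):
            idx = self.position + offset
            if idx < len(self.words) and self._matches(heard, self.words[idx]):
                self.position = idx + 1
                return True
        return False  # off script: don't advance the scroll
```

The `position` value would drive the actual scrolling in the AR/VR display; when `hear()` keeps returning False, the speaker has gone off script and the prompter simply waits.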

For that matter, I suppose a faculty member could turn on and off a feed for an AI-based stream of content on where a topic is in the textbook. Or a CEO or University President could get prompted to refer to a particular section of the Strategic Plan. Hmmm…I don’t know…it might be too much cognitive load/overload…I’d have to try it out.

And/or perhaps this is a feature in our future videoconferencing applications.

But I just wanted to throw these ideas out there in case someone wanted to run with one or more of them.

Along these lines, see:


Is a teleprompter a feature in our future Augmented Reality (AR) glasses?


 

Get Ready to Relearn How to Use the Internet — from bloomberg.com by Tyler Cowen; with thanks to Sam DeBrule for this resource
Everyone knows that an AI revolution is coming, but no one seems to realize how profoundly it will change their day-to-day life.

Excerpts:

This year has brought a lot of innovation in artificial intelligence, which I have tried to keep up with, but too many people still do not appreciate the import of what is to come. I commonly hear comments such as, “Those are cool images, graphic designers will work with that,” or, “GPT-3 is cool, it will be easier to cheat on term papers.” And then they end by saying: “But it won’t change my life.”

This view is likely to be proven wrong — and soon, as AI is about to revolutionize our entire information architecture. You will have to learn how to use the internet all over again.

Change is coming. Consider Twitter, which I use each morning to gather information about the world. Less than two years from now, maybe I will speak into my computer, outline my topics of interest, and somebody’s version of AI will spit back to me a kind of Twitter remix, in a readable format and tailored to my needs.

The AI also will be not only responsive but active. Maybe it will tell me, “Today you really do need to read about Russia and changes in the UK government.” Or I might say, “More serendipity today, please,” and that wish would be granted.

Of course all this is just one man’s opinion. If you disagree, in a few years you will be able to ask the new AI engines what they think.

Some other recent items from Sam DeBrule include:

Natural Language Assessment: A New Framework to Promote Education — from ai.googleblog.com by Kedem Snir and Gal Elidan

Excerpt:

In this blog, we introduce an important natural language understanding (NLU) capability called Natural Language Assessment (NLA), and discuss how it can be helpful in the context of education. While typical NLU tasks focus on the user’s intent, NLA allows for the assessment of an answer from multiple perspectives. In situations where a user wants to know how good their answer is, NLA can offer an analysis of how close the answer is to what is expected. In situations where there may not be a “correct” answer, NLA can offer subtle insights that include topicality, relevance, verbosity, and beyond. We formulate the scope of NLA, present a practical model for carrying out topicality NLA, and showcase how NLA has been used to help job seekers practice answering interview questions with Google’s new interview prep tool, Interview Warmup.
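From DSC: as a toy illustration of what a topicality check involves, here is a bag-of-words cosine similarity between a student’s answer and a reference answer. To be clear, this is my own simplified stand-in, not Google’s actual NLA model, which uses learned language models rather than word counts:

```python
import math
from collections import Counter


def topicality_score(answer, reference):
    """Toy topicality measure: cosine similarity between bag-of-words
    vectors of an answer and a reference answer, in the range 0.0-1.0."""
    def bow(text):
        return Counter(w.lower().strip(".,!?") for w in text.split())

    a, b = bow(answer), bow(reference)
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```

Even this crude version captures the idea the post describes: an answer can be scored on how close it is to what’s expected without there being one single “correct” string.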

How AI could help translate extreme weather alerts — from axios.com by Ayurella Horn-Muller

Excerpt:

A startup that provides AI-powered translation is working with the National Weather Service to improve language translations of extreme weather alerts across the U.S.

Using GPT-3 to augment human intelligence — from escapingflatland.substack.com by Henrik Karlsson

Excerpt:

When I’ve been doing this with GPT-3, a 175 billion parameter language model, it has been uncanny how much it reminds me of blogging. When I’m writing this, from March through August 2022, large language models are not yet as good at responding to my prompts as the readers of my blog. But their capacity is improving fast and the prices are dropping.

Soon everyone can have an alien intelligence in their inbox.

 

What might the ramifications be for text-to-everything? [Christian]

From DSC:

  • We can now type in text to get graphics and artwork.
  • We can now type in text to get videos.
  • There are several tools to give us transcripts of what was said during a presentation.
  • We can search videos for spoken words and/or for words listed within slides within a presentation.

Allie Miller’s posting on LinkedIn (see below) pointed these things out as well — along with several other things.



This raises some ideas/questions for me:

  • What might the ramifications be in our learning ecosystems for these types of functionalities? What affordances are forthcoming? For example, a teacher, professor, or trainer could quickly produce several types of media from the same presentation.
  • What’s said in a videoconference or a webinar can already be captured, translated, and transcribed.
  • Or what’s said in a virtual courtroom, or in a telehealth-based appointment. Or perhaps, what we currently think of as a smart/connected TV will give us these functionalities as well.
  • How might this type of thing impact storytelling?
  • Will this help someone who prefers to soak in information via the spoken word, or via a podcast, or via a video?
  • What does this mean for Augmented Reality (AR), Mixed Reality (MR), and/or Virtual Reality (VR) types of devices?
  • Will this kind of thing be standard in the next version of the Internet (Web3)?
  • Will this help people with special needs — and way beyond accessibility-related needs?
  • Will data be next (instead of typing in text)?

Hmmm….interesting times ahead.

 

Why text-to-speech tools might have a place in your classroom with Dr. Kirsten Kohlmeyer – Easy EdTech Podcast 183 — from classtechtips.com by Monica Burns

Excerpt:

In this episode, Assistive Technology Director, Dr. Kirsten Kohlmeyer, joins to discuss the power of accessibility and text-to-speech tools in classroom environments. You’ll also hear plenty of digital resources to check out for text-to-speech options, audiobooks, and more!

Assistive tools can provide:

  • Text-to-speech
  • Definitions/vocabularies
  • Ability to adjust the Lexile level of a reading
  • Capability to declutter a website
  • More chances to read to learn something new
  • and more

Speaking of tools, also see:

 

Radar Trends to Watch: October 2022 — from oreilly.com by Mike Loukides
Developments in Machine Learning, Metaverse, Web3, and More

Excerpt:

September was a busy month. In addition to continued fascination over art generation with DALL-E and friends, and the questions they pose for intellectual property, we see interesting things happening with machine learning for low-powered processors: using attention mechanisms, along with a new microcontroller that can run for a week on a single AA battery. In other parts of the technical universe, “platform engineering” has been proposed as an alternative to both DevOps and SRE. We’ve seen demonstrations of SQL injection-like attacks against GPT-3; and companies including Starbucks, Chipotle, and Universal Studios are offering NFT-based loyalty programs. (In addition to a Chipotle steak-grilling demo in the Metaverse.)

Also relevant/see:

General AI News — from essentials.news

 

Keynote Wrap-Up: NVIDIA CEO Unveils Next-Gen RTX GPUs, AI Workflows in the Cloud — from blogs.nvidia.com by Brian Caulfield
Kicking off GTC, Jensen Huang unveils advances in natural language understanding, the metaverse, gaming and AI technologies impacting industries from transportation and healthcare to finance and entertainment.

Excerpt (emphasis DSC):

New cloud services to support AI workflows and the launch of a new generation of GeForce RTX GPUs featured [on 9/20/22] in NVIDIA CEO Jensen Huang’s GTC keynote, which was packed with new systems, silicon, and software.

“Computing is advancing at incredible speeds, the engine propelling this rocket is accelerated computing, and its fuel is AI,” Huang said during a virtual presentation as he kicked off NVIDIA GTC.

Again and again, Huang connected new technologies to new products to new opportunities – from harnessing AI to delight gamers with never-before-seen graphics to building virtual proving grounds where the world’s biggest companies can refine their products.

Driving the deluge of new ideas, new products and new applications: a singular vision of accelerated computing unlocking advances in AI, which, in turn will touch industries around the world.

Also relevant/see:

 

Radar Trends to Watch: September 2022 — from oreilly.com by Mike Loukides
Developments in AI, Privacy, Biology, and More

Excerpt:

It’s hardly news to talk about the AI developments of the last month. DALL-E is increasingly popular, and being used in production. Google has built a robot that incorporates a large language model so that it can respond to verbal requests. And we’ve seen a plausible argument that natural language models can be made to reflect human values, without raising the question of consciousness or sentience.

For the first time in a long time we’re talking about the Internet of Things. We’ve got a lot of robots, and Chicago is attempting to make a “smart city” that doesn’t facilitate surveillance. We’re also seeing a lot in biology. Can we make a real neural network from cultured neurons? The big question for biologists is how long it will take for any of their research to make it out of the lab.

 

I think we’ve run out of time to effectively practice law in the United States of America [Christian]


From DSC:
Given:

  • the accelerating pace of change that’s been occurring over the last decade or more
  • the current setup of the legal field within the U.S. — and who can practice law
  • the number of emerging technologies now on the landscapes out there

…I think we’ve run out of time to effectively practice law in the U.S. — at least in terms of dealing with emerging technologies. Consider the following items/reflections.


Inside one of the nation’s few hybrid J.D. programs — from highereddive.com by Natalie Schwartz
Shannon Gardner, Syracuse law school’s associate dean for online education, talks about the program’s inaugural graduates and how it has evolved.

Excerpt (emphasis DSC):

In May, Syracuse University’s law school graduated its first class of students earning a Juris Doctor degree through a hybrid program, called JDinteractive, or JDi. The 45 class members were part of almost 200 Syracuse students who received a J.D. this year, according to a university announcement.

The private nonprofit, located in upstate New York, won approval from the American Bar Association in 2018 to offer the three-year hybrid program.

The ABA strictly limits distance education, requiring a waiver for colleges that wish to offer more than one-third of their credits online. To date, the ABA has only approved distance education J.D. programs at about a dozen schools, including Syracuse.

Many folks realize this is the future of legal education — not that it will replace traditional programs. It is one route to pursue a legal education that is here to stay. I did not see it as pressure, and I think, by all accounts, we have definitely proven that it is and can be a success.

Shannon Gardner, associate dean for online education  


From DSC:
It was March 2018. I just started working as a Director of Instructional Services at a law school. I had been involved with online-based learning since 2001.

I was absolutely shocked at how far behind law schools were in terms of offering 100% online-based programs. I was dismayed to find out that 20+ years after such undergraduate programs were made available — and after their effectiveness had been proven time and again — there were no 100% online-based Juris Doctor (JD) programs in the U.S. (The JD degree is what you have to have to practice law in the U.S. Some folks go on to take further courses after obtaining that degree — that’s when Master of Laws (LLM) programs kick in.)

Why was this, I asked? Much of the answer lies with the extremely tight control exercised by the American Bar Association (ABA). They essentially lay down the rules for how much of a law student’s training can be online (normally not more than a third of one’s credit hours, by the way).

Did I say it’s 2022? And let me say the name of that organization again — the American Bar Association (ABA).

Graphic by Daniel S. Christian

Not to scare you (too much), but this is the organization that is supposed to be in charge of developing lawyers who are already having to deal with issues and legal concerns arising from the following technologies:

  • Artificial Intelligence (AI) — Machine Learning (ML), Natural Language Processing (NLP), algorithms, bots, and the like
  • The Internet of Things (IoT) and/or the Internet of Everything (IoE)
  • Extended Reality (XR) — Augmented Reality (AR), Mixed Reality (MR), Virtual Reality (VR)
  • Holographic communications
  • Big data
  • High-end robotics
  • The Metaverse
  • Cryptocurrencies
  • NFTs
  • Web3
  • Blockchain
  • …and the like

I don’t think there’s enough time for the ABA — and then law schools — to reinvent themselves. We no longer have that luxury. (And most existing/practicing lawyers don’t have the time to get up the steep learning curves involved here — in addition to their current responsibilities.)

The other option is to use teams of specialists. That’s our best hope. If the use of what’s called nonlawyers* doesn’t increase greatly, the U.S. has little hope of dealing with legal matters that are already arising from such emerging technologies. 

So let’s hope the legal field catches up with the pace of change that’s been accelerating for years now. If not, we’re in trouble.

* Nonlawyers — not a very complimentary term…
I hope they come up with something else.
Some use the term Paralegals.
I’m sure there are other terms as well. 


From DSC:
There is hope though, as Gabe Teninbaum just posted the resource below (out on Twitter). I just think the lack of responsiveness from the ABA has caught up with us. We’ve run out of time for doing “business as usual.”

Law students want more distance education classes, according to ABA findings — from abajournal.com by Stephanie Francis Ward

Excerpt:

A recent survey of 1,394 students in their third year of law school found that 68.65% wanted the ability to earn more distance education credits than what their schools offered.


 


Ways that artificial intelligence is revolutionizing education — from thetechedvocate.org by Matthew Lynch

Excerpt:

I was speaking with an aging schoolteacher who believes that AI is destroying education. They challenged me to come up with 26 ways that artificial intelligence (AI) is improving education, and I came up with 26. They’re right here.


AI Startup Speeds Healthcare Innovations To Save Lives — by Geri Stengel

Excerpt:

This project was a light-bulb moment for her. The financial industry had Bloomberg to analyze content and data to help investors uncover opportunities and minimize risk, and pharmaceutical, biotech, and medical device companies needed something similar.



 

The Future of Education | By Futurist Gerd Leonhard | A Video for EduCanada — from futuristgerd.com

Per Gerd:

Recently, I was invited by the Embassy of Canada in Switzerland to create this special presentation and promotional video discussing the Future of Education and to explore how Canada might be leading the way. Here are some of the key points I spoke about in the video. Watch the whole thing here: the Future of Education.

 

…because by 2030, I believe, the traditional way of learning — just in case — you know storing, downloading information will be replaced by learning just in time, on-demand, learning to learn, unlearning, relearning, and the importance of being the right person. Character skills, personality skills, traits, they may very well rival the value of having the right degree.

If you learn like a robot…you’ll never have a job to begin with.

Gerd Leonhard


Also relevant/see:

The Next 10 Years: Rethinking Work and Revolutionising Education (Gerd Leonhard’s keynote in Riga) — from futuristgerd.com


 

Will Learning Move into the Metaverse? — from learningsolutionsmag.com by Pamela Hogle

Excerpt:

In its 2022 Tech Trends report, the Future Today Institute predicts that, “The future of work will become more digitally immersive as companies deploy virtual meeting platforms, digital experiences, and mixed reality worlds.”

Learning leaders are likely to spearhead the integration of their organizations’ workers into a metaverse, whether by providing training in using the tools that make a metaverse possible or through developing training and performance support resources that learners will use in an immersive environment.

Advantages of moving some workplace collaboration and learning into a metaverse include ease of scaling and globalization. The Tech Trends report mentions personalization at scale and easy multilingual translation as advantages of “synthetic media”—algorithmically generated digital content, which could proliferate in metaverses.

Also see:

Future Today Institute — Tech Trends 2022


Also from learningsolutionsmag.com, see:

Manage Diverse Learning Ecosystems with Federated Governance

Excerpt:

So, over time, the L&D departments eventually go back to calling their own shots.

What does this mean for the learning ecosystem? If each L&D team chooses its own learning platforms, maintenance and support will be a nightmare. Each L&D department may be happy with the autonomy but learners have no patience for navigating multiple LMSs or going to several systems to get their training records.

Creating common infrastructure among dispersed groups
Here you have the problem: How can groups that have no accountability to each other share a common infrastructure?

 

From DSC:
Wow…I hadn’t heard of voice banking before. This was an interesting item from multiple perspectives.

Providing a creative way for people with Motor Neurone Disease to bank their voices, I Will Always Be Me is a dynamic and heartfelt publication — from itsnicethat.com by Olivia Hingley
Speaking to the project’s illustrator and creative director, we discover how the book aims to be a tool for family and loved ones to discuss and come to terms with the diagnosis.

Excerpt:

Whilst voice banking technology is widely available to those suffering from MND, Tal says that the primary problem is “that not enough people are banking their voice because the process is long, boring and solitary. People with MND don’t want to sit in a lonely room to record random phrases and sentences; they already have a lot to deal with.” Therefore, many people only realise or interact with the importance of voice banking when their voice has already deteriorated. “So,” Tal expands, “the brief we got was: turn voice banking into something that people will want to do as soon as they’re diagnosed.”

 

AI research is a dumpster fire and Google’s holding the matches — from thenextweb.com by Tristan Greene
Scientific endeavor is no match for corporate greed

Excerpts:

The world of AI research is in shambles. From the academics prioritizing easy-to-monetize schemes over breaking novel ground, to the Silicon Valley elite using the threat of job loss to encourage corporate-friendly hypotheses, the system is a broken mess.

And Google deserves a lion’s share of the blame.

Google, more than any other company, bears responsibility for the modern AI paradigm. That means we need to give big G full marks for bringing natural language processing and image recognition to the masses.

It also means we can credit Google with creating the researcher-eat-researcher environment that has some college students and their big-tech-partnered professors treating research papers as little more than bait for venture capitalists and corporate headhunters.

But the system’s set up to encourage the monetization of algorithms first, and to further the field second. In order for this to change, big tech and academia both need to commit to wholesale reform in how research is presented and reviewed.

Also relevant/see:

Every month, Essentials publishes an Industry Trend Report on AI in general and the following related topics:

  • AI Research
  • AI Applied Use Cases
  • AI Ethics
  • AI Robotics
  • AI Marketing
  • AI Cybersecurity
  • AI Healthcare

It’s never too early to get your AI ethics right — from protocol.com by Veronica Irwin
The Ethical AI Governance Group wants to give startups a framework for avoiding scandals and blunders while deploying new technology.

Excerpt:

To solve this problem, a group of consultants, venture capitalists and executives in AI created the Ethical AI Governance Group last September. In March, it went public, and published a survey-style “continuum” for investors to use in advising the startups in their portfolio.

The continuum conveys clear guidance for startups at various growth stages, recommending that startups have people in charge of AI governance and data privacy strategy, for example. EAIGG leadership argues that using the continuum will protect VC portfolios from value-destroying scandals.

 

Radar trends to watch: May 2022 — from oreilly.com
Developments in Web3, Security, Biology, and More

Excerpt:

April was the month for large language models. There was one announcement after another; most new models were larger than the previous ones, and several claimed to be significantly more energy efficient.

 

Remote court transcription technology enables virtual court appearances — from abajournal.com by Nicole Black

Excerpts:

That’s why it’s imperative to make certain remote options are available for all aspects of legal work since doing so is the only way to guarantee the justice system doesn’t come to a grinding halt. One way to prevent that is to take advantage of the virtual deposition transcription tools I discussed in last month’s column. In that article, I provided an overview of virtual deposition transcription products and services that rely on videoconferencing tools and software platforms to facilitate remote depositions.

Another way business continuity has been maintained since March 2020 is via virtual court proceedings. Remote court appearances are now more common since courts periodically shifted to partial or fully remote operations throughout the pandemic. Many judges have become accustomed to and appreciate the convenience of virtual court proceedings, and many expect them to continue even after the pandemic ends.

Because all signs point to the continuation of virtual court proceedings, I promised in last month’s article that I would focus on remote court proceeding options in this column. These include software platforms and artificial intelligence language-processing tools that facilitate remote court proceedings.

Nicole’s article mentioned the following vendor/product:

Live Litigation -- Remote Solutions for Attending and Participating in Depositions, Trials, Hearings, Arbitrations, Mediations, Witness Prep, and more.

 
© 2022 | Daniel Christian