A wave of billion-dollar computer vision startups is coming — from forbes.com by Rob Toews

Excerpt:

Today, computer vision is finding applications across every sector of the economy. From agriculture to retail, from insurance to construction, entrepreneurs are applying computer vision to a wide range of industry-specific use cases with compelling economic upside.

Expect to see many computer vision startups among the next generation of “unicorns.” A crop of high-growth computer vision companies is nearing an inflection point, poised to break out to commercial scale and mainstream prominence. It is an exciting and pivotal time in the technology’s journey from research to market.

 

 

Whistleblowers: Software Bug Keeping Hundreds Of Inmates In Arizona Prisons Beyond Release Dates — from kjzz.org

Excerpt:

According to Arizona Department of Corrections whistleblowers, hundreds of incarcerated people who should be eligible for release are being held in prison because the inmate management software cannot interpret current sentencing laws.

KJZZ is not naming the whistleblowers because they fear retaliation. The employees said they have been raising the issue internally for more than a year, but prison administrators have not acted to fix the software bug. The sources said Chief Information Officer Holly Greene and Deputy Director Joe Profiri have been aware of the problem since 2019.

The Arizona Department of Corrections confirmed there is a problem with the software.

 

The Triple Threat Facing Generalist Law Firms, Part 2: Legal Tech — from jdsupra.com by Katherine Hollar Barnard

Excerpts:

In Legaltech, a Walmart associate general counsel estimated the product provided a 60 to 80 percent time savings. That’s great news for Walmart – less so for lawyers who bill by the hour.

Sterling Miller, the former general counsel of Marketo, Inc., Sabre Corporation and Travelocity.com, made a compelling case for why law firm clients are turning to technology: In-house lawyers are incentivized to find the most efficient, lowest-cost way to do things. Many law firm lawyers are incentivized to do just the opposite.

To be sure, software is unlikely to replace lawyers altogether; legal minds are essential for strategy, and robots have yet to be admitted to the bar. However, technology’s impact on an industry dominated by the billable hour will be profound.

Also see:

Judge John Tran spearheaded adoption of tech to facilitate remote hearings and helped train lawyers — from abajournal.com by Stephanie Francis Ward; with thanks to Gabe Teninbaum for this resource

Excerpt:

If you need a judge who can be counted on to research all courtroom technology offerings that can help proceedings continue during the COVID-19 pandemic, look no further than John Tran of the Fairfax County Circuit Court in Virginia.

After the Virginia Supreme Court issued an order June 22 stating that remote proceedings should be used to conduct as much business as possible, Tran offered webinars to help lawyers with the Fairfax Bar Association get up to speed with Webex, the platform the court uses for remote proceedings.

“When Webex has a news release, he’s all over that. He’s already had a private demo. He is one of a small number of exceptionally tech-savvy judges,” says Sharon Nelson, a Fairfax attorney and co-founder and president of the digital forensics firm Sensei Enterprises.

 

AI in the Legal Industry: 3 Impacts and 3 Obstacles — from exigent-group.com

Excerpt:

In this article, we’ll talk about three current impacts AI has had on the legal industry as well as three obstacles it needs to overcome before we see widespread adoption.

Consider JPMorgan’s Contract Intelligence (COIN) software. Rather than rely on lawyers to pore over their commercial loan contracts, the banking giant now uses COIN to review these documents for risk, accuracy and eligibility. Not only does this save JPMorgan 360,000 hours per year in contract review, it also results in fewer errors. One can also look to major law firms like DLA Piper, which now regularly rely on AI for M&A due diligence. At Exigent, we’ve had first-hand experience using AI to support document analysis for our clients as well.

2. Legal departments need to build cross-functional expertise
…bringing greater diversity into the legal department is essential if the efficiencies promised by AI in the legal industry are to be realized.

Fortunately, some legal departments have begun to bring data scientists on board in addition to lawyers. And the industry is beginning to open up to hybrid roles, like legal technologists, legal knowledge engineers, legal analysts and other cross-functional experts. It’s clear that a greater diversity of skills is in the legal department’s future; it’s just a matter of how smoothly the transition goes.

 

When the Animated Bunny in the TV Show Listens for Kids’ Answers — and Answers Back — from edsurge.com by Rebecca Koenig

Excerpt:

Yet when this rabbit asks the audience, say, how to make a substance in a bottle less goopy, she’s actually listening for their answers. Or rather, an artificially intelligent tool is listening. And based on what it hears from a viewer, it tailors how the rabbit replies.

“Elinor can understand the child’s response and then make a contingent response to that,” says Mark Warschauer, professor of education at the University of California at Irvine and director of its Digital Learning Lab.

AI is coming to early childhood education. Researchers like Warschauer are studying whether and how conversational agent technology—the kind that powers smart speakers such as Alexa and Siri—can enhance the learning benefits young kids receive from hearing stories read aloud and from watching videos.

From DSC:
Looking at the above excerpt…what does this mean for elearning developers, learning engineers, learning experience designers, instructional designers, trainers, and more? It seems that, for such folks, learning to use several new tools is on the horizon.
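To make that a bit more concrete, here is a minimal sketch (in Python) of the kind of contingent-response logic such a tool might use. The keyword-matching approach, reply texts, and function names are my own illustrative assumptions; the actual system described above undoubtedly relies on far more sophisticated speech recognition and natural language understanding.

```python
# A minimal, illustrative sketch of a contingent-response loop:
# take a child's spoken answer (already transcribed), then pick a reply
# based on simple keyword matching. Hypothetical example only.

REPLIES = {
    "water": "Adding water is a great idea! Let's try thinning it out.",
    "shake": "Shaking it might help mix things up. Let's see!",
}
DEFAULT_REPLY = "Interesting thought! Let's experiment and find out."

def choose_reply(child_answer: str) -> str:
    """Return a contingent reply based on keywords in the child's answer."""
    answer = child_answer.lower()
    for keyword, reply in REPLIES.items():
        if keyword in answer:
            return reply
    return DEFAULT_REPLY

if __name__ == "__main__":
    # In a real application the answer would come from a speech-to-text
    # service; here we simulate it with a typed string.
    print(choose_reply("Maybe add some water to it?"))
```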

 

AI and the Future of Lawyering & Law Firms – Northwestern Law and Technology Initiative — from youtube.com by Northwestern Law & Technology Initiative as moderated by Dan Linna; with thanks to Gabe Teninbaum for this resource.

Artificial Intelligence is transforming the future of work. AI has the potential to automate and augment many tasks. This transformation is leading to the creation of new roles and jobs to be done. How will AI impact the work of lawyers, legal professionals, and law firms? Our panelists will discuss the future of work, the work of lawyers and structure of law firms, and current uses of AI for legal services today.

Speakers:

  • Hyejin Youn, Assistant Professor of Management & Organizations, Kellogg School of Management, Northwestern University
  • Mari Sako, Professor of Management Studies, Saïd Business School, University of Oxford
  • Stephen Poor, Partner and chair emeritus, Seyfarth

Moderator:

  • Daniel W. Linna Jr., Senior Lecturer & Director of Law and Technology Initiatives, Northwestern Pritzker School of Law & McCormick School of Engineering
 

8 Higher Education IT Trends to Watch in 2021 [Stone]

8 Higher Education IT Trends to Watch in 2021 — from edtechmagazine.com by Adam Stone
Keep your eye on these trends as higher education prepares for a post-pandemic future.

Excerpt:

1. Get Used to More Advanced Learning Management Systems
At Virginia Tech, the Canvas learning management system (LMS) was critical for coordinating synchronous and asynchronous learning. Such systems will only become more sophisticated moving forward, says Randy Marchany, the university’s IT security officer. “With COVID, instructors have become more video savvy,” he says. “We’re all getting smarter about how we use these tools.”

2. A Rise in Sophisticated Videoconferencing Platforms
Even after the pandemic, educators might continue lecturing over Zoom and other videoconferencing platforms. However, they’ll be doing it in more sophisticated ways. “People will be making these experiences more collaborative, more authentic — with much richer interactions and conversations,” Grajek says. “We are all becoming more experienced consumers, and we will see a lot of innovation in this area.”

From DSC:
Yet another step closer to the Learning from the Living Class Room vision…

 
 

Jeff Bezos Wants to Go to the Moon. Then, Public Education. — from edsurge.com by Dominik Dresel

Excerpts:

Jeff Bezos’ $2 billion investment to establish a Montessori-inspired network of preschools may be shrugged off by many as the world’s richest man dabbling in another playground. Instead, we should see it for what it is: the early days of Amazon’s foray into public education.

It would be easy to think that Amazon’s rapid expansion into industry after industry is just the natural, opportunistic path of a cash-flush company seeking to invest in new, lucrative markets. But Jeff Bezos, himself a graduate of a Montessori preschool, doesn’t think in short-term opportunities.

Yet, the world has had its first taste of the disentanglement of schooling from school buildings. Even though in 20 years we will still have school buildings—much like we still have bookstores—there is little doubt that the future will see more, not less, online instruction and content delivery.

 

 

OpenAI’s text-to-image engine, DALL-E, is a powerful visual idea generator — from venturebeat.com by Gary Grossman; with thanks to Tim Holt for sharing this resource


Excerpt:

OpenAI chose the name DALL-E as a hat tip to the artist Salvador Dalí and Pixar’s WALL-E. It produces pastiche images that reflect both Dalí’s surrealism, which merges dream and fantasy with the everyday rational world, and inspiration from NASA paintings from the 1950s and 1960s and from those created for Disneyland Tomorrowland by Disney Imagineers.

From DSC:
I’m not a big fan of having AI create the music that I listen to, or the artwork that I take in. But I do think there’s potential here in giving creative artists some new fodder for thought! Perhaps marketers and/or journalists could also get their creative juices going from this type of service/offering.

Speaking of art, here are a couple of other postings that caught my eye recently:

This Elaborately Armored Samurai Was Folded From A Single Sheet of Paper

 

It’s Time to Heal: 16 Trends Driving the Future of Bio and Healthcare — from a16z.com by Vineeta Agarwala, Jorge Conde, Vijay Pande, and Julie Yoo
It’s Time to Heal is a special package about engineering the future of bio and healthcare. See more at:

Also see:

5 Predictions for Digital Healthcare in 2021 — from wearable-technologies.com by Cathy Russey

Excerpts:

  1. Remote patient care and telemedicine
  2. Virtual Reality
  3. Wearables
  4. Artificial Intelligence
  5. Advancements in Electronic Health Records (EHR)
 

This avocado armchair could be the future of AI — from technologyreview.com by Will Douglas
OpenAI has extended GPT-3 with two new models that combine NLP with image recognition to give its AI a better understanding of everyday concepts.


“We live in a visual world,” says Ilya Sutskever, chief scientist at OpenAI. “In the long run, you’re going to have models which understand both text and images. AI will be able to understand language better because it can see what words & sentences mean.”

 

From DSC:
For me, the Socratic method is still a question mark in terms of effectiveness. (I suppose it depends on who is wielding the tool and how it’s being utilized/implemented.)

In the traditional format, you have one student — often standing up and/or in the spotlight — who is being drilled on something. That student could be calm and collected, and their cognitive processing could actually get a boost from the adrenaline.

But there are other students who dread being called upon in such a public — sometimes competitive — setting. Their cognitive processing could shut down or become greatly diminished.

Also, the professor is working with one student at a time — hopefully the other students are trying to address each subsequent question, but some students may tune out once they know it’s not their turn in the spotlight.

So I was wondering…could the Socratic method be used with each student at the same time? Could a polling-like tool be used in real-time to guide the discussion?

For example, a professor could start out with a pre-created poll and ask the question of all students. Then they could glance through the responses and even scan for some keywords (using their voice to drive the system and/or using a Ctrl+F / Command+F type of thing).

Then in real-time / on-the-fly, could the professor use their voice to create another poll/question — again for each student to answer — based on one of the responses? Again, each student must answer the follow-up question(s).
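To illustrate the flow I have in mind, here is a minimal sketch of the keyword-scanning and follow-up-drafting steps. The polling backend is imagined, and the function names, stopword list, and sample responses are assumptions of mine, not features of any existing product.

```python
# Illustrative sketch only: scan live poll responses for common keywords
# and draft a follow-up question the professor could approve or edit.
# A real tool (Mentimeter-like, with an AI backend) would replace this.
from collections import Counter
import re

def top_keywords(responses: list[str], stopwords: set[str], n: int = 5) -> list[str]:
    """Count the most common non-trivial words across all student responses."""
    words = []
    for response in responses:
        words.extend(w for w in re.findall(r"[a-z']+", response.lower())
                     if w not in stopwords)
    return [word for word, _ in Counter(words).most_common(n)]

def draft_follow_up(keyword: str) -> str:
    """Turn a keyword into a follow-up prompt for the whole class."""
    return f"Several of you mentioned '{keyword}'. Can you defend or refute that idea?"

if __name__ == "__main__":
    # Simulated responses from a class-wide poll.
    sample = [
        "I think the precedent applies because of strict liability",
        "Strict liability doesn't fit; intent matters here",
        "The precedent is distinguishable on its facts",
    ]
    stop = {"i", "the", "of", "a", "because", "is", "on", "its",
            "here", "doesn't", "think"}
    for kw in top_keywords(sample, stop, n=3):
        print(draft_follow_up(kw))
```

A Mentimeter-like front end (or the AI-driven backend imagined above) would then push the drafted follow-up poll out to every student at once.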

Are there any vendors out there working on something like this? Or have you tested the effectiveness of something like this?

Vendors: Can you help us create a voice-driven interface to offer the Socratic method to everyone to see if and how it would work? (Like a Mentimeter type of product on steroids…er, rather, using an AI-driven backend.)

Teachers, trainers, pastors, presenters could also benefit from something like this — as it could engage numerous people at once.

#Participation #Engagement #Assessment #Reasoning #CriticalThinking #CommunicationSkills #ThinkingOnOnesFeet #OnlineLearning #Face-to-Face #BlendedLearning #HybridLearning

Could such a method be used in language-related classes as well? In online-based tutoring?

 

Timnit Gebru’s Exit From Google Exposes a Crisis in AI — from wired.com by Alex Hanna and Meredith Whittaker
The situation has made clear that the field needs to change. Here’s where to start, according to a current and a former Googler.

Excerpt:

It was against this backdrop that Google fired Timnit Gebru, our dear friend and colleague, and a leader in the field of artificial intelligence. She is also one of the few Black women in AI research and an unflinching advocate for bringing more BIPOC, women, and non-Western people into the field. By any measure, she excelled at the job Google hired her to perform, including demonstrating racial and gender disparities in facial-analysis technologies and developing reporting guidelines for data sets and AI models. Ironically, this and her vocal advocacy for those underrepresented in AI research are also the reasons, she says, the company fired her. According to Gebru, after demanding that she and her colleagues withdraw a research paper critical of (profitable) large-scale AI systems, Google Research told her team that it had accepted her resignation, despite the fact that she hadn’t resigned. (Google declined to comment for this story.)

 

Could AI-based techs be used to develop a “table of contents” for the key points within lectures, lessons, training sessions, sermons, & podcasts? [Christian]

From DSC:
As we move into 2021, the blistering pace of emerging technologies will likely continue. Technologies such as:

  • Artificial Intelligence (AI) — including technologies related to voice recognition
  • Blockchain
  • Augmented Reality (AR)/Mixed Reality (MR)/Virtual Reality (VR) and/or other forms of Extended Reality (XR)
  • Robotics
  • Machine-to-Machine Communications (M2M) / The Internet of Things (IoT)
  • Drones
  • …and other things will likely make their way into how we do many things (for better or for worse).

Along the positive lines of this topic, I’ve been reflecting upon how we might be able to use AI in our learning experiences.

For example, when teaching in face-to-face-based classrooms — and when a lecture recording app like Panopto is being used — could teachers/professors/trainers audibly “insert” main points along the way? Similar to what we do with Siri, Alexa, and other personal assistants (“Hey Siri, _____” or “Alexa, _____”).

Like an audible version of HTML -- using the spoken word to insert the main points of a presentation or lecture

(Image purchased from iStockphoto)


Pretend a lecture, lesson, or a training session is moving right along. Then the professor, teacher, or trainer says:

  • “Hey Smart Classroom, Begin Main Point.”
  • Then speaks one of the main points.
  • Then says, “Hey Smart Classroom, End Main Point.”

Like a verbal version of an HTML tag.

After the recording is done, the AI could locate and call out those “main points” — and create a table of contents for that lecture, lesson, training session, or presentation.

(Alternatively, one could insert a chime/bell/some other sound that the AI scans through later to build the table of contents.)
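As a thought experiment, here is a minimal sketch of what that post-processing step might look like once a time-stamped transcript exists. The “Begin/End Main Point” marker phrases come from the example above, but the transcript format and function names are assumptions for illustration only; a real lecture-capture platform would expose its own transcript format.

```python
# Illustrative sketch: build a "table of contents" from a time-stamped
# transcript by looking for the spoken Begin/End Main Point markers.
# The transcript format (list of (seconds, text) tuples) is an assumption.

BEGIN = "begin main point"
END = "end main point"

def build_toc(transcript: list[tuple[float, str]]) -> list[tuple[float, str]]:
    """Return (timestamp, main point text) pairs found between the markers."""
    toc = []
    current_start = None
    current_text: list[str] = []
    for timestamp, text in transcript:
        lowered = text.lower()
        if BEGIN in lowered:
            current_start, current_text = timestamp, []
        elif END in lowered and current_start is not None:
            toc.append((current_start, " ".join(current_text).strip()))
            current_start = None
        elif current_start is not None:
            current_text.append(text)
    return toc

if __name__ == "__main__":
    sample = [
        (120.0, "Hey Smart Classroom, begin main point."),
        (123.5, "Consideration is required for a contract to be enforceable."),
        (131.0, "Hey Smart Classroom, end main point."),
    ]
    for seconds, point in build_toc(sample):
        print(f"{int(seconds // 60):02d}:{int(seconds % 60):02d}  {point}")
```

Searching the transcript text (rather than the raw audio) keeps the logic simple; the alternative chime/bell approach mentioned above would work the same way, just matching an audio signature instead of a spoken phrase.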

In the digital realm — say when recording something via Zoom, Cisco Webex, Teams, or another application — the same thing could apply. 

Wouldn’t this be great for quickly scanning podcasts for the main points? Or for quickly scanning presentations and webinars for the main points?

Anyway, interesting times lie ahead!

 

 
© 2025 | Daniel Christian