From DSC:
The recent drama over at OpenAI reminds me of how important a few individuals are in influencing the lives of millions of people.

The C-Suites (i.e., the Chief Executive Officers, Chief Financial Officers, Chief Operating Officers, and the like) of companies like OpenAI, Alphabet (Google), Meta (Facebook), Microsoft, Netflix, NVIDIA, Amazon, Apple, and a handful of others have enormous power. Why? Because of the reach and influence of the technologies that they create, market, and provide.

We need to be praying for the hearts of those in the C-Suites of these powerful vendors — as well as for their Boards.

LORD, grant them wisdom and help mold their hearts and perspectives so that they truly care about others. May their decisions not be based on making money alone…or doing something just because they can.

What happens in their hearts and minds DOES and WILL continue to impact the rest of us. And we’re talking about real ramifications here. This isn’t pie-in-the-sky thinking. This is for real, with real consequences. If you doubt that, go ask the families whose sons and daughters took their own lives because of what happened on social media platforms.

Disclosure: I use LinkedIn and Twitter quite a bit, and I’m not bashing these platforms per se. My point is that a variety of technologies have real impacts, so what goes on in the hearts and minds of the leaders of these tech companies matters.


Some relevant items:

Navigating Attention-Driving Algorithms, Capturing the Premium of Proximity for Virtual Teams, & New AI Devices — from implactions.com by Scott Belsky

Excerpts (emphasis DSC):

No doubt, technology influences us in many ways we don’t fully understand. But one area where valid concerns run rampant is the attention-seeking algorithms powering the news and media we consume on modern platforms that efficiently polarize people. Perhaps we’ll call it The Law of Anger Expansion: When people are angry in the age of algorithms, they become MORE angry and LESS discriminate about who and what they are angry at.

Algorithms that optimize for grabbing attention, thanks to AI, ultimately drive polarization.

The AI learns quickly that a rational or “both sides” view is less likely to sustain your attention (so you won’t get many of those, which drives the sensation that more of the world agrees with you). But the rage-inducing stuff keeps us swiping.

Our feeds are being sourced in ways that dramatically change the content we’re exposed to.

And then these algorithms expand on these ultimately destructive emotions – “If you’re afraid of this, maybe you should also be afraid of this” or “If you hate those people, maybe you should also hate these people.”

How do we know when we’ve been polarized? This is the most important question of the day.

Whatever is inflaming you is likely an algorithm-driven expansion of anger and an imbalance of context.
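The dynamic Belsky describes can be made concrete with a toy simulation: a feed ranker that optimizes purely for predicted engagement will, over repeated updates, learn to favor whatever content produces the strongest reactions. The post fields, weights, and "user behavior" below are illustrative assumptions only, not any platform's actual algorithm.

```python
# Toy simulation (illustrative only): a ranker rewarded solely on observed
# engagement drifts toward anger-inducing content, because outrage reliably
# produces stronger reactions than nuance in this assumed user model.
import random

random.seed(42)

def engagement(post):
    """Assumed user behavior: outrage drives reactions far more than nuance."""
    return 0.2 * post["nuance"] + 0.8 * post["outrage"] + random.uniform(0, 0.05)

def make_post():
    return {"nuance": random.random(), "outrage": random.random()}

# The ranker starts neutral and nudges its weights toward whichever
# features predicted the engagement it actually observed.
weights = {"nuance": 0.5, "outrage": 0.5}
lr = 0.05

for _ in range(2000):
    post = make_post()
    predicted = sum(weights[k] * post[k] for k in weights)
    error = engagement(post) - predicted
    for k in weights:
        weights[k] += lr * error * post[k]

# After training, "outrage" carries far more weight than "nuance" --
# an outrage-heavy feed is simply the optimum this objective converges to.
print(weights)
```

No one set out to write a "make people angry" algorithm here; the skew falls out of the objective itself, which is the heart of the concern quoted above.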


 

 

OpenAI announces leadership transition — from openai.com
Chief technology officer Mira Murati appointed interim CEO to lead OpenAI; Sam Altman departs the company. Search process underway to identify permanent successor.

Excerpt (emphasis DSC):

The board of directors of OpenAI, Inc., the 501(c)(3) that acts as the overall governing body for all OpenAI activities, today announced that Sam Altman will depart as CEO and leave the board of directors. Mira Murati, the company’s chief technology officer, will serve as interim CEO, effective immediately.

Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.


As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.

From DSC:
I’m not here to pass judgment, but all of us on planet Earth should be at least somewhat concerned about this disturbing news.

AI is one of the most powerful sets of emerging technologies on the planet right now. OpenAI is arguably the most powerful vendor/innovator/influencer/leader in that space. And Sam Altman was the face of OpenAI, and arguably of AI itself. So this is a big deal.

What concerns me is what is NOT being relayed in this posting:

  • What was being hidden from OpenAI’s Board?
  • What else doesn’t the public know? 
  • Why is Greg Brockman stepping down as Chairman of the Board?

To whom much is given, much is expected.


Also related/see:

OpenAI CEO Sam Altman ousted, shocking AI world — from washingtonpost.com by Gerrit De Vynck and Nitasha Tiku
The artificial intelligence company’s directors said he was not ‘consistently candid in his communications with the board’

Altman’s sudden departure sent shock waves through the technology industry and the halls of government, where he had become a familiar presence in debates over the regulation of AI. His rise and apparent fall from tech’s top rung is one of the fastest in Silicon Valley history. In less than a year, he went from being Bay Area famous as a failed start-up founder who reinvented himself as a popular investor in small companies to becoming one of the most influential business leaders in the world. Journalists, politicians, tech investors and Fortune 500 CEOs alike had been clamoring for his attention.

OpenAI’s Board Pushes Out Sam Altman, Its High-Profile C.E.O. — from nytimes.com by Cade Metz

Sam Altman, the high-profile chief executive of OpenAI, who became the face of the tech industry’s artificial intelligence boom, was pushed out of the company by its board of directors, OpenAI said in a blog post on Friday afternoon.


From DSC:
Updates — I just saw these items

Sam Altman fired as CEO of OpenAI — from theverge.com by Jay Peters
In a sudden move, Altman is leaving after the company’s board determined that he ‘was not consistently candid in his communications.’ President and co-founder Greg Brockman has also quit.



 

Why Kindness at Work Pays Off — from hbr.org by Andrew Swinand; via Roberto Ferraro

Summary:
Whether you’re just entering the workforce, starting a new job, or transitioning into people management, kindness can be a valuable attribute that speaks volumes about your character, commitment, and long-term value. Here are a few simple routines you can integrate into your everyday work life that will spread kindness and help create a culture of kindness at your organization.

  • Practice radical self-care. The best way to be a valuable, thoughtful team member is to be disciplined about your own wellness — your physical, emotional, and mental well-being.
  • Do your job. Start with the basics by showing up on time and doing your job to the best of your ability. This is where your self-care practice comes into play — you can’t do your best work without taking care of yourself first.
  • Reach out to others with intention. Make plans to meet virtually or, even better, in person with your colleagues. Ask about their pets, their recent move, or their family. Most importantly, practice active listening.
  • Recognize and acknowledge people. Authentic, thoughtful interactions show that you’re thinking about the other person and reflecting on their unique attributes and value, which can cement social connections.
  • Be conscientious with your feedback. Being kind means offering feedback for the betterment of the person receiving it and the overall success of your company.

“When anxiety is high and morale is low, kindness isn’t a luxury — it’s a necessity. With mass layoffs, economic uncertainty, and geopolitical tensions, kindness is needed now more than ever, especially at work.”

 

41 states sue Meta, claiming Instagram, Facebook are addictive, harm kids — from washingtonpost.com by Cristiano Lima and Naomi Nix
The action marks the most sprawling state challenge to date over social media’s impact on children’s mental health

Forty-one states and the District of Columbia are suing Meta, alleging that the tech giant harms children by building addictive features into Instagram and Facebook. Tuesday’s legal actions represent the most significant effort by state enforcers to rein in the impact of social media on children’s mental health.

 

Psalm 119:9-12

How can a young person stay on the path of purity?
    By living according to your word.
10 I seek you with all my heart;
    do not let me stray from your commands.
11 I have hidden your word in my heart
    that I might not sin against you.
12 Praise be to you, Lord;
    teach me your decrees.

Proverbs 19:20-21

20 Listen to advice and accept discipline,
and at the end you will be counted among the wise.
21 Many are the plans in a person’s heart,
but it is the Lord’s purpose that prevails.

Jeremiah 18:7-10

If at any time I announce that a nation or kingdom is to be uprooted, torn down and destroyed, and if that nation I warned repents of its evil, then I will relent and not inflict on it the disaster I had planned. And if at another time I announce that a nation or kingdom is to be built up and planted, 10 and if it does evil in my sight and does not obey me, then I will reconsider the good I had intended to do for it.

Psalm 119:4-6

You have laid down precepts
    that are to be fully obeyed.
Oh, that my ways were steadfast
    in obeying your decrees!
Then I would not be put to shame
    when I consider all your commands.

 

Comparing Online and AI-Assisted Learning: A Student’s View — from educationnext.org by Daphne Goldstein
An 8th grader reviews traditional Khan Academy and its AI-powered tutor, Khanmigo

Hi everyone, I’m Daphne, a 13-year-old going into 8th grade.

I’m writing to compare “regular” Khan Academy (no AI) to Khanmigo (powered by GPT-4), using three of my own made-up criteria.

They are: efficiency, effectiveness, and enjoyability. Efficiency is how fast I am able to cover a math topic and get basic understanding. Effectiveness is my quality of understanding—the difference between basic and advanced understanding. And the final one—most important to kids and maybe least important to adults who make kids learn math—is enjoyability.


7 Questions on Generative AI in Learning Design — from campustechnology.com by Rhea Kelly
Open LMS Adoption and Education Specialist Michael Vaughn on the challenges and possibilities of using artificial intelligence to move teaching and learning forward.

The potential for artificial intelligence tools to speed up course design could be an attractive prospect for overworked faculty and spread-thin instructional designers. Generative AI can shine, for example, in tasks such as reworking assessment question sets, writing course outlines and learning objectives, and generating subtitles for audio and video clips. The key, says Michael Vaughn, adoption and education specialist at learning platform Open LMS, is treating AI like an intern who can be guided and molded along the way, and whose work is then vetted by a human expert.

We spoke with Vaughn about how best to utilize generative AI in learning design, ethical issues to consider, and how to formulate an institution-wide policy that can guide AI use today and in the future.


First Impressions with GPT-4V(ision) — from blog.roboflow.com by James Gallagher; via Donald Clark on LinkedIn

On September 25th, 2023, OpenAI announced the rollout of two new features that extend how people can interact with its recent and most advanced model, GPT-4: the ability to ask questions about images and to use speech as an input to a query.

This functionality marks GPT-4’s move into being a multimodal model. This means that the model can accept multiple “modalities” of input – text and images – and return results based on those inputs. Bing Chat, developed by Microsoft in partnership with OpenAI, and Google’s Bard model both support images as input, too. Read our comparison post to see how Bard and Bing perform with image inputs.

In this guide, we are going to share our first impressions with the GPT-4V image input feature.


 

Don’t Be Fooled: How You Can Master Media Literacy in the Digital Age — from youtube.com by Professor Sue Ellen Christian

During this special keynote presentation, Western Michigan University (WMU) professor Sue Ellen Christian speaks about the importance of media literacy for all ages and how we can help educate our friends and families about media literacy principles. Hosted by the Grand Rapids Public Library and GRTV, a program of the Grand Rapids Community Media Center. Special thanks to the Grand Rapids Public Library Foundation for their support of this program.

Excerpts:

Media literacy is the ability to access, analyze, evaluate, and create media in a variety of forms. (Center for Media Literacy)

5 things to do when confronted with concerns about content.


Also relevant/see:

Kalamazoo Valley Museum’s newest exhibit teaches community about media literacy — from mlive.com by Gabi Broekema

 

2 Chronicles 15:2

He went out to meet Asa and said to him, “Listen to me, Asa and all Judah and Benjamin. The LORD is with you when you are with him. If you seek him, he will be found by you, but if you forsake him, he will forsake you.

1 Corinthians 15:3-8

3 For what I received I passed on to you as of first importance[a]: that Christ died for our sins according to the Scriptures, 4 that he was buried, that he was raised on the third day according to the Scriptures, 5 and that he appeared to Cephas, and then to the Twelve. 6 After that, he appeared to more than five hundred of the brothers and sisters at the same time, most of whom are still living, though some have fallen asleep. 7 Then he appeared to James, then to all the apostles, 8 and last of all he appeared to me also, as to one abnormally born.

John 6:29

Jesus answered, “The work of God is this: to believe in the one he has sent.”

Psalms 103:1-5

Praise the LORD, my soul; all my inmost being, praise his holy name. Praise the LORD, my soul, and forget not all his benefits— who forgives all your sins and heals all your diseases, who redeems your life from the pit and crowns you with love and compassion, who satisfies your desires with good things so that your youth is renewed like the eagle’s.

 


How to spot deepfakes created by AI image generators | Can you trust your eyes? | The deepfake election — from axios.com by various; via Tom Barrett

As the 2024 campaign season begins, AI image generators have advanced from novelties to powerful tools able to generate photorealistic images, while comprehensive regulation lags behind.

Why it matters: As more fake images appear in political ads, the onus will be on the public to spot phony content.

Go deeper: Can you tell the difference between real and AI-generated images? Take our quiz.


4 Charts That Show Why AI Progress Is Unlikely to Slow Down — from time.com; with thanks to Donald Clark out on LinkedIn for this resource


The state of AI in 2023: Generative AI’s breakout year — from McKinsey.com

Table of Contents

  1. It’s early days still, but use of gen AI is already widespread
  2. Leading companies are already ahead with gen AI
  3. AI-related talent needs shift, and AI’s workforce effects are expected to be substantial
  4. With all eyes on gen AI, AI adoption and impact remain steady
  5. About the research

Top 10 Chief AI Officers — from aimagazine.com

The Chief AI Officer is a relatively new job role, yet it is becoming increasingly important as businesses invest further in AI.

Now more than ever, the workplace must prepare for AI and the immense opportunities, as well as challenges, that this evolving technology can provide. This position makes the employee responsible for guiding companies through complex AI tools, algorithms, and development. All of this works to ensure that the company stays ahead of the curve and capitalises on digital growth and transformation.


NVIDIA-related items

SIGGRAPH Special Address: NVIDIA CEO Brings Generative AI to LA Show — from blogs.nvidia.com by Brian Caulfield
Speaking to thousands of developers and graphics pros, Jensen Huang announces updated GH200 Grace Hopper Superchip, NVIDIA AI Workbench, updates NVIDIA Omniverse with generative AI.

The hottest commodity in AI right now isn’t ChatGPT — it’s the $40,000 chip that has sparked a frenzied spending spree — from businessinsider.com by Hasan Chowdhury

NVIDIA Releases Major Omniverse Upgrade with Generative AI and OpenUSD — from enterpriseai.news

Nvidia teams up with Hugging Face to offer cloud-based AI training — from techcrunch.com by Kyle Wiggers

Nvidia reveals new A.I. chip, says costs of running LLMs will ‘drop significantly’ — from cnbc.com by Kif Leswing

KEY POINTS

  • Nvidia announced a new chip designed to run artificial intelligence models on Tuesday.
  • Nvidia’s GH200 has the same GPU as the H100, Nvidia’s current highest-end AI chip, but pairs it with 141 gigabytes of cutting-edge memory, as well as a 72-core ARM central processor.
  • “This processor is designed for the scale-out of the world’s data centers,” Nvidia CEO Jensen Huang said Tuesday.

Nvidia Has A Monopoly On AI Chips … And It’s Only Growing — from theneurondaily.com by The Neuron

In layman’s terms: Nvidia is on fire, and they’re only turning up the heat.


AI-Powered War Machines: The Future of Warfare Is Here — from readwrite.com by Deanna Ritchie

The advancement of robotics and artificial intelligence (AI) has paved the way for a new era in warfare. Gone are the days of manned ships and traditional naval operations. Instead, the US Navy’s Task Force 59 is at the forefront of integrating AI and robotics into naval operations. With a fleet of autonomous robot ships, the Navy aims to revolutionize the way wars are fought at sea.

From DSC:
Crap. Ouch. Some things don’t seem to ever change. Few are surprised by this development…but still, this is a mess.


Sam Altman is already nervous about what AI might do in elections — from qz.com by Faustine Ngila; via Sam DeBrule
The OpenAI chief warned about the power of AI-generated media to potentially influence the vote

Altman, who has become the face of the recent hype cycle in AI development, feels that humans could be persuaded politically through conversations with chatbots or fooled by AI-generated media.


Your guide to AI: August 2023 — from nathanbenaich.substack.com by Nathan Benaich

Welcome to the latest issue of your guide to AI, an editorialized newsletter covering key developments in AI policy, research, industry, and startups. This special summer edition (while we’re producing the State of AI Report 2023!) covers our 7th annual Research and Applied AI Summit that we held in London on 23 June.

Below are some of our key takeaways from the event and all the talk videos can be found on the RAAIS YouTube channel here. If this piques your interest to join next year’s event, drop your details here.


Why generative AI is a game-changer for customer service workflows — from venturebeat.com via Superhuman

Gen AI, however, eliminates the lengthy search. It can parse a natural language query, synthesize the necessary information and serve up the answers the agent is looking for in a neatly summarized response, slashing call times dramatically.
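The workflow described here (parse a natural-language query, pull the relevant material, and summarize it for the agent) is essentially retrieval followed by synthesis. A minimal sketch, in which a toy keyword retriever stands in for a real search index and the final "summary" step is a placeholder for an LLM call; the knowledge base and scoring are illustrative, not any vendor's actual pipeline:

```python
# Sketch of the retrieve-then-summarize flow described above.
KNOWLEDGE_BASE = {
    "refund policy": "Refunds are issued within 14 days of purchase.",
    "shipping times": "Standard shipping takes 3-5 business days.",
    "password reset": "Use the 'Forgot password' link on the sign-in page.",
}

def retrieve(query):
    """Toy retriever: score each article by words shared with the query."""
    q_words = set(query.lower().split())
    scored = [
        (len(q_words & set((title + " " + body).lower().split())), body)
        for title, body in KNOWLEDGE_BASE.items()
    ]
    best_score, best_body = max(scored)
    return best_body if best_score > 0 else None

def answer(query):
    passage = retrieve(query)
    if passage is None:
        return "Escalating to a human agent."
    # In a real system an LLM would synthesize a reply from the retrieved
    # passages; here we simply return the best passage as the "summary".
    return passage

print(answer("How long does shipping take?"))
```

The time savings the article claims come from collapsing the agent's manual search into the `retrieve` step and the read-and-digest work into the synthesis step.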

BUT ALSO

Sam Altman: “AI Will Replace Customer Service Jobs First” — from theneurondaily.com

Excerpt:

Not only do its AI voices sound exactly like a human, but they can sound exactly like YOU.  All it takes is 6 (six!) seconds of your voice, and voila: it can replicate you saying any sentence in any tone, be it happy, sad, or angry.

The use cases are endless, but here are two immediate ones:

  1. Hyperpersonalized content.
    Imagine your favorite Netflix show but with every person hearing a slightly different script.
  2. Customer support agents. 
    We’re talking about ones that are actually helpful, a far cry from the norm!


AI has a Usability Problem — from news.theaiexchange.com
Why ChatGPT usage may actually be declining; using AI to become a spreadsheet pro

If you’re reading this and are using ChatGPT on a daily basis, congrats — you’re likely in the top couple of percent.

For everyone else – AI still has a major usability problem.

From DSC:
Agreed.



From the ‘godfathers of AI’ to newer people in the field: Here are 16 people you should know — and what they say about the possibilities and dangers of the technology. — from businessinsider.com by Lakshmi Varanasi


 

AI for Education Webinars — from youtube.com by Tom Barrett and others



Post-AI Assessment Design — from drphilippahardman.substack.com by Dr. Philippa Hardman
A simple, three-step guide on how to design assessments in a post-AI world

Excerpt:

Step 1: Write Inquiry-Based Objectives
Inquiry-based objectives focus not just on the acquisition of knowledge but also on the development of skills and behaviours, like critical thinking, problem-solving, collaboration and research skills.

They do this by requiring learners not just to recall or “describe back” concepts that are delivered via text, lecture or video. Instead, inquiry-based objectives require learners to construct their own understanding through the process of investigation, analysis and questioning.




Massive Disruption Now: What AI Means for Students, Educators, Administrators and Accreditation Boards
— from stefanbauschard.substack.com by Stefan Bauschard; via Will Richardson on LinkedIn
The choices many colleges and universities make regarding AI over the next 9 months will determine if they survive. The same may be true for schools.

Excerpts:

Just for a minute, consider how education would change if the following were true:

  • AIs “hallucinated” less than humans
  • AIs could write in our own voices
  • AIs could accurately do math
  • AIs understood the unique academic (and eventually developmental) needs of each student and adapt instruction to that student
  • AIs could teach anything any student wanted or needed to know any time of day or night
  • AIs could do this at a fraction of the cost of a human teacher or professor

Fall 2026 is three years away. Do you have a three-year plan? Perhaps you should scrap it and write a new one (or at least realize that your current one cannot survive). If you run an academic institution in 2026 the same way you ran it in 2022, you might as well run it like you would have in 1920. If you run an academic institution in 2030 (or any year when AI surpasses human intelligence) the same way you ran it in 2022, you might as well run it like you would have in 1820. AIs will become more intelligent than us, perhaps in 10-20 years (LeCun), though there could be unanticipated breakthroughs that lower the time frame to a few years or less (Bengio); it’s just a question of when, not “if.”


On one creative use of AI — from aiandacademia.substack.com by Bryan Alexander
A new practice with pedagogical possibilities

Excerpt:

Look at those material items again. The voiceover? Written by an AI and turned into audio by software. The images? Created by human prompts in Midjourney. The music is, I think, human created. And the idea came from a discussion between a human and an AI?

How might this play out in a college or university class?

Imagine assignments which require students to craft such a video. Start from film, media studies, or computer science classes. Students work through a process:


Generative Textbooks — from opencontent.org by David Wiley

Excerpt (emphasis DSC):

I continue to try to imagine ways generative AI can impact teaching and learning, including learning materials like textbooks. Earlier this week I started wondering – what if, in the future, educators didn’t write textbooks at all? What if, instead, we only wrote structured collections of highly crafted prompts? Instead of reading a static textbook in a linear fashion, the learner would use the prompts to interact with a large language model. These prompts could help learners ask for things like:

  • overviews and in-depth explanations of specific topics in a specific sequence,
  • examples that the learner finds personally relevant and interesting,
  • interactive practice – including open-ended exercises – with immediate, corrective feedback,
  • the structure of the relationships between ideas and concepts,
  • etc.
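Wiley's thought experiment can be sketched as a data structure: a "textbook" reduced to an ordered collection of educator-crafted prompt templates, each rendered against a learner's interests before being sent to a large language model. The class names, fields, and sample prompt below are a hypothetical illustration, not Wiley's specification.

```python
# Minimal sketch of a "generative textbook": instead of static chapters,
# an ordered set of crafted prompt templates that each learner instantiates.
from dataclasses import dataclass, field

@dataclass
class PromptUnit:
    topic: str
    template: str  # crafted by the educator; rendered per learner

@dataclass
class GenerativeTextbook:
    title: str
    units: list = field(default_factory=list)

    def render(self, unit_index, learner_interest):
        """Produce the prompt that would be sent to a large language model."""
        return self.units[unit_index].template.format(interest=learner_interest)

book = GenerativeTextbook(
    title="Intro Statistics (generative)",
    units=[
        PromptUnit(
            topic="sampling bias",
            template=(
                "Explain sampling bias step by step, then give an example "
                "drawn from {interest}, and finish with three open-ended "
                "practice questions with immediate, corrective feedback."
            ),
        ),
    ],
)

print(book.render(0, learner_interest="basketball analytics"))
```

Two learners working through the same "book" would thus see the same sequence of topics but entirely different explanations, examples, and exercises.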

Also relevant/see:




Generating The Future of Education with AI — from aixeducation.com

AI in Education -- An online-based conference taking place on August 5-6, 2023

Designed for K12 and Higher-Ed Educators & Administrators, this conference aims to provide a platform for educators, administrators, AI experts, students, parents, and EdTech leaders to discuss the impact of AI on education, address current challenges and potentials, share their perspectives and experiences, and explore innovative solutions. A special emphasis will be placed on including students’ voices in the conversation, highlighting their unique experiences and insights as the primary beneficiaries of these educational transformations.


How Teachers Are Using ChatGPT in Class — from edweek.org by Larry Ferlazzo

Excerpt:

The use of generative AI in K-12 settings is complex and still in its infancy. We need to consider how these tools can enhance student creativity, improve writing skills, and be transparent with students about how generative AI works so they can better understand its limitations. As with any new tech, our students will be exposed to it, and it is our task as educators to help them navigate this new territory as well-informed, curious explorers.


Japan emphasizes students’ comprehension of AI in new school guidelines — from japantimes.co.jp by Karin Kaneko; via The Rundown

Excerpt:

The education ministry has emphasized the need for students to understand artificial intelligence in new guidelines released Tuesday, setting out how generative AI can be integrated into schools and the precautions needed to address associated risks.

Students should comprehend the characteristics of AI, including its advantages and disadvantages, with the latter including personal information leakages and copyright infringement, before they use it, according to the guidelines. They explicitly state that passing off reports, essays or any other works produced by AI as one’s own is inappropriate.


AI’s Teachable Moment: How ChatGPT Is Transforming the Classroom — from cnet.com by Mark Serrels
Teachers and students are already harnessing the power of AI, with an eye toward the future.

Excerpt:

Thanks to the rapid development of artificial intelligence tools like Dall-E and ChatGPT, my brother-in-law has been wrestling with low-level anxiety: Is it a good idea to steer his son down this path when AI threatens to devalue the work of creatives? Will there be a job for someone with that skill set in 10 years? He’s unsure. But instead of burying his head in the sand, he’s doing what any tech-savvy parent would do: He’s teaching his son how to use AI.

In recent months the family has picked up subscriptions to AI services. Now, in addition to drawing and sculpting and making movies and video games, my nephew is creating the monsters of his dreams with Midjourney, a generative AI tool that uses language prompts to produce images.


The AI Dictionary for Educators — from blog.profjim.com

To bridge this knowledge gap, I decided to make a quick little dictionary of AI terms specifically tailored for educators worldwide. Initially created for my own benefit, I’ve reworked my own AI Dictionary for Educators and expanded it to help my fellow teachers embrace the advancements AI brings to education.


7 Strategies to Prepare Educators to Teach With AI — from edweek.org by Lauraine Langreo; NOTE: Behind paywall


 

Romans 12:10
Be devoted to one another in love. Honor one another above yourselves.

Proverbs 12:22
“The Lord detests lying lips, but he delights in people who are trustworthy.”

Psalms 65:3
When we were overwhelmed by sins, you forgave our transgressions.

Proverbs 13:10
Where there is strife, there is pride, but wisdom is found in those who take advice.

 

ChatGPT scams are the new crypto scams, Meta warns — from engadget.com by Karissa Bell
Meta plans to roll out new “Work Accounts” for businesses to guard against hacks.

Excerpt:

As the buzz around ChatGPT and other generative AI increases, so has scammers’ interest in the tech. In a new report published by Meta, the company says it’s seen a sharp uptick in malware disguised as ChatGPT and similar AI software.

In a statement, the company said that since March of 2023 alone, its researchers have discovered “ten malware families using ChatGPT and other similar themes to compromise accounts across the internet” and that it’s blocked more than 1,000 malicious links from its platform. According to Meta, the scams often involve mobile apps or browser extensions posing as ChatGPT tools. And while in some cases the tools do offer some ChatGPT functionality, their real purpose is to steal their users’ account credentials.

AI Is Reshaping the Battlefield and the Future of Warfare — from bloomberg.com by Jackie Davalos and Nate Lanxon
In this episode of AI IRL, Jackie Davalos and Nate Lanxon talk about one of the most dangerous applications of artificial intelligence: modern warfare

Excerpt:

Artificial intelligence has triggered an arms race with the potential to transform modern-day warfare. Countries are vying to develop cutting-edge technology at record speed, sparking concerns about whether we understand its power before it’s deployed.

From DSC:
I wish that humankind — especially those of us in the United States — would devote less money to warfare and more funding to education.

 

Fresh Voices on Legal Tech with Natalie Knowlton — from legaltalknetwork.com by Dennis Kennedy and Tom Mighell

EPISODE NOTES

Technology has become the main driver for increasing access to justice, and there are huge opportunities for legal service providers to leverage both existing and emerging tech to reach new clients. Dennis and Tom welcome Natalie Knowlton to discuss the current state of legal services, the justice gap, and ways technology is helping attorneys provide better and more affordable services to consumers. As always, stay tuned for the parting shots, that one tip, website, or observation that you can use the second the podcast ends.

New report on ChatGPT & generative AI in law firms shows opportunities abound, even as concerns persist — from thomsonreuters.com; via Brainyacts #43

Excerpt:

The survey, conducted in late March by the Thomson Reuters Institute, gathered insight from more than 440 respondent lawyers at large and midsize law firms in the United States, United Kingdom, and Canada. The survey forms the basis of a new report, ChatGPT & Generative AI within Law Firms, which takes a deep look at the evolving attitudes towards generative AI and ChatGPT within law firms, measuring awareness and adoption of the technology as well as lawyers’ views on its potential risks.

The report also reveals several key findings that deserve special attention from law firm leaders and other legal professionals as ChatGPT and generative AI evolve from concept to reality for the vast majority of the legal industry participants. These findings include:

    • Attitudes are evolving around this technology
    • Firms are taking a cautiously proactive approach
    • There’s a growing awareness of the risks

‘Legal Tech Lists’: 5 Lawyer Tropes That Were Upended By Legal Tech — from abovethelaw.com by Jared Correia
These common fictitious scenarios would be solved by technology.

Excerpt:

There are lots of tropes related to lawyers and law firms that frequently show up in works of fiction.  The thing is, those tropes are tropes because they’re sort of old; they’ve been around for a long time. Now, however, modern technology can solve a heck of a lot of those issues. So, for this edition of the “Reference Manual of Lists,” we’re going to relay a trope, offer an example, and talk about how legal tech actually fixes the problem today.

The Future of Generative Large Language Models and Potential Applications in LegalTech — from jdsupra.com by Johannes Scholtes and Geoffrey Vance

Excerpt:

If you made it this far, you should by now understand that ChatGPT is not by itself a search engine, nor an eDiscovery data reviewer, a translator, knowledge base, or tool for legal analytics. But it can contribute to these functionalities.

In-person vs. virtual ADR — How to choose? — from reuters.com by Eric Larson

Excerpt:

April 20, 2023 – Alternative dispute resolution (ADR), a common technique parties can use to settle disputes with the help of a third party, offers several unique benefits over traditional litigation. It is typically more cost-effective, confidential and generally a preferred method to resolving disputes. As a result, counsel and their clients often view ADR as a no-brainer. But the once simple decision to engage in ADR is now complicated by whether to proceed in-person, virtually or with a hybrid approach.

ChatGPT: A Lawyer’s Friend or Ethical Time Bomb? A Look at Professional Responsibility in the Age of AI — from jdsupra.com by Mitchell, Williams, Selig, Gates, & Woodyard

Excerpt:

The emergence of ChatGPT comes with tremendous promise of increased automation and efficiency. But at what cost? In this blog post, we’ll explore the potential ethical time bomb of using ChatGPT and examine the responsibility of lawyers in the age of AI.

 

 

‘Legal Tech Lists’: The 10 Most Significant Legal Tech Developments Since 2010 — from abovethelaw.com by Robert Ambrogi
Startups, AI, the cloud, and much more.

Excerpt:

Every year, I write a year-end wrap-up of the most significant developments in legal technology.

At the end of the past decade, I decided to look back on the most significant developments of the 2010s as a whole. It may well have been the most tumultuous decade ever in changing how legal services are delivered.

Here, I revisit those changes — and add a few post-2020 updates.

 
 
© 2022 | Daniel Christian