How new AI chatbots could help teachers with their toughest problems — from fastcompany.com by Greg Toppo
At least two groups are working on bots that would make peer-reviewed research, buried in expensive academic journals, accessible for everyday use.

But this fall, a few educators are quietly charting a different course they believe could change everything: At least two groups are pushing to create new AI chatbots that would offer teachers unlimited access to sometimes confusing and often paywalled peer-reviewed research on the topics that most bedevil them.

Their aspiration is to offer new tools that are more focused and helpful than wide-ranging ones like ChatGPT, which tends to stumble over research questions with competing findings. Like many juveniles faced with questions they can’t answer, it has a frustrating tendency to make things up.

Using ChatGPT Custom Instructions for Fun and Profit — from every.to by Dan Shipper
How to 10x ChatGPT with personalized answers 

Then, I turned Custom Instructions on and gave it the same starting prompt. Here are a few of the responses (emphases are mine):

Purpose: “What’s the primary goal of the course? Is it to augment your existing offerings at Every, serve as a standalone product, or something else?”

Content Depth: “Given your love for detailed, intellectual writing, how in-depth will you go on each topic?”

Guest Contributions: “Do you plan to involve guest experts, perhaps even some of your writing heroes or team members like Kate Lee and Evan Armstrong?”

Time Investment: “Given that you’re already grappling with being too opportunistic and dutiful, how will you allocate time for this without sidelining your creative work or burdening yourself?”

AI for comics — from aiandacademia.substack.com by Bryan Alexander
Exploring an entertaining new Hugging Face application

As generative AI grows, more uses appear. Beyond text and images, other domains for creation pop up as companies, groups, and individuals try out new functions.

Today’s case in point is AI Comic Factory from Hugging Face.* This service will generate a couple of comic book pages based on your text input. It gives you choices of comic style and page layout as well.

Items from Tom Barrett’s Promptcraft: AI for a better learning ecosystem

This new AI video tool clones your voice in 7 languages — and it’s blowing up  — from sg.news.yahoo.com by Christoph Schwaiger

How many languages do you speak? Thanks to AI, that number could be as many as seven. Los Angeles-based AI video platform HeyGen has launched a new tool that clones your voice from a video and translates what you’re saying into seven different languages. If that wasn’t enough, it also syncs your lips to your new voice so the final clip looks (and sounds) as realistic as possible.

Microsoft and Project Gutenberg release over 5,000 free audiobooks — from the-decoder.com by Matthias Bastian

Microsoft and Project Gutenberg have used AI technologies to create more than 5,000 free audiobooks with high-quality synthetic voices.

For the project, the researchers combined advances in machine learning, automatic text selection (which texts are read aloud, which are not), and natural-sounding speech synthesis systems.

 

 

Generative A.I. + Law – Background, Applications and Use Cases Including GPT-4 Passes the Bar Exam – Speaker Deck — from speakerdeck.com by Professor Daniel Martin Katz

 

 

 


Also relevant/see:

AI-Powered Virtual Legal Assistants Transform Client Services — from abovethelaw.com by Olga V. Mack
They can respond more succinctly than ever to answer client questions, triage incoming requests, provide details, and trigger automated workflows that ensure lawyers handle legal issues efficiently and effectively.

Artificial Intelligence in Law: How AI Can Reshape the Legal Industry — from jdsupra.com

 

The Prompt #14: Your Guide to Custom Instructions — from noisemedia.ai by Alex Banks

Whilst we typically cover a single ‘prompt’ to use with ChatGPT, today we’re exploring a new feature now available to everyone: custom instructions.

You provide specific directions for ChatGPT leading to greater control of the output. It’s all about guiding the AI to get the responses you really want.

To get started:
Log into ChatGPT → click on your name/email in the bottom-left corner → select ‘Custom instructions’
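
For those who prefer to script this rather than click through the settings, roughly the same effect can be approximated outside the ChatGPT interface by pinning your background and response preferences into a system message via the OpenAI API. A minimal sketch, assuming the openai Python package (the 2023-era ChatCompletion interface) and a GPT-4 model id:

```python
# Hedged sketch: approximating ChatGPT Custom Instructions with a system message.
# The package interface and model id are assumptions, not the Custom Instructions
# feature itself (which lives in the ChatGPT settings UI).
import openai

custom_instructions = (
    "About me: I run a small online publication and write long-form essays.\n"
    "How to respond: ask clarifying questions before answering, keep answers "
    "concise, and prefer concrete examples over generalities."
)

response = openai.ChatCompletion.create(
    model="gpt-4",  # assumed model id
    messages=[
        {"role": "system", "content": custom_instructions},
        {"role": "user", "content": "Help me plan an online course about writing with AI."},
    ],
)
print(response["choices"][0]["message"]["content"])
```

The system message plays the same role as the two Custom Instructions boxes: what the model should know about you, and how you want it to respond.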


Meet Zoom AI Companion, your new AI assistant! Unlock the benefits with a paid Zoom account — from blog.zoom.us by Smita Hashim

We’re excited to introduce you to AI Companion (formerly Zoom IQ), your new generative AI assistant across the Zoom platform. AI Companion empowers individuals by helping them be more productive, connect and collaborate with teammates, and improve their skills.

Envision being able to interact with AI Companion through a conversational interface and ask for help on a whole range of tasks, similarly to how you would with a real assistant. You’ll be able to ask it to help prepare for your upcoming meeting, get a consolidated summary of prior Zoom meetings and relevant chat threads, and even find relevant documents and tickets from connected third-party applications with your permission.

From DSC:
“You can ask AI Companion to catch you up on what you missed during a meeting in progress.”

And what if some key details were missed? Should you rely on this? I’d treat this with care/caution myself.



A.I.’s un-learning problem: Researchers say it’s virtually impossible to make an A.I. model ‘forget’ the things it learns from private user data — from fortune.com by Stephen Pastis (behind paywall)

That’s because, as it turns out, it’s nearly impossible to remove a user’s data from a trained A.I. model without resetting the model and forfeiting the extensive money and effort put into training it. To use a human analogy, once an A.I. has “seen” something, there is no easy way to tell the model to “forget” what it saw. And deleting the model entirely is also surprisingly difficult.

This represents one of the thorniest unresolved challenges of our incipient artificial intelligence era, alongside issues like A.I. “hallucinations” and the difficulties of explaining certain A.I. outputs.


More companies see ChatGPT training as a hot job perk for office workers — from cnbc.com by Mikaela Cohen

Key points:

  • Workplaces filled with artificial intelligence are closer to becoming a reality, making it essential that workers know how to use generative AI.
  • Offering specific AI chatbot training to current employees could be your next best talent retention tactic.
  • 90% of business leaders see ChatGPT as a beneficial skill in job applicants, according to a report from career site Resume Builder.

OpenAI Plugs ChatGPT Into Canva to Sharpen Its Competitive Edge in AI — from decrypt.co by Jose Antonio Lanz
Now ChatGPT Plus users can “talk” to Canva directly from OpenAI’s bot, making their workflow easier.

This strategic move aims to make the process of creating visuals such as logos, banners, and more, even more simple for businesses and entrepreneurs.

This latest integration could improve the way users generate visuals by offering a streamlined and user-friendly approach to digital design.


From DSC:
This Tweet addresses a likely component of our future learning ecosystems:


Large language models aren’t people. Let’s stop testing them as if they were. — from technologyreview.com by Will Douglas Heaven
With hopes and fears about this technology running wild, it’s time to agree on what it can and can’t do.

That’s why a growing number of researchers—computer scientists, cognitive scientists, neuroscientists, linguists—want to overhaul the way they are assessed, calling for more rigorous and exhaustive evaluation. Some think that the practice of scoring machines on human tests is wrongheaded, period, and should be ditched.

“There’s a lot of anthropomorphizing going on,” she says. “And that’s kind of coloring the way that we think about these systems and how we test them.”

“There is a long history of developing methods to test the human mind,” says Laura Weidinger, a senior research scientist at Google DeepMind. “With large language models producing text that seems so human-like, it is tempting to assume that human psychology tests will be useful for evaluating them. But that’s not true: human psychology tests rely on many assumptions that may not hold for large language models.”


We Analyzed Millions of ChatGPT User Sessions: Visits are Down 29% since May, Programming Assistance is 30% of Use — from sparktoro.com by Rand Fishkin

In concert with the fine folks at Datos, whose opt-in, anonymized panel of 20M devices (desktop and mobile, covering 200+ countries) provides outstanding insight into what real people are doing on the web, we undertook a challenging project to answer at least some of the mystery surrounding ChatGPT.



Crypto in ‘arms race’ against AI-powered scams — Quantstamp co-founder — from cointelegraph.com by Tom Mitchelhill
Quantstamp’s Richard Ma explained that the coming surge in sophisticated AI phishing scams could pose an existential threat to crypto organizations.

With the field of artificial intelligence evolving at near breakneck speed, scammers now have access to tools that can help them execute highly sophisticated attacks en masse, warns the co-founder of Web3 security firm Quantstamp.


 

Introductory comments from DSC:

Sometimes people and vendors write about AI’s capabilities in such a glowingly positive way. It seems like AI can do everything in the world. And while I appreciate the growing capabilities of Large Language Models (LLMs) and the like, there are some things I don’t want AI-driven apps to do.

For example, I get why AI can be helpful in correcting my misspellings, my grammatical errors, and the like. That said, I don’t want AI to write my emails for me. I want to write my own emails. I want to communicate what I want to communicate. I don’t want to outsource my communication. 

And what if an AI tool summarizes an email thread in such a way that I miss some key pieces of information? Hmmm…not good.

Ok, enough soapboxing. I’ll continue with some resources.


ChatGPT Enterprise

Introducing ChatGPT Enterprise — from openai.com
Get enterprise-grade security & privacy and the most powerful version of ChatGPT yet.

We’re launching ChatGPT Enterprise, which offers enterprise-grade security and privacy, unlimited higher-speed GPT-4 access, longer context windows for processing longer inputs, advanced data analysis capabilities, customization options, and much more. We believe AI can assist and elevate every aspect of our working lives and make teams more creative and productive. Today marks another step towards an AI assistant for work that helps with any task, is customized for your organization, and that protects your company data.

Enterprise-grade security & privacy and the most powerful version of ChatGPT yet. — from openai.com


NVIDIA

Nvidia’s Q2 earnings prove it’s the big winner in the generative AI boom — from techcrunch.com by Kirsten Korosec

Nvidia Quarterly Earnings Report Q2 Smashes Expectations At $13.5B — from techbusinessnews.com.au
Nvidia’s quarterly earnings report (Q2) smashed expectations, coming in at $13.5B, more than doubling prior earnings of $6.7B. The chipmaker also projected that revenue for the October quarter would reach $16B.


MISC

OpenAI Passes $1 Billion Revenue Pace as Big Companies Boost AI Spending — from theinformation.com by Amir Efrati and Aaron Holmes

OpenAI is currently on pace to generate more than $1 billion in revenue over the next 12 months from the sale of artificial intelligence software and the computing capacity that powers it. That’s far ahead of revenue projections the company previously shared with its shareholders, according to a person with direct knowledge of the situation.

OpenAI’s GPTBot blocked by major websites and publishers — from the-decoder.com by Matthias Bastian
An emerging chatbot ecosystem builds on existing web content and could displace traditional websites. At the same time, licensing and financing are largely unresolved.

OpenAI offers publishers and website operators an opt-out if they prefer not to make their content available to chatbots and AI models for free. This can be done by blocking OpenAI’s web crawler “GPTBot” via the robots.txt file. The bot collects content to improve future AI models, according to OpenAI.
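
For site operators, the opt-out OpenAI describes comes down to two lines in the site’s robots.txt file (per OpenAI’s GPTBot documentation); a site can also disallow only specific directories rather than the entire domain:

```
User-agent: GPTBot
Disallow: /
```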

Major media companies including the New York Times, CNN, Reuters, Chicago Tribune, ABC, and Australian Community Media (ACM) are now blocking GPTBot. Other web-based content providers such as Amazon, Wikihow, and Quora are also blocking the OpenAI crawler.

Introducing Code Llama, a state-of-the-art large language model for coding  — from ai.meta.com

Takeaways re: Code Llama:

  • Is a state-of-the-art LLM capable of generating code, and natural language about code, from both code and natural language prompts.
  • Is free for research and commercial use.
  • Is built on top of Llama 2 and is available in three models…
  • In our own benchmark testing, Code Llama outperformed state-of-the-art publicly available LLMs on code tasks
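
To make those takeaways concrete, here is a hedged sketch of loading one of the released Code Llama checkpoints through Hugging Face transformers and asking it to complete a function. The model id, hardware assumptions, and generation settings are illustrative choices, not Meta’s official usage guide:

```python
# Hedged sketch: code completion with a Code Llama checkpoint via transformers.
# Assumes the "codellama/CodeLlama-7b-hf" model id on the Hugging Face Hub and
# enough GPU/CPU memory for the 7B model; device_map="auto" requires accelerate.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-7b-hf"  # assumed model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n'
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```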

Key Highlights of Google Cloud Next ‘23— from analyticsindiamag.com by Shritama Saha
Meta’s Llama 2, Anthropic’s Claude 2, and TII’s Falcon join Model Garden, expanding model variety.

AI finally beats humans at a real-life sport— drone racing — from nature.com by Dan Fox
The new system combines simulation with onboard sensing and computation.

From DSC:
This is scary — not at all comforting to me. Militaries around the world continue their jockeying to be the most dominant, powerful, and effective killers of humankind. That definitely includes the United States and China, but certainly others as well. And below is another alarming item, also pointing out the downsides of how we use technologies.

The Next Wave of Scams Will Be Deepfake Video Calls From Your Boss — from bloomberg.com by Margi Murphy; behind paywall

Cybercriminals are constantly searching for new ways to trick people. One of the more recent additions to their arsenal was voice simulation software.

10 Great Colleges For Studying Artificial Intelligence — from forbes.com by Sim Tumay

The debut of ChatGPT in November created angst for college admission officers and professors worried they would be flooded by student essays written with the undisclosed assistance of artificial intelligence. But the explosion of interest in AI has benefits for higher education, including a new generation of students interested in studying and working in the field. In response, universities are revising their curriculums to educate AI engineers.

 

OpenAI angles to put ChatGPT in classrooms with special tutor prompts — from techcrunch.com by Devin Coldewey

Taking the bull by the horns, the company has proposed a few ways for teachers to put the system to use… outside its usual role as “research assistant” for procrastinating students.

Teaching with AI -- a guide from OpenAI


Q2 Earnings Roundup – EdTech Generative AI — from aieducation.substack.com by Claire Zau
A roundup of LLM and AI discussions from Q2 EdTech Earnings

In this piece, we’ll be breaking down how a few of edtech’s most important companies are thinking about AI developments.

  • Duolingo
  • Powerschool
  • Coursera
  • Docebo
  • Instructure
  • Nerdy
 

From DSC:
Yesterday, I posted the item about Google’s NotebookLM research tool. Excerpt:

What if you could have a conversation with your notes? That question has consumed a corner of the internet recently, as companies like Dropbox, Box, Notion, and others have built generative AI tools that let you interact with and create new things from the data you already have in their systems.

Google’s version of this is called NotebookLM. It’s an AI-powered research tool that is meant to help you organize and interact with your own notes.

That got me to thinking…

What if the presenter/teacher/professor/trainer/preacher provided a set of notes for the AI to compare to the readers’ notes? 

That way, the AI could see the discrepancies between what the presenter wanted their audience to learn/hear and what was actually being learned/heard. In a sort of digital Socratic Method, the AI could then generate some leading questions to get the audience member to check their thinking/understanding of the topic.

The end result would be that the main points were properly communicated/learned/received.
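
A speculative sketch of how that comparison step might look, assuming an OpenAI-style chat API (the package interface and model id are assumptions, and this is the idea described above, not an existing NotebookLM feature):

```python
# Speculative sketch: compare presenter notes with learner notes and generate
# Socratic-style follow-up questions for the gaps. Model id is an assumption.
import openai

presenter_notes = "Key points the presenter intended the audience to take away..."
learner_notes = "What one audience member actually wrote down..."

prompt = (
    "Compare the presenter's intended key points with the learner's notes.\n"
    "1) List any intended points the learner missed or appears to have misunderstood.\n"
    "2) For each gap, write one leading (Socratic) question that nudges the learner\n"
    "   to re-examine that point without giving the answer away.\n\n"
    f"Presenter's notes:\n{presenter_notes}\n\nLearner's notes:\n{learner_notes}"
)

response = openai.ChatCompletion.create(
    model="gpt-4",  # assumed model id
    messages=[{"role": "user", "content": prompt}],
)
print(response["choices"][0]["message"]["content"])
```

In practice, both sets of notes would be pulled from whatever notebook or LMS tool the presenter and the audience are already using.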

 

Google’s AI-powered note-taking app is the messy beginning of something great — from theverge.com by David Pierce; via AI Insider
NotebookLM is a neat research tool with some big ideas. It’s still rough and new, but it feels like Google is onto something.

Excerpts (emphasis DSC):

What if you could have a conversation with your notes? That question has consumed a corner of the internet recently, as companies like Dropbox, Box, Notion, and others have built generative AI tools that let you interact with and create new things from the data you already have in their systems.

Google’s version of this is called NotebookLM. It’s an AI-powered research tool that is meant to help you organize and interact with your own notes. 

Right now, it’s really just a prototype, but a small team inside the company has been trying to figure out what an AI notebook might look like.

 

Excerpts from the Too Long Didn’t Read (TLDR) section from AIxEducation Day 1: My Takeaways — from stefanbauschard.substack.com by Stefan Bauschard (emphasis DSC)

* There was a lot of talk about learning bots. This talk included the benefits of 1:1 tutoring, access to education for those who don’t currently have it (developing world), the ability to do things for which we currently don’t have enough teachers and support staff (speech pathology), individualized instruction (it will be good at this soon), and stuff that it is already good at (24/7 availability, language tutoring, immediate feedback regarding argumentation and genre (not facts :), putting students on the right track, comprehensive feedback, more critical feedback).

* Students are united. The student organizers and those who spoke at the conference have concerns about future employment, want to learn to use generative AI, and express concern about being prepared for the “real world.” They also all want a say in how generative AI is used in the college classroom. Many professors spoke about the importance of having conversations with students and involving them in the creation of AI policies as well.

* I think it’s fair to say that all professors who spoke thought students were going to use generative AI regardless of whether or not it was permitted, though some hoped for honesty.

* No professor who spoke thought using a plagiarism detector was a good idea.

* Everyone thought that significant advancements in AI technology were inevitable.

* Almost everyone expressed being overwhelmed by the rate of change.


Stefan recommended the following resource:


 


Gen-AI Movie Trailer For Sci Fi Epic “Genesis” — from forbes.com by Charlie Fink

The movie trailer for “Genesis,” created with AI, is so convincing it caused a stir on Twitter [on July 27]. That’s how I found out about it. Created by Nicolas Neubert, a senior product designer who works for Elli by Volkswagen in Germany, the “Genesis” trailer promotes a dystopian sci-fi epic reminiscent of the Terminator. There is no movie, of course, only the trailer exists, but this is neither a gag nor a parody. It’s in a class of its own. Eerily made by man, but not.



Google’s water use is soaring. AI is only going to make it worse. — from businessinsider.com by Hugh Langley

Google just published its 2023 environmental report, and one thing is for certain: The company’s water use is soaring.

The internet giant said it consumed 5.6 billion gallons of water in 2022, the equivalent of 37 golf courses. Most of that — 5.2 billion gallons — was used for the company’s data centers, a 20% increase on the amount Google reported the year prior.


We think prompt engineering (learning to converse with an AI) is overrated. — from the Neuron

We think prompt engineering (learning to converse with an AI) is overrated. Yup, we said it. We think the future of chat interfaces will be a combination of preloading context and then allowing AI to guide you to the information you seek.

From DSC:
Agreed. I think we’ll see a lot more interface updates and changes to make things easier to use, find, and develop.


Radar Trends to Watch: August 2023 — from oreilly.com by Mike Loukides
Developments in Programming, Web, Security, and More

Artificial Intelligence continues to dominate the news. In the past month, we’ve seen a number of major updates to language models: Claude 2, with its 100,000 token context limit; LLaMA 2, with (relatively) liberal restrictions on use; and Stable Diffusion XL, a significantly more capable version of Stable Diffusion. Does Claude 2’s huge context really change what the model can do? And what role will open access and open source language models have as commercial applications develop?


Try out Google ‘TextFX’ and its 10 creative AI tools for rappers, writers — from 9to5google.com by Abner Li; via Barsee – AI Valley 

Google Lab Sessions are collaborations between “visionaries from all realms of human endeavor” and the company’s latest AI technology. [On 8/2/23], Google released TextFX as an “experiment to demonstrate how generative language technologies can empower the creativity and workflows of artists and creators” with Lupe Fiasco.

Google’s TextFX includes 10 tools and is powered by the PaLM 2 large language model via the PaLM API. Meant to aid in the creative process of rappers, writers, and other wordsmiths, it is part of Google Labs.

 

AI for Education Webinars — from youtube.com by Tom Barrett and others

AI for education -- a webinar series by Tom Barrett and company


Post-AI Assessment Design — from drphilippahardman.substack.com by Dr. Philippa Hardman
A simple, three-step guide on how to design assessments in a post-AI world

Excerpt:

Step 1: Write Inquiry-Based Objectives
Inquiry-based objectives focus not just on the acquisition of knowledge but also on the development of skills and behaviours, like critical thinking, problem-solving, collaboration and research skills.

They do this by requiring learners not just to recall or “describe back” concepts that are delivered via text, lecture or video. Instead, inquiry-based objectives require learners to construct their own understanding through the process of investigation, analysis and questioning.

Step 1 -- Write Inquiry-Based Objectives



Massive Disruption Now: What AI Means for Students, Educators, Administrators and Accreditation Boards
— from stefanbauschard.substack.com by Stefan Bauschard; via Will Richardson on LinkedIn
The choices many colleges and universities make regarding AI over the next 9 months will determine if they survive. The same may be true for schools.

Excerpts:

Just for a minute, consider how education would change if the following were true:

  • AIs “hallucinated” less than humans
  • AIs could write in our own voices
  • AIs could accurately do math
  • AIs understood the unique academic (and eventually developmental) needs of each student and adapt instruction to that student
  • AIs could teach anything any student wanted or needed to know, any time of day or night
  • AIs could do this at a fraction of the cost of a human teacher or professor

Fall 2026 is three years away. Do you have a three-year plan? Perhaps you should scrap it and write a new one (or at least realize that your current one cannot survive). If you run an academic institution in 2026 the same way you ran it in 2022, you might as well run it like you would have in 1920. If you run an academic institution in 2030 (or any year when AI surpasses human intelligence) the same way you ran it in 2022, you might as well run it like you would have in 1820. AIs will become more intelligent than us, perhaps in 10-20 years (LeCun), though there could be unanticipated breakthroughs that lower the time frame to a few years or less (Bengio); it’s just a question of when, not “if.”


On one creative use of AI — from aiandacademia.substack.com by Bryan Alexander
A new practice with pedagogical possibilities

Excerpt:

Look at those material items again. The voiceover? Written by an AI and turned into audio by software. The images? Created by human prompts in Midjourney. The music is, I think, human created. And the idea came from a discussion between a human and an AI?

How might this play out in a college or university class?

Imagine assignments which require students to craft such a video. Start from film, media studies, or computer science classes. Students work through a process:


Generative Textbooks — from opencontent.org by David Wiley

Excerpt (emphasis DSC):

I continue to try to imagine ways generative AI can impact teaching and learning, including learning materials like textbooks. Earlier this week I started wondering – what if, in the future, educators didn’t write textbooks at all? What if, instead, we only wrote structured collections of highly crafted prompts? Instead of reading a static textbook in a linear fashion, the learner would use the prompts to interact with a large language model. These prompts could help learners ask for things like:

  • overviews and in-depth explanations of specific topics in a specific sequence,
  • examples that the learner finds personally relevant and interesting,
  • interactive practice – including open-ended exercises – with immediate, corrective feedback,
  • the structure of the relationships between ideas and concepts,
  • etc.
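
One way to picture such a “structured collection of highly crafted prompts” is as plain data: each unit of a chapter is a carefully written prompt template that the learner sends to a large language model, optionally filled in with their own interests. A speculative sketch of that structure (the names, fields, and example chapter are assumptions, not Wiley’s design):

```python
# Speculative sketch: a "generative textbook" chapter as an ordered list of
# crafted prompt templates rather than static text. All names are illustrative.
from dataclasses import dataclass

@dataclass
class PromptUnit:
    title: str
    prompt_template: str  # may reference the learner's own context, e.g. {interest}

chapter_photosynthesis = [
    PromptUnit(
        title="Overview",
        prompt_template="Explain photosynthesis at an introductory level, building from light capture to glucose production.",
    ),
    PromptUnit(
        title="Personally relevant example",
        prompt_template="Give an example of photosynthesis connected to my interest in {interest}.",
    ),
    PromptUnit(
        title="Practice with feedback",
        prompt_template="Ask me three open-ended questions about photosynthesis, then critique my answers with corrective feedback.",
    ),
]

for unit in chapter_photosynthesis:
    print(unit.title, "->", unit.prompt_template.format(interest="gardening"))
```

Each rendered prompt would then be sent to whatever model the learner is using, in sequence or on demand.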

Also relevant/see:




Generating The Future of Education with AI — from aixeducation.com

AI in Education -- An online-based conference taking place on August 5-6, 2023

Designed for K12 and Higher-Ed Educators & Administrators, this conference aims to provide a platform for educators, administrators, AI experts, students, parents, and EdTech leaders to discuss the impact of AI on education, address current challenges and potentials, share their perspectives and experiences, and explore innovative solutions. A special emphasis will be placed on including students’ voices in the conversation, highlighting their unique experiences and insights as the primary beneficiaries of these educational transformations.


How Teachers Are Using ChatGPT in Class — from edweek.org by Larry Ferlazzo

Excerpt:

The use of generative AI in K-12 settings is complex and still in its infancy. We need to consider how these tools can enhance student creativity, improve writing skills, and be transparent with students about how generative AI works so they can better understand its limitations. As with any new tech, our students will be exposed to it, and it is our task as educators to help them navigate this new territory as well-informed, curious explorers.


Japan emphasizes students’ comprehension of AI in new school guidelines — from japantimes.co.jp by Karin Kaneko; via The Rundown

Excerpt:

The education ministry has emphasized the need for students to understand artificial intelligence in new guidelines released Tuesday, setting out how generative AI can be integrated into schools and the precautions needed to address associated risks.

Students should comprehend the characteristics of AI, including its advantages and disadvantages, with the latter including personal information leakages and copyright infringement, before they use it, according to the guidelines. They explicitly state that passing off reports, essays or any other works produced by AI as one’s own is inappropriate.


AI’s Teachable Moment: How ChatGPT Is Transforming the Classroom — from cnet.com by Mark Serrels
Teachers and students are already harnessing the power of AI, with an eye toward the future.

Excerpt:

Thanks to the rapid development of artificial intelligence tools like Dall-E and ChatGPT, my brother-in-law has been wrestling with low-level anxiety: Is it a good idea to steer his son down this path when AI threatens to devalue the work of creatives? Will there be a job for someone with that skill set in 10 years? He’s unsure. But instead of burying his head in the sand, he’s doing what any tech-savvy parent would do: He’s teaching his son how to use AI.

In recent months the family has picked up subscriptions to AI services. Now, in addition to drawing and sculpting and making movies and video games, my nephew is creating the monsters of his dreams with Midjourney, a generative AI tool that uses language prompts to produce images.


The AI Dictionary for Educators — from blog.profjim.com

To bridge this knowledge gap, I decided to make a quick little dictionary of AI terms specifically tailored for educators worldwide. Initially created for my own benefit, I’ve reworked my own AI Dictionary for Educators and expanded it to help my fellow teachers embrace the advancements AI brings to education.


7 Strategies to Prepare Educators to Teach With AI — from edweek.org by Lauraine Langreo; NOTE: Behind paywall


 

Law Firms Are Recruiting More AI Experts as Clients Demand ‘More for Less’ — from bloomberg.com by Irina Anghel
Data scientists, software engineers among roles being sought | Legal services seen as vulnerable to ChatGPT-type software

Excerpt (emphasis DSC):

Chatbots, data scientists, software engineers. As clients demand more for less, law firms are hiring growing numbers of staff who’ve studied technology, not tort law, to try and stand out from their rivals.

Law firms are advertising for experts in artificial intelligence “more than ever before,” says Chris Tart-Roberts, head of the legal technology practice at Macfarlanes, describing a trend he says began about six months ago.


Legal is the industry with the second-highest potential for automation



AI Will Threaten Law Firm Jobs, But Innovators Will Thrive — from law.com

Excerpts:

What You Need to Know

  • Law firm leaders and consultants are unsure of how AI use will ultimately impact the legal workforce.
  • Consultants are advising law firms and attorneys alike to adapt to the use of generative AI, viewing this as an opportunity for attorneys to learn new skills and for law firms to take a fresh look at their business models.

Split between foreseeing job cuts and opportunities to introduce new skills and additional efficiencies into the office, firm leaders and consultants remain uncertain about the impact of artificial intelligence on the legal workforce.

However, one thing is certain: law firms and attorneys need to adapt and learn how to integrate this new technology in their business models, according to consultants. 


AI Lawyer — A personal AI lawyer at your fingertips — from ailawyer.pro

AI Lawyer

From DSC:
I hope that we will see a lot more of this kind of thing!
I’m counting on it.


Revolutionize Your Legal Education with Law School AI — from law-school-ai.vercel.app
Your Ultimate Study Partner

Are you overwhelmed by countless cases, complex legal concepts, and endless readings? Law School AI is here to help. Our cutting-edge AI chatbot is designed to provide law students with an accessible, efficient, and engaging way to learn the law. Our chatbot simplifies complex legal topics, delivers personalized study guidance, and answers your questions in real-time – making your law school journey a whole lot easier.


Job title of the future: metaverse lawyer — from technologyreview.com by Amanda Smith
Madaline Zannes’s virtual offices come with breakout rooms, an art gallery… and a bar.

Excerpt:

Lot #651 on Somnium Space belongs to Zannes Law, a Toronto-based law firm. In this seven-level metaverse office, principal lawyer Madaline Zannes conducts private consultations with clients, meets people wandering in with legal questions, hosts conferences, and gives guest lectures. Zannes says that her metaverse office allows for a more immersive, imaginative client experience. She hired a custom metaverse builder to create the space from scratch—with breakout rooms, presentation stages, offices to rent, an art gallery, and a rooftop bar.


A Literal Generative AI Discussion: How AI Could Reshape Law — from geeklawblog.com by Greg Lambert

Excerpt:

Greg spoke with an AI guest named Justis for this episode. Justis, powered by OpenAI’s GPT-4, was able to have a natural conversation with Greg and provide insightful perspectives on the use of generative AI in the legal industry, specifically in law firms.

In the first part of their discussion, Justis gave an overview of the legal industry’s interest in and uncertainty around adopting generative AI. While many law firm leaders recognize its potential, some are unsure of how it fits into legal work or worry about risks. Justis pointed to examples of firms exploring AI and said letting lawyers experiment with the tools could help identify use cases.


Robots aren’t representing us in court but here are 7 legal tech startups transforming the legal system — from tech.eu by Cate Lawrence
Legal tech startups are stepping up to the bar, using tech such as AI, teleoperations, and apps to bring justice to more people than ever before. This increases efficiency, reduces delays, and lowers costs, expanding legal access.


Putting Humans First: Solving Real-Life Problems With Legal Innovation — from abovethelaw.com by Olga Mack
Placing the end-user at the heart of the process allows innovators to identify pain points and create solutions that directly address the unique needs and challenges individuals and businesses face.

 

AI21 Labs concludes largest Turing Test experiment to date — from ai21.com
As part of an ongoing social and educational research project, AI21 Labs is thrilled to share the initial results of what has now become the largest Turing Test in history by scale.

People found it easier to identify a fellow human. When talking to humans, participants guessed right in 73% of the cases. When talking to bots, participants guessed right in just 60% of the cases.

 


From DSC:
I also wanted to highlight the item below, which Barsee also mentioned above, as it will likely hit the world of education and training as well:



Also relevant/see:


 

The perils of consulting an Electric Monk — from jordanfurlong.substack.com by Jordan Furlong
Don’t blame ChatGPT for the infamous incident of the made-up cases. And don’t be too hard on the lawyer, either. We’re all susceptible to a machine that tells us exactly what we want to hear.

Excerpt:

But then the “ChatGPT Lawyer” story happened, and all hell broke loose on LawTwitter and LawLinkedIn, and I felt I needed to make three points, one of which involves an extra-terrestrial robot.

My first two points are pretty straightforward:

  1. The tsunami of gleeful overreaction from lawyers on social media, urging bans on the use of ChatGPT and predicting prison time for the hapless practitioner, speaks not only to their fear and loathing of generative AI, but also to their desperate hope that it’s all really nothing but hype and won’t disturb their happy status quo. Good luck with that.
  2. The condemnation and mockery of the lawyer himself, who made a bad mistake but who’s been buried by an utterly disproportionate avalanche of derision, speaks to the lack of compassion in this profession, whose members should pray that their worst day as a lawyer never makes it to the front page of The New York Times. There but for the grace of God.

Are you looking for evidence to support the side that’s hired you? Or are you looking for the truth? Choosing the first option has never been easier. It’s also never been more dangerous.


As referenced topic-wise by Jordan above, also see:

A lawyer used ChatGPT to prepare a court filing. It went horribly awry. — from cbsnews.com by Megan Cerullo


What I learned at CLOC 2023 — from alexofftherecord.com by Alex Su
This week I attended the premier legal operations conference. Here’s what I heard.

Excerpt:

Theme 1: Generative AI isn’t going anywhere
This was a huge theme throughout the conference. Whether it was vendors announcing GPT integrations, or panels discussing how to use AI, there was just an enormous amount of attention on generative AI. I’m certainly no stranger to all this hype, but I’d always wondered if it was all from my Silicon Valley bubble. It wasn’t.

What was driving all this interest in AI? Well, the ubiquity of ChatGPT. Everyone’s talking about it and trying to figure out how to incorporate it into the business. And not just in the U.S. It’s a worldwide trend. Word on the street is that it’s a CEO-level priority. Everywhere. So naturally it trickles down to the legal department.


We need to talk about ChatGPT — from mnbar.org by Damien Riehl

Excerpt:

How well do LLMs perform on legal tasks? 

Personal experience and anecdotal evidence indicate that LLMs’ current state provides impressive output in various legal tasks. Specifically, they provide extraordinary results on the following:

  • Drafting counterarguments.
  • Exploring client fact inquiries (e.g., “How did you lose money?”).
  • Ideating voir dire questions (and rating responses).
  • Summarizing statutes.
  • Calculating works’ copyright expiration.
  • Drafting privacy playbooks.
  • Drafting motions to dismiss.
  • Responding to cease-and-desist letters.
  • Crafting decision trees.
  • Creating chronologies.
  • Drafting contracts.
  • Extracting key elements from depositions.

 

 

Corporate legal departments see use cases for generative AI & ChatGPT, new report finds — from thomsonreuters.com


New legal tech tools showcased at CLOC 2023 — from legaldive.com by Robert Freedman
Innovations include a better way to evaluate law firm proposals, centralize all in-house legal requests in a single intake function and analyze agreements.

Guest post: CLOC 2023 – Key insights into how to drive value during changing economic times — from legaltechnology.com by Valerie Chan

Excerpt:

Typically, Legalweek has always been focused on eDiscovery, while CLOC was focused on matter management and contracts management. This time I noticed more balance in the vendor hall and sessions, with a broader range of services providers than before, including staffing providers, contracts management vendors and other new entrants in addition to eDiscovery vendors.

One theme dominated the show floor conversations: Over and over, the legal operators I talked with said if their technologies and vendors were able to develop better workflows, achieve more cost savings and report on the metrics that mattered to their GC, the GC could function as more of a business advisor to the C-suite.


AI is already being used in the legal system—we need to pay more attention to how we use it — from phys.org by Morgiane Noel

Excerpt:

While ChatGPT and the use of algorithms in social media get lots of attention, an important area where AI promises to have an impact is law.

The idea of AI deciding guilt in legal proceedings may seem far-fetched, but it’s one we now need to give serious consideration to.

That’s because it raises questions about the compatibility of AI with conducting fair trials. The EU has enacted legislation designed to govern how AI can and can’t be used in criminal law.


Legal Innovation as a Service, Now Enhanced with AI — from denniskennedy.com by Dennis Kennedy

Excerpt:

Over the last semester, I’ve been teaching two classes at Michigan State University College of Law, one called AI and the Law and the other called New Technologies and the Law, and a class at University of Michigan Law School called Legal Technology Literacy and Leadership. All three classes pushed me to keep up-to-date with the nearly-daily developments in AI, ChatGPT, and LLMs. I also did quite a lot of experiments, primarily with ChatGPT, especially GPT-4, and with Notion AI.


Emerging Tech Trends: The rise of GPT tools in contract analysis — from abajournal.com by Nicole Black

Excerpt:

Below, you’ll learn about many of the solutions currently available. Keep in mind that this overview is not exhaustive. There are other similar tools currently available and the number of products in this category will undoubtedly increase in the months to come.


Politicians need to learn how AI works—fast — link.wired.com

Excerpt:

This week we’ll hear from someone who has deep experience in assessing and regulating potentially harmful uses of automation and artificial intelligence—valuable skills at a moment when many people, including lawmakers, are freaking out about the chaos that the technology could cause.


 

 