The 2024 Lawdragon 100 Leading AI & Legal Tech Advisors — from lawdragon.com by Katrina Dewey

These librarians, entrepreneurs, lawyers and technologists built the world where artificial intelligence threatens to upend life and law as we know it – and are now at the forefront of the battles raging within.

To create this first-of-its-kind guide, we cast a wide net with dozens of leaders in this area, took submissions, and consulted with some of the most esteemed gurus in legal tech. We also researched the cases most likely to have the biggest impact on AI, unearthing the dozen or so top trial lawyers tapped to lead the battles. Many of them bring copyright or IP backgrounds and more than a few are Bay Area based. Those denoted with an asterisk are members of our Hall of Fame.


Free Legal Research Startup descrybe.ai Now Has AI Summaries of All State Supreme and Appellate Opinions — from lawnext.com by Bob Ambrogi

descrybe.ai, a year-old legal research startup focused on using artificial intelligence to provide free and easy access to court opinions, has completed its goal of creating AI-generated summaries of all available state supreme and appellate court opinions from throughout the United States.

descrybe.ai describes its mission as democratizing access to legal information and leveling the playing field in legal research, particularly for smaller-firm lawyers, journalists, and members of the public.


 


[Report] Generative AI Top 150: The World’s Most Used AI Tools (Feb 2024) — from flexos.work by Daan van Rossum
FlexOS.work surveyed Generative AI platforms to reveal which get used most. While ChatGPT reigns supreme, countless AI platforms are used by millions.

As the FlexOS research study “Generative AI at Work” concluded based on a survey amongst knowledge workers, ChatGPT reigns supreme.

2. AI Tool Usage is Way Higher Than People Expect – Beating Netflix, Pinterest, Twitch.
As measured by data analysis platform Similarweb based on global web traffic tracking, the AI tools in this list generate over 3 billion monthly visits.

With 1.67 billion visits, ChatGPT represents over half of this traffic and is already bigger than Netflix, Microsoft, Pinterest, Twitch, and The New York Times.



Artificial Intelligence Act: MEPs adopt landmark law — from europarl.europa.eu

  • Safeguards on general purpose artificial intelligence
  • Limits on the use of biometric identification systems by law enforcement
  • Bans on social scoring and AI used to manipulate or exploit user vulnerabilities
  • Right of consumers to launch complaints and receive meaningful explanations


The untargeted scraping of facial images from CCTV footage to create facial recognition databases will be banned


A New Surge in Power Use Is Threatening U.S. Climate Goals — from nytimes.com by Brad Plumer and Nadja Popovich
A boom in data centers and factories is straining electric grids and propping up fossil fuels.

Something unusual is happening in America. Demand for electricity, which has stayed largely flat for two decades, has begun to surge.

Over the past year, electric utilities have nearly doubled their forecasts of how much additional power they’ll need by 2028 as they confront an unexpected explosion in the number of data centers, an abrupt resurgence in manufacturing driven by new federal laws, and millions of electric vehicles being plugged in.


OpenAI and the Fierce AI Industry Debate Over Open Source — from bloomberg.com by Rachel Metz

The tumult could seem like a distraction from the startup’s seemingly unending march toward AI advancement. But the tension, and the latest debate with Musk, illuminates a central question for OpenAI, along with the tech world at large as it’s increasingly consumed by artificial intelligence: Just how open should an AI company be?

The meaning of the word “open” in “OpenAI” seems to be a particular sticking point for both sides — something that you might think sounds, on the surface, pretty clear. But actual definitions are both complex and controversial.


Researchers develop AI-driven tool for near real-time cancer surveillance — from medicalxpress.com by Mark Alewine; via The Rundown AI
Artificial intelligence has delivered a major win for pathologists and researchers in the fight for improved cancer treatments and diagnoses.

In partnership with the National Cancer Institute, or NCI, researchers from the Department of Energy’s Oak Ridge National Laboratory and Louisiana State University developed a long-sequenced AI transformer capable of processing millions of pathology reports to provide experts researching cancer diagnoses and management with exponentially more accurate information on cancer reporting.


 

Also see:

Cognition Labs Blog

 

LinkedIn Learning: Workplace Learning Report 2024 — learning.linkedin.com

L&D powers the AI future

The AI era is here, and leaders across learning and talent development have a new mandate: help people and organizations rise to opportunity with speed and impact. 

As AI reshapes how people learn, work, and chart their careers, L&D sits at the center of organizational agility, delivering business innovation and critical skills. This report combines survey results, LinkedIn behavioral data, and wisdom from L&D pros around the globe to help you rewrite your playbook for the future of work. 

 

Amid explosive demand, America is running out of power — from washingtonpost.com by Evan Halper
AI and the boom in clean-tech manufacturing are pushing America’s power grid to the brink. Utilities can’t keep up.

Vast swaths of the United States are at risk of running short of power as electricity-hungry data centers and clean-technology factories proliferate around the country, leaving utilities and regulators grasping for credible plans to expand the nation’s creaking power grid.

A major factor behind the skyrocketing demand is the rapid innovation in artificial intelligence, which is driving the construction of large warehouses of computing infrastructure that require exponentially more power than traditional data centers. AI is also part of a huge scale-up of cloud computing. Tech firms like Amazon, Apple, Google, Meta and Microsoft are scouring the nation for sites for new data centers, and many lesser-known firms are also on the hunt.


The Obscene Energy Demands of A.I. — from newyorker.com by Elizabeth Kolbert
How can the world reach net zero if it keeps inventing new ways to consume energy?

“There’s a fundamental mismatch between this technology and environmental sustainability,” de Vries said. Recently, the world’s most prominent A.I. cheerleader, Sam Altman, the C.E.O. of OpenAI, voiced similar concerns, albeit with a different spin. “I think we still don’t appreciate the energy needs of this technology,” Altman said at a public appearance in Davos. He didn’t see how these needs could be met, he went on, “without a breakthrough.” He added, “We need fusion or we need, like, radically cheaper solar plus storage, or something, at massive scale—like, a scale that no one is really planning for.”


A generative AI reset: Rewiring to turn potential into value in 2024 — from mckinsey.com by Eric Lamarre, Alex Singla, Alexander Sukharevsky, and Rodney Zemmel; via Philippa Hardman
The generative AI payoff may only come when companies do deeper organizational surgery on their business.

  • Figure out where gen AI copilots can give you a real competitive advantage
  • Upskill the talent you have but be clear about the gen-AI-specific skills you need
  • Form a centralized team to establish standards that enable responsible scaling
  • Set up the technology architecture to scale
  • Ensure data quality and focus on unstructured data to fuel your models
  • Build trust and reusability to drive adoption and scale

AI Prompt Engineering Is Dead — from spectrum.ieee.org
Long live AI prompt engineering

Since ChatGPT dropped in the fall of 2022, everyone and their donkey has tried their hand at prompt engineering—finding a clever way to phrase your query to a large language model (LLM) or AI art or video generator to get the best results or sidestep protections. The Internet is replete with prompt-engineering guides, cheat sheets, and advice threads to help you get the most out of an LLM.

However, new research suggests that prompt engineering is best done by the model itself, and not by a human engineer. This has cast doubt on prompt engineering’s future—and increased suspicions that a fair portion of prompt-engineering jobs may be a passing fad, at least as the field is currently imagined.
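The research the Spectrum piece describes — letting the model optimize its own prompt rather than relying on a human engineer — boils down to a simple search loop: propose rewrites of a prompt, score each candidate on the target task, and keep the best one. The sketch below is a toy, hypothetical illustration of that loop; `score_prompt` and `propose_variants` stand in for real model calls and real task-based evaluation, which is where the actual systems do their work.

```python
def score_prompt(prompt: str) -> float:
    # Stand-in scorer: a real system would run the candidate prompt
    # against a held-out task set and measure output quality.
    # Here we simply reward prompts containing helpful instruction phrases.
    text = prompt.lower()
    bonus_phrases = ["step by step", "be concise", "cite your sources"]
    return sum(phrase in text for phrase in bonus_phrases)

def propose_variants(prompt: str) -> list[str]:
    # Stand-in for asking the model itself to rewrite the prompt.
    additions = ["Think step by step.", "Be concise.", "Cite your sources."]
    return [f"{prompt} {addition}" for addition in additions]

def optimize_prompt(seed: str, rounds: int = 3) -> str:
    # Greedy hill-climbing: each round, keep the highest-scoring rewrite.
    best, best_score = seed, score_prompt(seed)
    for _ in range(rounds):
        for candidate in propose_variants(best):
            candidate_score = score_prompt(candidate)
            if candidate_score > best_score:
                best, best_score = candidate, candidate_score
    return best

improved = optimize_prompt("Summarize this contract.")
```

With the toy scorer above, the seed prompt accumulates all three instruction phrases over the three rounds. Real automated prompt optimizers follow the same propose-score-keep structure, just with an LLM generating the variants and task accuracy supplying the score.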


What the birth of the spreadsheet teaches us about generative AI — from timharford.com by Tim Harford; via Sam DeBrule

There is one very clear parallel between the digital spreadsheet and generative AI: both are computer apps that collapse time. A task that might have taken hours or days can suddenly be completed in seconds. So accept for a moment the premise that the digital spreadsheet has something to teach us about generative AI. What lessons should we absorb?

It’s that pace of change that gives me pause. Ethan Mollick, author of the forthcoming book Co-Intelligence, tells me “if progress on generative AI stops now, the spreadsheet is not a bad analogy”. We’d get some dramatic shifts in the workplace, a technology that broadly empowers workers and creates good new jobs, and everything would be fine. But is it going to stop any time soon? Mollick doubts that, and so do I.


 

 

A Notre Dame Senior’s Perspective on AI in the Classroom — from learning.nd.edu by Sarah Ochocki; via Derek Bruff on LinkedIn

At this moment, as a college student trying to navigate the messy, fast-developing, and varied world of generative AI, I feel more confused than ever. I think most of us can share that feeling. There’s no roadmap on how to use AI in education, and there aren’t the typical years of proof to show something works. However, this promising new tool is sitting in front of us, and we would be foolish to not use it or talk about it.

I’ve used it to help me understand sample code I was viewing, rather than mindlessly trying to copy what I was trying to learn from. I’ve also used it to help prepare for a debate, practicing making counterarguments to the points it came up with.

AI alone cannot teach something; there needs to be critical interaction with the responses we are given. However, this is something that is true of any form of education. I could sit in a lecture for hours a week, but if I don’t do the homework or critically engage with the material, I don’t expect to learn anything.


A Map of Generative AI for Education — from medium.com by Laurence Holt; via GSV
An update to our map of the current state-of-the-art


Last ones (for now):


Survey: K-12 Students Want More Guidance on Using AI — from govtech.com by Lauraine Langreo
Research from the nonprofit National 4-H Council found that most 9- to 17-year-olds have an idea of what AI is and what it can do, but most would like help from adults in learning how to use different AI tools.

“Preparing young people for the workforce of the future means ensuring that they have a solid understanding of these new technologies that are reshaping our world,” Jill Bramble, the president and CEO of the National 4-H Council, said in a press release.

AI School Guidance Document Toolkit, with Free Comprehensive Review — from stefanbauschard.substack.com by Stefan Bauschard and Dr. Sabba Quidwai

 

From DSC:
I have had two instances recently where the phone-based systems (i.e., the Voice Response Units) haven’t worked…at all. They either wouldn’t let me do something as simple as updating my credit card number on file or checking on the status of a prescription. Human beings had to get involved to help me get the issues resolved. (Sounds a bit like the recent issues with the FAFSA forms, as I think about it.)

This is old hat, I know. This is common knowledge. But with AI, I’m increasingly concerned that the temptations are there for the MBAs/executives out there to lay off employees and boost their short-term profits (so that Wall Street will reward them and so that they can get their year-end bonuses).

The reminder/lesson for businesses and organizations of all types (including colleges and universities):

  • Unless you want to piss off and lose your customers, always allow your customers to stop using a VRU and go directly to a person that they can talk to.
  • Then empower those employees on the front lines as much as possible so that they can get the issues resolved for your customers.
  • Don’t think you are putting your MBA to good use by laying off your employees after you implement some new VRU system or AI-backed system. Don’t be too quick to think that you’re going to save all kinds of money by going with AI. This might be the case down the line, but I wouldn’t be too quick to get there yet. And even when you do get there, please allow us to talk to human beings.
 

How to Use AI Effectively Throughout Your Job Search — from higheredjobs.com by Alison Herget

Some recruiters and hiring managers have reported that job seekers’ use of AI has caused floods of applications to come in with formulaic versions of resumes and cover letters all seemingly produced by chatbots, she said.

So, if you’re looking for your next role, here are some tips for engaging in an AI-powered job search that will get you noticed by employers in the right way.

 

Hotshot, the Legal Learning Platform, Releases First Five in Planned Series of AI Training Videos for Lawyers — from lawnext.com by Bob Ambrogi

Hotshot, a learning platform for legal professionals, today released the first five courses in a planned series designed to teach lawyers and other legal professionals about artificial intelligence and its impact on law practice.

The overall set of AI videos is designed to teach lawyers about the technology, its use cases for law practice, its risks and ethical considerations, its impact on different practice areas, and more.

 

Accenture to acquire Udacity to build a learning platform focused on AI — from techcrunch.com by Ron Miller

Excerpt (emphasis DSC):

Accenture announced today that it would acquire the learning platform Udacity as part of an effort to build a learning platform focused on the growing interest in AI. While the company didn’t specify how much it paid for Udacity, it also announced a $1 billion investment in building a technology learning platform it’s calling LearnVantage.

“The rise of generative AI represents one of the most transformative changes in how work gets done and is driving a growing need for enterprises to train and upskill people in cloud, data and AI as they build their digital core and reinvent their enterprises,” Kishore Durg, global lead of Accenture LearnVantage said in a statement.




 

AI Is Becoming an Integral Part of the Instructional Design Process — from drphilippahardman.substack.com by Dr. Philippa Hardman
What I learned from 150 interviews with instructional designers

1. AI already plays a central part in the instructional design process
A whopping 95.3% of the instructional designers interviewed said they use AI in their day-to-day work. Those who don’t use AI cite access or permission issues as the primary reason that they haven’t integrated AI into their process.

2. AI is predominantly used at the design and development stages of the instructional design process 
When mapped to the ADDIE process, the breakdown of use cases goes as follows:

  • Analysis: 5.5% of use cases
  • Design: 32.1%
  • Development: 53.2%
  • Implementation: 1.8%
  • Evaluation: 7.3%

Speaking of AI in our learning ecosystems, also see:


Will AI Use in Schools Increase Next Year? 56 Percent of Educators Say Yes — from edweek.org by Alyson Klein

The majority of educators expect use of artificial intelligence tools will increase in their school or district over the next year, according to an EdWeek Research Center survey.

Applying Multimodal AI to L&D — from learningguild.com by Sarah Clark
We’re just starting to see Multimodal AI systems hit the spotlight. Unlike the text-based and image-generation AI tools we’ve seen before, multimodal systems can absorb and generate content in multiple formats – text, image, video, audio, etc.

 

 

How AI Is Already Transforming the News Business — from politico.com by Jack Shafer
An expert explains the promise and peril of artificial intelligence.

The early vibrations of AI have already been shaking the newsroom. One downside of the new technology surfaced at CNET and Sports Illustrated, where editors let AI run amok with disastrous results. Elsewhere in news media, AI is already writing headlines, managing paywalls to increase subscriptions, performing transcriptions, turning stories into audio feeds, discovering emerging stories, fact checking, copy editing and more.

Felix M. Simon, a doctoral candidate at Oxford, recently published a white paper about AI’s journalistic future that eclipses many early studies. Swinging a bat from a crouch that is neither doomer nor Utopian, Simon heralds both the downsides and promise of AI’s introduction into the newsroom and the publisher’s suite.

Unlike earlier technological revolutions, AI is poised to change the business at every level. It will become — if it already isn’t — the beginning of most story assignments and will become, for some, the new assignment editor. Used effectively, it promises to make news more accurate and timely. Used frivolously, it will spawn an ocean of spam. Wherever the production and distribution of news can be automated or made “smarter,” AI will surely step up. But the future has not yet been written, Simon counsels. AI in the newsroom will be only as bad or good as its developers and users make it.

Also see:

Artificial Intelligence in the News: How AI Retools, Rationalizes, and Reshapes Journalism and the Public Arena — from cjr.org by Felix Simon

TABLE OF CONTENTS



EMO: Emote Portrait Alive – Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions — from humanaigc.github.io by Linrui Tian, Qi Wang, Bang Zhang, and Liefeng Bo

We propose EMO, an expressive audio-driven portrait-video generation framework. Given a single reference image and vocal audio (e.g., talking or singing), our method generates vocal avatar videos with expressive facial expressions and varied head poses; it can also generate videos of any duration, depending on the length of the input audio.


Adobe previews new cutting-edge generative AI tools for crafting and editing custom audio — from blog.adobe.com by the Adobe Research Team

New experimental work from Adobe Research is set to change how people create and edit custom audio and music. An early-stage generative AI music generation and editing tool, Project Music GenAI Control allows creators to generate music from text prompts, and then have fine-grained control to edit that audio for their precise needs.

“With Project Music GenAI Control, generative AI becomes your co-creator. It helps people craft music for their projects, whether they’re broadcasters, or podcasters, or anyone else who needs audio that’s just the right mood, tone, and length,” says Nicholas Bryan, Senior Research Scientist at Adobe Research and one of the creators of the technologies.


How AI copyright lawsuits could make the whole industry go extinct — from theverge.com by Nilay Patel
The New York Times’ lawsuit against OpenAI is part of a broader, industry-shaking copyright challenge that could define the future of AI.

There’s a lot going on in the world of generative AI, but maybe the biggest is the increasing number of copyright lawsuits being filed against AI companies like OpenAI and Stability AI. So for this episode, we brought on Verge features editor Sarah Jeong, who’s a former lawyer just like me, and we’re going to talk about those cases and the main defense the AI companies are relying on in those copyright cases: an idea called fair use.


FCC officially declares AI-voiced robocalls illegal — from techcrunch.com by Devin Coldewey

The FCC’s war on robocalls has gained a new weapon in its arsenal with the declaration of AI-generated voices as “artificial” and therefore definitely against the law when used in automated calling scams. It may not stop the flood of fake Joe Bidens that will almost certainly trouble our phones this election season, but it won’t hurt, either.

The new rule, contemplated for months and telegraphed last week, isn’t actually a new rule — the FCC can’t just invent them with no due process. Robocalls are just a new term for something largely already prohibited under the Telephone Consumer Protection Act: artificial and pre-recorded messages being sent out willy-nilly to every number in the phone book (something that still existed when they drafted the law).


EIEIO…Chips Ahoy! — from dashmedia.co by Michael Moe, Brent Peus, and Owen Ritz


Here Come the AI Worms — from wired.com by Matt Burgess
Security researchers created an AI worm in a test environment that can automatically spread between generative AI agents—potentially stealing data and sending spam emails along the way.

Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers have created one of what they claim are the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process. “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn’t been seen before,” says Ben Nassi, a Cornell Tech researcher behind the research.

 

Using Generative AI throughout the Institution — from aiedusimplified.substack.com by Lance Eaton
8 lightning talks on generative AI and how to use it throughout higher education


The magic of AI to help educators with saving time. — from magicschool.ai; via Mrs. Kendall Sajdak


Getting Better Results out of Generative AI — from aiedusimplified.substack.com by Lance Eaton
The prompt to use before you prompt generative AI

Last month, I discussed a GPT that I had created around enhancing prompts. Since then, I have been actively using my Prompt Enhancer GPT to produce much more effective outputs. Last week, I did a series of mini-talks on generative AI in different parts of higher education (faculty development, human resources, grants, executive leadership, etc.) and structured each as “5 tips”. I included a final bonus tip in all of them — a tip that I heard from many afterwards was probably the most useful, especially because you can only access the Prompt Enhancer GPT if you are paying for ChatGPT.


Exploring the Opportunities and Challenges with Generative AI — from er.educause.edu by Veronica Diaz

Effectively integrating generative AI into higher education requires policy development, cross-functional engagement, ethical principles, risk assessments, collaboration with other institutions, and an exploration of diverse use cases.


Creating Guidelines for the Use of Gen AI Across Campus — from campustechnology.com by Rhea Kelly
The University of Kentucky has taken a transdisciplinary approach to developing guidelines and recommendations around generative AI, incorporating input from stakeholders across all areas of the institution. Here, the director of UK’s Center for the Enhancement of Learning and Teaching breaks down the structure and thinking behind that process.

That resulted in a set of instructional guidelines that we released in August of 2023 and updated in December of 2023. We’re also looking at guidelines for researchers at UK, and we’re currently in the process of working with our colleagues in the healthcare enterprise, UK Healthcare, to comb through the additional complexities of this technology in clinical care and to offer guidance and recommendations around those issues.


From Mean Drafts to Keen Emails — from automatedteach.com by Graham Clay

My experiences match with the results of the above studies. The second study cited above found that 83% of those students who haven’t used AI tools are “not interested in using them,” so it is no surprise that many students have little awareness of their nature. The third study cited above found that, “apart from 12% of students identifying as daily users,” most students’ use cases were “relatively unsophisticated” like summarizing or paraphrasing text.

For those of us in the AI-curious bubble, we need to continually work to stay current, but we also need to recognize that what we take to be “common knowledge” is far from common outside of the bubble.


What do superintendents need to know about artificial intelligence? — from k12dive.com by Roger Riddell
District leaders shared strategies and advice on ethics, responsible use, and the technology’s limitations at the National Conference on Education.

Despite general familiarity, however, technical knowledge shouldn’t be assumed for district leaders or others in the school community. For instance, it’s critical that any materials related to AI not be written in “techy talk” so they can be clearly understood, said Ann McMullan, project director for the Consortium for School Networking’s EmpowerED Superintendents Initiative.

To that end, CoSN, a nonprofit that promotes technological innovation in K-12, has released an array of AI resources to help superintendents stay ahead of the curve, including a one-page explainer that details definitions and guidelines to keep in mind as schools work with the emerging technology.


 

Generative AI’s environmental costs are soaring — and mostly secret — from nature.com by Kate Crawford
First-of-its-kind US bill would address the environmental costs of the technology, but there’s a long way to go.

Last month, OpenAI chief executive Sam Altman finally admitted what researchers have been saying for years — that the artificial intelligence (AI) industry is heading for an energy crisis. It’s an unusual admission. At the World Economic Forum’s annual meeting in Davos, Switzerland, Altman warned that the next wave of generative AI systems will consume vastly more power than expected, and that energy systems will struggle to cope. “There’s no way to get there without a breakthrough,” he said.

I’m glad he said it. I’ve seen consistent downplaying and denial about the AI industry’s environmental costs since I started publishing about them in 2018. Altman’s admission has got researchers, regulators and industry titans talking about the environmental impact of generative AI.


Get ready for the age of sovereign AI | Jensen Huang interview — from venturebeat.com by Dean Takahashi

Yesterday, Nvidia reported $22.1 billion in revenue for the fourth quarter of fiscal 2024 (ended January 31, 2024), easily topping Wall Street’s expectations. Revenue grew 265% from a year ago, thanks to the explosive growth of generative AI.

He also repeated a notion about “sovereign AI.” This means that countries are protecting the data of their users and companies are protecting data of employees through “sovereign AI,” where the large-language models are contained within the borders of the country or the company for safety purposes.



Yikes, Google — from theneurondaily.com by Noah Edelman
PLUS: racially diverse nazis…WTF?!

Google shoots itself in the foot.
Last week was the best AND worst week for Google re AI.

The good news is that its upcoming Gemini 1.5 Pro model showcases remarkable capabilities with its expansive context window (details forthcoming).

The bad news is Google’s AI chatbot “Gemini” is getting A LOT of heat after generating some outrageous responses. Take a look:

Also from the Daily:

  • Perplexity just dropped this new podcast, Discover Daily, that recaps the news in 3-4 minutes.
  • It already broke into the top 200 news podcasts within a week.
  • AND it’s all *100% AI-generated*.

Daily Digest: It’s Nvidia’s world…and we’re just living in it. — from bensbites.beehiiv.com

  • Nvidia is building a new type of data centre called an AI factory. Every company (biotech, self-driving, manufacturing, etc.) will need an AI factory.
  • Jensen is looking forward to foundational robotics and state space models. According to him, foundational robotics could have a breakthrough next year.
  • The crunch for Nvidia GPUs is here to stay. It won’t be able to catch up on supply this year, and probably not next year either.
  • A new generation of GPUs called Blackwell is coming out, and the performance of Blackwell is off the charts.
  • Nvidia’s business is now roughly 70% inference and 30% training, meaning AI is getting into users’ hands.

Gemma: Introducing new state-of-the-art open models — from blog.google


 

 

From DSC:
This first item is related to the legal field being able to deal with today’s issues:

The Best Online Law School Programs (2024) — from abovethelaw.com by Staci Zaretsky
A tasty little rankings treat before the full Princeton Review best law schools ranking is released.

Several law schools now offer online JD programs that have become as rigorous as their on-campus counterparts. For many JD candidates, an online law degree might even be the smarter choice. Online programs offer flexibility, affordability, access to innovative technologies, students from a diversity of career backgrounds, and global opportunities.

Voila! Feast your eyes upon the Best Online JD Programs at Law School for 2024 (in alphabetical order):

  • Mitchell Hamline School of Law – Hybrid J.D.
  • Monterey College of Law – Hybrid Online J.D.
  • Purdue Global Law School – Online J.D.
  • Southwestern Law School – Online J.D.
  • Syracuse University – J.D. Interactive
  • University of Dayton School of Law – Online Hybrid J.D.
  • University of New Hampshire – Hybrid J.D.

DSC: FINALLY!!! Online learning hits law schools (at least in a limited fashion)!!! Maybe there’s hope yet for the American Bar Association and for America’s legal system to be able to deal with the emerging technologies — and the issues presented therein — in the 21st century!!! Because if we can’t even get caught up to where numerous institutions of higher education were back at the turn of this century, we don’t have as much hope in the legal field being able to address things like AI, XR, cryptocurrency, blockchain, and more.


Meet KL3M: the first Legal Large Language Model. — from 273ventures.com
KL3M is the first model family trained from scratch on clean, legally-permissible data for enterprise use.


Advocate, advise, and accompany — from jordanfurlong.substack.com by Jordan Furlong
These are the three essential roles lawyers will play in the post-AI era. We need to start preparing legal education, lawyer licensing, and law practices to adapt.

Consider this scenario:

Ten years from now, Generative AI has proven capable of a stunning range of legal activities. Not only can it accurately write legal documents and conduct legal research and apply law to facts, it can reliably oversee legal document production, handle contract negotiations, monitor regulatory compliance, render legal opinions, and much more. Lawyers are no longer needed to carry out these previously billable tasks or even to double-check the AI’s performance. Tasks that once occupied 80% of lawyers’ billable time have been automated.

What are the chances this scenario unfolds within the next ten years? You can decide that likelihood for yourself, but I think anything above 1% represents the potential for major disruption to the legal profession.

Also from Jordan, see:


Top 5 Strategies to Excel in the 2024 Legal Sector with Colin Levy — from discrepancyai.com by Lisen Kaci

We have gathered, from Colin Levy’s insights, the top five strategies that legal professionals can implement to excel in this transformational era – bringing them together with technology.


Legal Tech’s Predictions for AI, Workflow Automation, and Data Analytics in 2024 — from jdsupra.com by Mitratech Holdings, Inc.

They need information like:

  • Why did we go over budget?
  • Why did we go to trial?
  • How many invoices sat with each attorney?

Going further than just legal spend, analytics on volume of work and diversity metrics can help legal teams make the business case they need to drive important initiatives and decisions forward. And a key differentiator of top-performing companies is the ability to get all of this data in one place, which is why Mitratech was thrilled to unveil PlatoBI, an embedded analytics platform powered by Snowflake, earlier this year with several exciting AI and Analytic enhancements.


DOJ appoints first-ever chief AI officer – Will law firms follow? — from legaltechnology.com by Emma Griffiths


AI’s promise and problem for law and learning — from reuters.com by John Bandler

Also worrisome is that AI will be used as a crutch that short-circuits learning. Some people look for shortcuts. What is the effect of AI on that learning process and its results, both for students and for lawyers who use AI to draft documents and conduct research?


 
© 2024 | Daniel Christian