How AI Is Already Transforming the News Business — from politico.com by Jack Shafer
An expert explains the promise and peril of artificial intelligence.

The early vibrations of AI have already been shaking the newsroom. One downside of the new technology surfaced at CNET and Sports Illustrated, where editors let AI run amok with disastrous results. Elsewhere in news media, AI is already writing headlines, managing paywalls to increase subscriptions, performing transcriptions, turning stories into audio feeds, discovering emerging stories, fact-checking, copy editing, and more.

Felix M. Simon, a doctoral candidate at Oxford, recently published a white paper about AI’s journalistic future that eclipses many early studies. Swinging a bat from a crouch that is neither doomer nor Utopian, Simon heralds both the downsides and promise of AI’s introduction into the newsroom and the publisher’s suite.

Unlike earlier technological revolutions, AI is poised to change the business at every level. It will become — if it isn’t already — the beginning of most story assignments and will become, for some, the new assignment editor. Used effectively, it promises to make news more accurate and timely. Used frivolously, it will spawn an ocean of spam. Wherever the production and distribution of news can be automated or made “smarter,” AI will surely step up. But the future has not yet been written, Simon counsels. AI in the newsroom will be only as bad or good as its developers and users make it.

Also see:

Artificial Intelligence in the News: How AI Retools, Rationalizes, and Reshapes Journalism and the Public Arena — from cjr.org by Felix Simon


EMO: Emote Portrait Alive – Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions — from humanaigc.github.io by Linrui Tian, Qi Wang, Bang Zhang, and Liefeng Bo

We propose EMO, an expressive audio-driven portrait-video generation framework. Given a single reference image and vocal audio (e.g., talking or singing), our method can generate vocal avatar videos with expressive facial expressions and varied head poses; the videos can be of any duration, depending on the length of the input audio.


Adobe previews new cutting-edge generative AI tools for crafting and editing custom audio — from blog.adobe.com by the Adobe Research Team

New experimental work from Adobe Research is set to change how people create and edit custom audio and music. An early-stage generative AI music generation and editing tool, Project Music GenAI Control allows creators to generate music from text prompts, and then have fine-grained control to edit that audio for their precise needs.

“With Project Music GenAI Control, generative AI becomes your co-creator. It helps people craft music for their projects, whether they’re broadcasters, or podcasters, or anyone else who needs audio that’s just the right mood, tone, and length,” says Nicholas Bryan, Senior Research Scientist at Adobe Research and one of the creators of the technologies.


How AI copyright lawsuits could make the whole industry go extinct — from theverge.com by Nilay Patel
The New York Times’ lawsuit against OpenAI is part of a broader, industry-shaking copyright challenge that could define the future of AI.

There’s a lot going on in the world of generative AI, but maybe the biggest is the increasing number of copyright lawsuits being filed against AI companies like OpenAI and Stability AI. So for this episode, we brought on Verge features editor Sarah Jeong, who’s a former lawyer just like me, and we’re going to talk about those cases and the main defense the AI companies are relying on in those copyright cases: an idea called fair use.


FCC officially declares AI-voiced robocalls illegal — from techcrunch.com by Devin Coldewey

The FCC’s war on robocalls has gained a new weapon in its arsenal with the declaration of AI-generated voices as “artificial” and therefore definitely against the law when used in automated calling scams. It may not stop the flood of fake Joe Bidens that will almost certainly trouble our phones this election season, but it won’t hurt, either.

The new rule, contemplated for months and telegraphed last week, isn’t actually a new rule — the FCC can’t just invent them with no due process. Robocalls are just a new term for something largely already prohibited under the Telephone Consumer Protection Act: artificial and pre-recorded messages being sent out willy-nilly to every number in the phone book (something that still existed when they drafted the law).


EIEIO…Chips Ahoy! — from dashmedia.co by Michael Moe, Brent Peus, and Owen Ritz


Here Come the AI Worms — from wired.com by Matt Burgess
Security researchers created an AI worm in a test environment that can automatically spread between generative AI agents—potentially stealing data and sending spam emails along the way.

Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers have created one of what they claim are the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process. “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn’t been seen before,” says Ben Nassi, a Cornell Tech researcher behind the research.


Using Generative AI throughout the Institution — from aiedusimplified.substack.com by Lance Eaton
8 lightning talks on generative AI and how to use it throughout higher education


The magic of AI to help educators save time. — from magicschool.ai; via Mrs. Kendall Sajdak


Getting Better Results out of Generative AI — from aiedusimplified.substack.com by Lance Eaton
The prompt to use before you prompt generative AI

Last month, I discussed a GPT that I had created around enhancing prompts. Since then, I have been actively using my Prompt Enhancer GPT to generate much more effective outputs. Last week, I did a series of mini-talks on generative AI in different parts of higher education (faculty development, human resources, grants, executive leadership, etc.) and structured each as “5 tips”. I included a final bonus tip in all of them — a tip that many people afterwards told me was probably the most useful one, especially because you can only access the Prompt Enhancer GPT if you are paying for ChatGPT.


Exploring the Opportunities and Challenges with Generative AI — from er.educause.edu by Veronica Diaz

Effectively integrating generative AI into higher education requires policy development, cross-functional engagement, ethical principles, risk assessments, collaboration with other institutions, and an exploration of diverse use cases.


Creating Guidelines for the Use of Gen AI Across Campus — from campustechnology.com by Rhea Kelly
The University of Kentucky has taken a transdisciplinary approach to developing guidelines and recommendations around generative AI, incorporating input from stakeholders across all areas of the institution. Here, the director of UK’s Center for the Enhancement of Learning and Teaching breaks down the structure and thinking behind that process.

That resulted in a set of instructional guidelines that we released in August of 2023 and updated in December of 2023. We’re also looking at guidelines for researchers at UK, and we’re currently in the process of working with our colleagues in the healthcare enterprise, UK Healthcare, to comb through the additional complexities of this technology in clinical care and to offer guidance and recommendations around those issues.


From Mean Drafts to Keen Emails — from automatedteach.com by Graham Clay

My experiences match with the results of the above studies. The second study cited above found that 83% of those students who haven’t used AI tools are “not interested in using them,” so it is no surprise that many students have little awareness of their nature. The third study cited above found that, “apart from 12% of students identifying as daily users,” most students’ use cases were “relatively unsophisticated” like summarizing or paraphrasing text.

Those of us in the AI-curious bubble need to work continually to stay current, but we also need to recognize that what we take to be “common knowledge” is far from common outside of the bubble.


What do superintendents need to know about artificial intelligence? — from k12dive.com by Roger Riddell
District leaders shared strategies and advice on ethics, responsible use, and the technology’s limitations at the National Conference on Education.

Despite general familiarity, however, technical knowledge shouldn’t be assumed for district leaders or others in the school community. For instance, it’s critical that any materials related to AI not be written in “techy talk” so they can be clearly understood, said Ann McMullan, project director for the Consortium for School Networking’s EmpowerED Superintendents Initiative.

To that end, CoSN, a nonprofit that promotes technological innovation in K-12, has released an array of AI resources to help superintendents stay ahead of the curve, including a one-page explainer that details definitions and guidelines to keep in mind as schools work with the emerging technology.



Generative AI’s environmental costs are soaring — and mostly secret — from nature.com by Kate Crawford
First-of-its-kind US bill would address the environmental costs of the technology, but there’s a long way to go.

Last month, OpenAI chief executive Sam Altman finally admitted what researchers have been saying for years — that the artificial intelligence (AI) industry is heading for an energy crisis. It’s an unusual admission. At the World Economic Forum’s annual meeting in Davos, Switzerland, Altman warned that the next wave of generative AI systems will consume vastly more power than expected, and that energy systems will struggle to cope. “There’s no way to get there without a breakthrough,” he said.

I’m glad he said it. I’ve seen consistent downplaying and denial about the AI industry’s environmental costs since I started publishing about them in 2018. Altman’s admission has got researchers, regulators and industry titans talking about the environmental impact of generative AI.


Get ready for the age of sovereign AI | Jensen Huang interview — from venturebeat.com by Dean Takahashi

Yesterday, Nvidia reported $22.1 billion in revenue for the fourth quarter of its fiscal 2024 (ended January 31, 2024), easily topping Wall Street’s expectations. Revenue grew 265% from a year ago, thanks to the explosive growth of generative AI.

He also repeated a notion about “sovereign AI”: countries protect the data of their users, and companies protect the data of their employees, by keeping the large language models within the borders of the country or the company for safety purposes.



Yikes, Google — from theneurondaily.com by Noah Edelman
PLUS: racially diverse nazis…WTF?!

Google shoots itself in the foot.
Last week was the best AND worst week for Google re AI.

The good news is that its upcoming Gemini 1.5 Pro model showcases remarkable capabilities with its expansive context window (details forthcoming).

The bad news is Google’s AI chatbot “Gemini” is getting A LOT of heat after generating some outrageous responses. Take a look:

Also from the Daily:

  • Perplexity just dropped this new podcast, Discover Daily, that recaps the news in 3-4 minutes.
  • It already broke into the top 200 news pods within a week.
  • AND it’s all *100% AI-generated*.

Daily Digest: It’s Nvidia’s world…and we’re just living in it. — from bensbites.beehiiv.com

  • Nvidia is building a new type of data centre called the AI factory. Every company—biotech, self-driving, manufacturing, etc.—will need an AI factory.
  • Jensen is looking forward to foundational robotics and state space models. According to him, foundational robotics could have a breakthrough next year.
  • The crunch for Nvidia GPUs is here to stay. Nvidia won’t be able to catch up on supply this year, and probably not next year either.
  • A new generation of GPUs called Blackwell is coming out, and the performance of Blackwell is off the charts.
  • Nvidia’s business is now roughly 70% inference and 30% training, meaning AI is getting into users’ hands.

Gemma: Introducing new state-of-the-art open models  — from blog.google



From DSC:
This first item is related to the legal field being able to deal with today’s issues:

The Best Online Law School Programs (2024) — from abovethelaw.com by Staci Zaretsky
A tasty little rankings treat before the full Princeton Review best law schools ranking is released.

Several law schools now offer online JD programs that have become as rigorous as their on-campus counterparts. For many JD candidates, an online law degree might even be the smarter choice. Online programs offer flexibility, affordability, access to innovative technologies, students from a diversity of career backgrounds, and global opportunities.

Voila! Feast your eyes upon the Best Online JD Programs at Law School for 2024 (in alphabetical order):

  • Mitchell Hamline School of Law – Hybrid J.D.
  • Monterey College of Law – Hybrid Online J.D.
  • Purdue Global Law School – Online J.D.
  • Southwestern Law School – Online J.D.
  • Syracuse University – J.D. Interactive
  • University of Dayton School of Law – Online Hybrid J.D.
  • University of New Hampshire – Hybrid J.D.

DSC: FINALLY!!! Online learning hits law schools (at least in a limited fashion)!!! Maybe there’s hope yet for the American Bar Association and for America’s legal system to be able to deal with the emerging technologies — and the issues presented therein — in the 21st century!!! Because if we can’t even get caught up to where numerous institutions of higher education were back at the turn of this century, we don’t have as much hope in the legal field being able to address things like AI, XR, cryptocurrency, blockchain, and more.


Meet KL3M: the first Legal Large Language Model. — from 273ventures.com
KL3M is the first model family trained from scratch on clean, legally-permissible data for enterprise use.


Advocate, advise, and accompany — from jordanfurlong.substack.com by Jordan Furlong
These are the three essential roles lawyers will play in the post-AI era. We need to start preparing legal education, lawyer licensing, and law practices to adapt.

Consider this scenario:

Ten years from now, Generative AI has proven capable of a stunning range of legal activities. Not only can it accurately write legal documents and conduct legal research and apply law to facts, it can reliably oversee legal document production, handle contract negotiations, monitor regulatory compliance, render legal opinions, and much more. Lawyers are no longer needed to carry out these previously billable tasks or even to double-check the AI’s performance. Tasks that once occupied 80% of lawyers’ billable time have been automated.

What are the chances this scenario unfolds within the next ten years? You can decide that likelihood for yourself, but I think anything above 1% represents the potential for major disruption to the legal profession.

Also from Jordan, see:


Top 5 Strategies to Excel in the 2024 Legal Sector with Colin Levy — from discrepancyai.com by Lisen Kaci

Drawing on Colin Levy’s insights, we have gathered the top five strategies that legal professionals can implement, together with technology, to excel in this transformational era.


Legal Tech’s Predictions for AI, Workflow Automation, and Data Analytics in 2024 — from jdsupra.com by Mitratech Holdings, Inc.

They need information like:

  • Why did we go over budget?
  • Why did we go to trial?
  • How many invoices sat with each attorney?

Going further than just legal spend, analytics on volume of work and diversity metrics can help legal teams make the business case they need to drive important initiatives and decisions forward. And a key differentiator of top-performing companies is the ability to get all of this data in one place, which is why Mitratech was thrilled to unveil PlatoBI, an embedded analytics platform powered by Snowflake, earlier this year with several exciting AI and Analytic enhancements.


DOJ appoints first-ever chief AI officer – Will law firms follow? — from legaltechnology.com by Emma Griffiths


AI’s promise and problem for law and learning — from reuters.com by John Bandler

Also worrisome is that AI will be used as a crutch that short-circuits learning. Some people look for shortcuts. What is the effect of AI on that learning process and its results, both for students and for lawyers who use AI to draft documents and conduct research?



Adobe Brings Conversational AI to Trillions of PDFs with the New AI Assistant in Reader and Acrobat — from news.adobe.com; via AI Secret


SAN JOSE, Calif. – [On 2/20/24], Adobe (Nasdaq:ADBE) introduced AI Assistant in beta, a new generative AI-powered conversational engine in Reader and Acrobat.

Simply open Reader or Acrobat and start working with the new capabilities, including:

  • AI Assistant: AI Assistant recommends questions based on a PDF’s content and answers questions about what’s in the document – all through an intuitive conversational interface.
  • Generative summary: Get a quick understanding of the content inside long documents with short overviews in easy-to-read formats.
  • Intelligent citations: Adobe’s custom attribution engine and proprietary AI generate citations so customers can easily verify the source of AI Assistant’s answers.
  • Easy navigation:
  • Formatted output:
  • Respect for customer data:  
  • Beyond PDF: Customers can use AI Assistant with all kinds of document formats (Word, PowerPoint, meeting transcripts, etc.)

Along these lines, also see:


5 ways Sora AI will change the creator economy and how to take advantage of that — from techthatmatters.beehiiv.com by Harsh Makadia

Essential skills to thrive with Sora AI
The realm of video editing isn’t just about cutting and splicing anymore.

A Video Editor should learn a diverse set of skills to earn money, such as:

  • Prompt Writing
  • Software Mastery
  • Problem-solving skills
  • Collaboration and communication skills
  • Creative storytelling and visual aesthetics

Invest in those skills that give you a competitive edge.


The text file that runs the internet — from theverge.com by David Pierce
For decades, robots.txt governed the behavior of web crawlers. But as unscrupulous AI companies seek out more and more data, the basic social contract of the web is falling apart. 
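The robots.txt convention the article describes is simple enough to exercise directly; Python’s standard library ships a parser for it. A minimal sketch (the crawler names and site below are hypothetical, chosen only to illustrate an allow/deny split):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: a search crawler is welcome everywhere,
# but a (made-up) AI training crawler is barred from the whole site.
robots_txt = """\
User-agent: Googlebot
Disallow:

User-agent: ExampleAIBot
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# An empty Disallow means "nothing is off-limits" for that agent.
print(rp.can_fetch("Googlebot", "https://example.com/article"))     # True
print(rp.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
```

As the article stresses, this is a voluntary social contract: `can_fetch` only reports what the site owner has asked of a well-behaved crawler; nothing technically prevents an unscrupulous one from ignoring it.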



AI fast-tracks research to find battery material that halves lithium use — from inavateonthenet.net

Using AI, the team was able to plow through 32.6 million possible battery materials in 80 hours, a task the team estimates would have taken them 20 years to do.


Other interesting items from inavateonthenet.net:

Medical ‘hologram’ market to reach 6.8 bn by 2029

Providing audio for open spaces


Text to video via OpenAI’s Sora. (I had taken this screenshot on the 15th, but am posting it now.)

We’re teaching AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction.

Introducing Sora, our text-to-video model. Sora can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt.

Along these lines, also see:

Pika; via Superhuman AI



An Ivy League school just announced its first AI degree — from qz.com by Michelle Cheng; via Barbara Anna Zielonka on LinkedIn
It’s a sign of the times. At the same time, AI talent is scarce.

At the University of Pennsylvania, undergraduate students in its school of engineering will soon be able to study for a bachelor of science degree in artificial intelligence.

What can one do with an AI degree? The University of Pennsylvania says students will be able to apply the skills they learn in school to build responsible AI tools, develop materials for emerging chips and hardware, and create AI-driven breakthroughs in healthcare through new antibiotics, among other things.



Google Pumps $27 Million Into AI Training After Microsoft Pledge—Here’s What To Know — from forbes.com by Robert Hart

Google on Monday announced plans to help train people in Europe with skills in artificial intelligence, the latest tech giant to invest in preparing workers and economies amid the disruption brought on by technologies they are racing to develop.


The Exhausting Pace of AI: Google’s Ultra Leap — from marcwatkins.substack.com by Marc Watkins

The acceleration of AI deployments has gotten so absurdly out of hand that a draft post I started a week ago about a new development is now out of date.

The Pace is Out of Control
A mere week since Ultra 1.0’s announcement, Google has now introduced us to Ultra 1.5, a model they are clearly positioning to be the leader in the field. Here is the full technical report for Gemini Ultra 1.5, and what it can do is stunning.



Maryville Announces $21 Million Investment in AI and New Technologies Amidst Record Growth — from maryville.edu; via Arthur “Art” Fredrich on LinkedIn

[St. Louis, MO, February 14, 2024] – In a bold move that counters the conventions of more traditional schools, Maryville University has unveiled a substantial $21 million multi-year investment in artificial intelligence (AI) and cutting-edge technologies. This groundbreaking initiative is set to transform the higher education experience, using the latest technology to support student success and a five-star experience for thousands of students both on campus and online.


Announcing the 2024 GSV 150: The Top Growth Companies in Digital Learning & Workforce Skills — from prnewswire.com with information provided by ASU+GSV Summit

“The world is adapting to seismic shifts from generative AI,” says Luben Pampoulov, Partner at GSV Ventures. “AI co-pilots, AI tutors, AI content generators—AI is ubiquitous, and differentiation is increasingly critical. This is an impressive group of EdTech companies that are leveraging AI and driving positive outcomes for learners and society.”

Workforce Learning comprises 34% of the list, K-12 29%, Higher Education 24%, Adult Consumer Learning 10%, and Early Childhood 3%. Additionally, 21% of the companies stretch across two or more “Pre-K to Gray” categories. A broader move towards profitability is also evident: the collective gross and EBITDA margin score of the 2024 cohort increased 5% compared to 2023.

See the list at https://www.asugsvsummit.com/gsv-150

Selected from 2,000+ companies around the world based on revenue scale, revenue growth, user reach, geographic diversification, and margin profile, this impressive group is reaching an estimated 3 billion people and generating an estimated $23 billion in revenue.


From DSC:
This would be huge for all of our learning ecosystems, as the learning agents could remember where a particular student or employee is on the learning curve for a particular topic.


Say What? Chat With RTX Brings Custom Chatbot to NVIDIA RTX AI PCs — from blogs.nvidia.com
Tech demo gives anyone with an RTX GPU the power of a personalized GPT chatbot.




Generative AI in a Nutshell – how to survive and thrive in the age of AI — from youtube.com by Henrik Kniberg; via Robert Gibson and Adam Garry on LinkedIn


Lawless superintelligence: Zero evidence that AI can be controlled — from earth.com by Eric Ralls

In the realm of technological advancements, artificial intelligence (AI) stands out as a beacon of immeasurable potential, yet also as a source of existential angst when considering that AI might already be beyond our ability to control.

Dr. Roman V. Yampolskiy, a leading figure in AI safety, shares his insights into this dual-natured beast in his thought-provoking work, “AI: Unexplainable, Unpredictable, Uncontrollable.”

His research underscores a chilling truth: our current understanding and control of AI are woefully inadequate, posing a threat that could either lead to unprecedented prosperity or catastrophic extinction.


From DSC:
This next item is for actors, actresses, and voiceover specialists:

Turn your voice into passive income. — from elevenlabs.io; via Ben’s Bites
Are you a professional voice actor? Sign up and share your voice today to start earning rewards every time it’s used.



Better Call GPT, Comparing Large Language Models Against Lawyers — from arxiv.org by Lauren Martin, Nick Whitehouse, Stephanie Yiu, Lizzie Catterson, & Rivindu Perera; via Azeem Azhar and Chantal Smith

This paper presents a groundbreaking comparison between Large Language Models (LLMs) and traditional legal contract reviewers—Junior Lawyers and Legal Process Outsourcers (LPOs). We dissect whether LLMs can outperform humans in accuracy, speed, and cost-efficiency during contract review. Our empirical analysis benchmarks LLMs against a ground truth set by Senior Lawyers, uncovering that advanced models match or exceed human accuracy in determining legal issues. In speed, LLMs complete reviews in mere seconds, eclipsing the hours required by their human counterparts. Cost-wise, LLMs operate at a fraction of the price, offering a staggering 99.97 percent reduction in cost over traditional methods. These results are not just statistics—they signal a seismic shift in legal practice. LLMs stand poised to disrupt the legal industry, enhancing accessibility and efficiency of legal services. Our research asserts that the era of LLM dominance in legal contract review is upon us, challenging the status quo and calling for a reimagined future of legal workflows.


Unraveling the Legal Tech Ecosystem: How People, Processes, and Tech Work Together — from legaltalknetwork.com by Colin Levy, JoAnn Hathaway, and Molly Ranns

Technology can help you solve problems in your law firm, connect with clients, save time and money, and so much more. But, how do you know what tech will work best for your practice? Molly Ranns and JoAnn Hathaway talk with Colin Levy about understanding and utilizing technology, common mistakes in choosing new tools, and ways to overcome tech-related fear and anxiety.


Legalweek Roundup: Top Takeaways for Litigation Teams — from jdsupra.com

Every time we attend Legalweek, we have a unique opportunity to tap into the collective knowledge of hundreds of legal professionals. This year at Legalweek 2024 we talked with peers in a wide variety of roles, from litigation support professionals and lawyers to partners and heads of innovation. Throughout the sessions and discussions,  we started to notice a few common themes, interesting trends, and helpful insights.

Here’s the best of what we learned.



“AI native” Gen Zers are comfortable on the cutting edge — from axios.com by Jennifer A. Kingson; via The Rundown AI

When it comes to generative AI at school and work, Gen Z says: Bring it on.

Why it matters: While some workers are fearful or ambivalent about how ChatGPT, DALL-E and their ilk will affect their jobs, many college students and newly minted grads think it can give them a career edge.

  • So-called “AI natives” who are studying the technology in school may have a leg up on older “digital natives” — the same way that “digital native” millennials smugly bested their “digital immigrant” elders.


Top 6 Use Cases of Generative AI in Education in 2024 — from research.aimultiple.com by Cem Dilmegani

Use cases included:

  1. Personalized Lessons
  2. Course Design
  3. Content Creation for Courses
  4. Data Privacy Protection for Analytical Models
  5. Restoring Old Learning Materials
  6. Tutoring

The Next Phase of AI in Education at the U.S. Department of Education — from medium.com by Office of Ed Tech

Why are we doing this work?
Over the past two years, the U.S. Department of Education has been committed to maintaining an ongoing conversation with educators, students, researchers, developers — and the educational community at large — related to the continuous progress of Artificial Intelligence (AI) development and its implications for teaching and learning.

Many educators are seeking resources clarifying what AI is and how it will impact their work and their students. Similarly, developers of educational technology (“edtech”) products seek guidance on what guardrails exist that can support their efforts. After the release of our May 2023 report, Artificial Intelligence and the Future of Teaching and Learning, we heard the desire for more.


2024 EDUCAUSE AI Landscape Study — from library.educause.edu by Jenay Robert

Moving from reaction to action, higher education stakeholders are currently exploring the opportunities afforded by AI for teaching, learning, and work while maintaining a sense of caution for the vast array of risks AI-powered technologies pose. To aid in these efforts, we present this inaugural EDUCAUSE AI Landscape Study, in which we summarize the higher education community’s current sentiments and experiences related to strategic planning and readiness, policies and procedures, workforce, and the future of AI in higher education.


AI Update for K-16 Administrators: More People Need to Step-Up and Take the AI Bull By the Horns — from stefanbauschard.substack.com by Stefan Bauschard
AI capabilities are way beyond what most schools are aware of, and they will transform education and society over the next few years.

Educational administrators should not worry about every AI development, but should, instead focus on the big picture, as those big picture changes will change the entire world and the educational system.

AI and related technologies (robotics, synthetic biology, and brain-computer interfaces) will continue to impact society and the entire educational system over the next 10 years. This impact on the system will be greater than anything that has happened over the last 100 years, including COVID-19, as COVID-19 eventually ended and the disruptive force of these technologies will only continue to develop.

AI is the bull in the china shop, redefining the world and the educational system. Students writing a paper with AI is barely a poke in the educational world relative to what is starting to happen (active AI teachers and tutors; AI assessment; AI glasses; immersive learning environments; young students able to start their own businesses with AI tools; AIs replacing and changing jobs; deep voice and video fakes; intelligence leveling; individualized instruction; interactive and highly intelligent computers; computers that can act autonomously; and more).



Digital Learning Pulse Survey Reveals Higher-Ed Unprepared for Expected Impact of AI — from prnewswire.com by Cengage
Research illustrates that while GenAI could ease ongoing challenges in education, just 1 in 5 say their school is ready

WASHINGTON, Feb. 6, 2024 /PRNewswire/ — While roughly three-quarters or more of higher-education trustees (83%), faculty (81%) and administrators (76%) agree that generative artificial intelligence (GenAI) will noticeably change their institutions in the next five years, community college trustees are more optimistic than their community college counterparts, with 37% saying their organization is prepared for the coming change, compared to just 16% of faculty and 11% of administrator respondents.

Those findings are from the 2023-2024 Digital Learning Pulse Survey conducted by Cengage and Bay View Analytics with support from the Association of Community College Trustees (ACCT), the Association of College and University Educators (ACUE), College Pulse and the United States Distance Learning Association (USDLA) to understand the attitudes and concerns of higher education instructors and leadership.

From DSC:
It takes time to understand what a given technology brings to the table…let alone a slew of emerging technologies under the artificial intelligence (AI) umbrella. It’s hard enough when the technology is fairly well established and not changing all the time. But its extremely difficult when significant change occurs almost daily. 

The limited staff within the teaching & learning centers out there need time to research the relevant technologies and learn how to apply them to instructional design. The already stretched-thin faculty members need time to learn about those technologies as well — and whether and how they want to apply them. It takes time and research and effort.

Provosts, deans, presidents, and such need time to learn things as well.

Bottom line: We need to have realistic expectations here.


AI Adoption in Corporate L&D — from drphilippahardman.substack.com by Dr. Philippa Hardman
Where we are, and the importance of use cases in enabling change

At the end of last year, O’Reilly Media published a comprehensive report on the adoption and impact of generative AI within enterprises.

The headline of the report is that we’ve never seen a technology adopted in enterprise as fast as generative AI. As of November 2023, two-thirds (67%) of survey respondents reported that their companies are using generative AI.

However, the vast majority of AI adopters in enterprise are still in the early stages; they’re experimenting at the edges rather than making larger-scale, strategic decisions on how to leverage AI to accelerate progress toward organizational goals and visions.

The single biggest hurdle to AI adoption in large corporates is a lack of appropriate use cases.

 

15 legal Substacks worth your time — from jordanfurlong.substack.com by Jordan Furlong
It’s my great pleasure to return the compliment of a Substack Recommendation and direct your attention to these terrific legal newsletters.

In alphabetical order, they are:

  1. Daniel’s in-house legal newsletter, by Daniel Van Binsbergen, CEO at Lexoo, London. “One useful insight, idea or framework for in-house lawyers, every week.”
    —> Recommended post: Receiving feedback hurts
  2. Durant’s Rants, by Erin Durant, litigator, founder of Durant Barristers, Russell, ON. “Hot takes from an Ontario law firm. Home of the tea for subscribers.”
    —> Recommended post: Where have all the mid-career lawyers gone?
  3. GeoLegal Notes, by Sean West, Co-Founder Hence Technologies, Santa Monica, CA. “Bridging the gap between global affairs and legal practice; helping equip legal leaders to thrive against a backdrop of increasing global complexity.”
    —> Recommended post: Law and Politics of Supply Chains for Goods, AI and Knowledge

In addition…please also take a moment to click through and check out these ten other excellent newsletters:


Lawyering in the Age of Artificial Intelligence — from papers.ssrn.com and the University of Minnesota Law School by Jonathan H. Choi, Amy Monahan, & Daniel Schwarcz; via Tom Barrett

Abstract
We conducted the first randomized controlled trial to study the effect of AI assistance on human legal analysis. We randomly assigned law school students to complete realistic legal tasks either with or without the assistance of GPT-4. We tracked how long the students took on each task and blind-graded the results. We found that access to GPT-4 only slightly and inconsistently improved the quality of participants’ legal analysis but induced large and consistent increases in speed. AI assistance improved the quality of output unevenly: where it was useful at all, the lowest-skilled participants saw the largest improvements. On the other hand, AI assistance saved participants roughly the same amount of time regardless of their baseline speed. In follow-up surveys, participants reported increased satisfaction from using AI to complete legal tasks and correctly predicted the tasks for which GPT-4 was most helpful. These results have important descriptive and normative implications for the future of lawyering. Descriptively, they suggest that AI assistance can significantly improve productivity and satisfaction, and that it can be selectively employed by lawyers in areas where it is most useful. Because these tools have an equalizing effect on performance, they may also promote equality in a famously unequal profession. Normatively, our findings suggest that law schools, lawyers, judges, and clients should affirmatively embrace AI tools and plan for a future in which they will become widespread.


Legal Week 2024 Special Part One: Joey Seeber of Level Legal — from geeklawblog.com by Greg Lambert & Marlene Gebauer

Welcome to the first of a few special Legal Week 2024 edition episodes of “The Geek in Review,” where we looked for innovative and creative ideas on the road and recorded live from the bustling environment of the 2024 Legal Week conference in New York.

Marlene Gebauer notes the transformation of Legal Week into a thought leadership conference, with a special mention of keynote speaker Bryan Cranston’s impactful talk on storytelling, branding, and the thoughtful application of AI in both the acting world and the legal tech space.


LegalWeek 2024 Special Part Two: Mollie Nichols and Mark Noel from Redgrave Data — from geeklawblog.com by Greg Lambert & Marlene Gebauer

In the second of a special series of interviews from Legal Week 2024, co-hosts Greg Lambert and Marlene Gebauer welcomed Mollie Nichols, CEO, and Mark Noel, Chief Information and Technology Officer, of Redgrave Data. Nichols and Noel discuss Redgrave Data’s mission to cut through the hype of legal tech innovations, particularly generative AI. Nichols emphasized the company’s focus on delivering custom solutions that meet clients’ actual needs and highlighted the importance of educating the legal community on effectively integrating new technologies into their practices.

Mark Noel emphasized the strategic addition of data scientists to their team, enabling Redgrave Data to develop and advise on cutting-edge technologies. He stressed the importance of applying generative AI judiciously, pointing out its limitations and the potential for misuse if not properly vetted. Noel and Nichols shared insights on navigating the legal tech landscape, emphasizing efficiency, data management, and the careful evaluation of tech solutions.


LexisNexis Report: What Every C-Suite Leader Needs to Know about Legal AI — from deweybstrategic.com by Jean O’Grady

Today LexisNexis Legal & Professional released results from a survey of senior leadership at top U.S. law firms and legal professionals at Fortune 1000 companies. The survey, 2024 Investing in Legal Innovation Survey: The Rise of GenAI at Top Firms & Corporations, explores the business impact of generative AI technology on the legal industry.

I can’t recall any prior technology that has simultaneously triggered both breathless enthusiasm and panicked resistance. While the technology shows game-changing promise, there are significant ethical, client-relations, security and intellectual property concerns that still need to be addressed. The C-Suite survey charts the issues of concern that are impeding adoption of GenAI.


GenAI for Law Is Like Google Maps (for Lawyers) — from elevate.law by Pratik Patel

These days, whenever a customer asks me for my view of generative AI for law and where and how they should be using it, I reply with a simple analogy: Think ‘Google Maps for lawyers’.

Analogising generative AI for law to mapping technology begins with thinking about legal matters as journeys. A similar conceptualisation appears in a recent article by Justin Turman, legal systems and processes manager at Stryker, with legal work as flight paths. Either way, you have a desired destination (winning a lawsuit, closing a deal, finalising a contract, etc.) and many possible ‘routes’ reflecting various strategic and tactical decisions along the way. At various junctures, it is helpful to get intelligence and directions on how to proceed. And, as with driving (or flying), circumstances are subject to change along the way – newly discovered facts, shifts in goals, etc.


Law Student’s Gen AI Product, Lexplug, Makes Briefing Cases A Breeze — from lawnext.com by Bob Ambrogi

Based on that bathroom-break-induced inspiration, Neal went on to develop — and this week launch — Lexplug, a site developed for law students “to make interacting with cases more accessible, efficient, and engaging.”

Search, Query and Simplify Case Briefs

At its core, Lexplug is a library of case briefs, all created by Neal using GPT-4. So far, he has created 7,000 briefs, and hopes to have 50,000 by the end of the year. To decide which cases to prioritize, he collected a variety of syllabi for basic law school courses such as constitutional law and torts and extracted the key cases. He also has the full text of every briefed case.


The Justice Gap in Legal Tech: A Tale of Two Conferences and the Implications for A2J — from lawnext.com by Bob Ambrogi

But there is another, related, kind of justice gap in this country. It is the funding gap between those who are developing legal technology to better meet the legal needs of low-income Americans and those who are developing legal tech to serve large law firms and corporate legal departments.

At Legalweek, the focus of the conference is almost exclusively on tech for large law firms and corporate legal departments. The sponsors and exhibitors are focused on products for e-discovery, contract lifecycle management, large firm financial and business management, and the like. The programs, similarly, focus on data privacy, e-discovery, information governance, contract technology, and large-scale litigation.

The exhibit hall spans three floors, the booths are big and bright, and the vendors seemingly all throw parties that are over the top, or quite literally near the top, at venues such as the Rainbow Room at the top of Rockefeller Center, with freely flowing alcohol and plenty of food.

By contrast, at the ITC conference, the attendees come mostly from the ranks of legal aid offices, pro-bono programs, court self-help staff, and the like. The programs focus on how understaffed legal aid offices and understaffed courts and understaffed community programs can use technology to help meet the influx of low-income people seeking legal help.

The juxtaposition of the glitziness of one conference and the modesty of the other speaks to the larger issue of inequity in legal tech – and specifically financial inequity.

 
© 2024 | Daniel Christian