ElevenLabs just launched a voice marketplace — from elevenlabs.io; via theaivalley.com

Via the AI Valley:

Why does it matter?
AI voice cloning has already flooded the internet with unauthorized imitations, blurring legal and ethical lines. By offering a dynamic, rights-secured platform, ElevenLabs aims to legitimize the booming AI voice industry and enable transparent, collaborative commercialization of iconic IP.
.

ElevenLabs just launched a voice marketplace

ElevenLabs just launched a voice marketplace


[GIFTED ARTICLE] How people really use ChatGPT, according to 47,000 conversations shared online — from by Gerrit De Vynck and Jeremy B. Merrill
What do people ask the popular chatbot? We analyzed thousands of chats to identify common topics discussed by users and patterns in ChatGPT’s responses.

.
Data released by OpenAI in September from an internal study of queries sent to ChatGPT showed that most are for personal use, not work.

Emotional conversations were also common in the conversations analyzed by The Post, and users often shared highly personal details about their lives. In some chats, the AI tool could be seen adapting to match a user’s viewpoint, creating a kind of personalized echo chamber in which ChatGPT endorsed falsehoods and conspiracy theories.

Lee Rainie, director of the Imagining the Digital Future Center at Elon University, said his own research has suggested ChatGPT’s design encourages people to form emotional attachments with the chatbot. “The optimization and incentives towards intimacy are very clear,” he said. “ChatGPT is trained to further or deepen the relationship.”


Per The Rundown: OpenAI just shared its view on AI progress, predicting systems will soon become smart enough to make discoveries and calling for global coordination on safety, oversight, and resilience as the technology nears superintelligent territory.

The details:

  • OpenAI said current AI systems already outperform top humans in complex intellectual tasks and are “80% of the way to an AI researcher.”
  • The company expects AI will make small scientific discoveries by 2026 and more significant breakthroughs by 2028, as intelligence costs fall 40x per year.
  • For superintelligent AI, OAI said work with governments and safety agencies will be essential to mitigate risks like bioterrorism or runaway self-improvement.
  • It also called for safety standards among top labs, a resilience ecosystem like cybersecurity, and ongoing tracking of AI’s real impact to inform public policy.

Why it matters: While the timeline remains unclear, OAI’s message shows that the world should start bracing for superintelligent AI with coordinated safety. The company is betting that collective safeguards will be the only way to manage risk from the next era of intelligence, which may diffuse in ways humanity has never seen before.

Which linked to:

  • AI progress and recommendations — from openai.com
    AI is unlocking new knowledge and capabilities. Our responsibility is to guide that power toward broad, lasting benefit.

From DSC:
I hate to say this, but it seems like there is growing concern amongst those who have pushed very hard to release as much AI as possible — they are NOW worried. They NOW step back and see that there are many reasons to worry about how these technologies can be negatively used.

Where was this level of concern before (while they were racing ahead at 180 mph)? Surely, numerous and knowledgeable people inside those organizations warned them about the destructive/downside of these technologies. But their warnings were pretty much blown off (at least from my limited perspective). 


The state of AI in 2025: Agents, innovation, and transformation — from mckinsey.com

Key findings

  1. Most organizations are still in the experimentation or piloting phase: Nearly two-thirds of respondents say their organizations have not yet begun scaling AI across the enterprise.
  2. High curiosity in AI agents: Sixty-two percent of survey respondents say their organizations are at least experimenting with AI agents.
  3. Positive leading indicators on impact of AI: Respondents report use-case-level cost and revenue benefits, and 64 percent say that AI is enabling their innovation. However, just 39 percent report EBIT impact at the enterprise level.
  4. High performers use AI to drive growth, innovation, and cost: Eighty percent of respondents say their companies set efficiency as an objective of their AI initiatives, but the companies seeing the most value from AI often set growth or innovation as additional objectives.
  5. Redesigning workflows is a key success factor: Half of those AI high performers intend to use AI to transform their businesses, and most are redesigning workflows.
  6. Differing perspectives on employment impact: Respondents vary in their expectations of AI’s impact on the overall workforce size of their organizations in the coming year: 32 percent expect decreases, 43 percent no change, and 13 percent increases.

Marble: A Multimodal World Model — from worldlabs.ai

Spatial intelligence is the next frontier in AI, demanding powerful world models to realize its full potential. World models should reconstruct, generate, and simulate 3D worlds; and allow both humans and agents to interact with them. Spatially intelligent world models will transform a wide variety of industries over the coming years.

Two months ago we shared a preview of Marble, our World Model that creates 3D worlds from image or text prompts. Since then, Marble has been available to an early set of beta users to create 3D worlds for themselves.

Today we are making Marble, a first-in-class generative multimodal world model, generally available for anyone to use. We have also drastically expanded Marble’s capabilities, and are excited to highlight them here:

 


Gen AI Is Going Mainstream: Here’s What’s Coming Next — from joshbersin.com by Josh Bersin

I just completed nearly 60,000 miles of travel across Europe, Asia, and the Middle East meeting with hundred of companies to discuss their AI strategies. While every company’s maturity is different, one thing is clear: AI as a business tool has arrived: it’s real and the use-cases are growing.

A new survey by Wharton shows that 46% of business leaders use Gen AI daily and 80% use it weekly. And among these users, 72% are measuring ROI and 74% report a positive return. HR, by the way, is the #3 department in use cases, only slightly behind IT and Finance.

What are companies getting out of all this? Productivity. The #1 use case, by far, is what we call “stage 1” usage – individual productivity. 

.


From DSC:
Josh writes: “Many of our large clients are now implementing AI-native learning systems and seeing 30-40% reduction in staff with vast improvements in workforce enablement.

While I get the appeal (and ROI) from management’s and shareholders’ perspective, this represents a growing concern for employment and people’s ability to earn a living. 

And while I highly respect Josh and his work through the years, I disagree that we’re over the problems with AI and how people are using it: 

Two years ago the NYT was trying to frighten us with stories of AI acting as a romance partner. Well those stories are over, and thanks to a $Trillion (literally) of capital investment in infrastructure, engineering, and power plants, this stuff is reasonably safe.

Those stories are just beginning…they’re not close to being over. 


“… imagine a world where there’s no separation between learning and assessment…” — from aiedusimplified.substack.com by Lance Eaton, Ph.D. and Tawnya Means
An interview with Tawnya Means

So let’s imagine a world where there’s no separation between learning and assessment: it’s ongoing. There’s always assessment, always learning, and they’re tied together. Then we can ask: what is the role of the human in that world? What is it that AI can’t do?

Imagine something like that in higher ed. There could be tutoring or skill-based work happening outside of class, and then relationship-based work happening inside of class, whether online, in person, or some hybrid mix.

The aspects of learning that don’t require relational context could be handled by AI, while the human parts remain intact. For example, I teach strategy and strategic management. I teach people how to talk with one another about the operation and function of a business. I can help students learn to be open to new ideas, recognize when someone pushes back out of fear of losing power, or draw from my own experience in leading a business and making future-oriented decisions.

But the technical parts such as the frameworks like SWOT analysis, the mechanics of comparing alternative viewpoints in a boardroom—those could be managed through simulations or reports that receive immediate feedback from AI. The relational aspects, the human mentoring, would still happen with me as their instructor.

Part 2 of their interview is here:


 




BIG unveils Suzhou Museum of Contemporary Art topped with ribbon-like roof — from dezeen.com by Christina Yao
.

Also from Dezeen:

MVRDV designs giant sphere for sports arena in Tirana — from dezeen.com by Starr Charles
.



 

The Other Regulatory Time Bomb — from onedtech.philhillaa.com by Phil Hill
Higher ed in the US is not prepared for what’s about to hit in April for new accessibility rules

Most higher-ed leaders have at least heard that new federal accessibility rules are coming in 2026 under Title II of the ADA, but it is apparent from conversations at the WCET and Educause annual conferences that very few understand what that actually means for digital learning and broad institutional risk. The rule isn’t some abstract compliance update: it requires every public institution to ensure that all web and media content meets WCAG 2.1 AA, including the use of audio descriptions for prerecorded video. Accessible PDF documents and video captions alone will no longer be enough. Yet on most campuses, the conversation has been understood only as a buzzword, delegated to accessibility coordinators and media specialists who lack the budget or authority to make systemic changes.

And no, relying on faculty to add audio descriptions en masse is not going to happen.

The result is a looming institutional risk that few presidents, CFOs, or CIOs have even quantified.

 

Six Transformative Technology Trends Impacting the Legal Profession — from americanbar.org

Summary

  • Law firm leaders should evaluate their legal technology and decide if they are truly helping legal work or causing a disconnect between human and AI contributions.
  • 75% of firms now rely on cloud platforms for everything from document storage to client collaboration.
  • The rise of virtual law firms and remote work is reshaping the profession’s culture. Hybrid and remote-first models, supported by cloud and collaboration tools, are growing.

Are we truly innovating, or just rearranging the furniture? That’s the question every law firm leader should be asking as the legal technology landscape shifts beneath our feet. There are many different thoughts and opinions on how the legal technology landscape will evolve in the coming years, particularly regarding the pace of generative AI-driven changes and the magnitude of these changes.

To try to answer the question posed above, we looked at six recently published technology trends reports from influential entities in the legal technology arena: the American Bar Association, Clio, Wolters Kluwer, Lexis Nexis, Thomson Reuters, and NetDocuments.

When we compared these reports, we found them to be remarkably consistent. While the level of detail on some topics varied across the reports, they identified six trends that are reshaping the very core of legal practice. These trends are summarized in the following paragraphs.

  1. Generative AI and AI-Assisted Drafting …
  2. Cloud-Based Practice Management…
  3. Cybersecurity and Data Privacy…
  4. Flat Fee and Alternative Billing Models…
  5. Legal Analytics and Data-Driven Decision Making…
  6. Virtual Law Firms and Remote Work…
 

KPMG wants junior consultants to ditch the grunt work and hand it over to teams of AI agents — from businessinsider.com by Polly Thompson

The Big Four consulting and accounting firm is training its junior consultants to manage teams of AI agents — digital assistants capable of completing tasks without human input.

“We want juniors to become managers of agents,” Niale Cleobury, KPMG’s global AI workforce lead, told Business Insider in an interview.

KPMG plans to give new consulting recruits access to a catalog of AI agents capable of creating presentation slides, analyzing data, and conducting in-depth research, Cleobury said.

The goal is for these agents to perform much of the analytical and administrative work once assigned to junior consultants, allowing them to become more involved in strategic decisions.


From DSC:
For a junior staff member to provide quality assurance in working with agents, an employee must know what they’re talking about in the first place. They must have expertise and relevant knowledge. Otherwise, how will they spot the hallucinations?

So the question is, how can businesses build such expertise in junior staff members while they are delegating things to an army of agents? This question applies to the next posting below as well. Having agents report to you is all well and good — IF you know when the agents are producing helpful/accurate information and when they got things all wrong.


This Is the Next Vital Job Skill in the AI Economy — from builtin.com by Saurabh Sharma
The future of tech work belongs to AI managers.

Summary: A fundamental shift is making knowledge workers “AI managers.” The most valuable employees will direct intelligent AI agents, which requires new competencies: delegation, quality assurance and workflow orchestration across multiple agents. Companies must bridge the training gap to enable this move from simple software use to strategic collaboration with intelligent, yet imperfect, systems.

The shift is happening subtly, but it’s happening. Workers are learning to prompt agents, navigate AI capabilities, understand failure modes and hand off complex tasks to AI. And if they haven’t started yet, they probably will: A new study from IDC and Salesforce found that 72 percent of CEOs think most employees will have an AI agent reporting to them within five years. This isn’t about using a new kind of software tool — it’s about directing intelligent systems that can reason, search, analyze and create.

Soon, the most valuable employees won’t just know how to use AI; they’ll know how to manage it. And that requires a fundamentally different skill set than anything we’ve taught in the workplace before.


AI agents failed 97% of freelance tasks; here’s why… — from theneurondaily.com by Grant Harvey

AI Agents Can’t Actually Do Your Job (Yet)—New Benchmark Reveals The Gap

DEEP DIVE: AI can make you faster at your job, but can only do 2-3% of jobs by itself.

The hype: AI agents will automate entire workflows! Replace freelancers! Handle complex tasks end-to-end!

The reality: a measly 2-3% completion rate.

See, Scale AI and CAIS just released the Remote Labor Index (paper), a benchmark where AI agents attempted real freelance tasks. The best-performing model earned just $1,810 out of $143,991 in available work, and yes, finishing only 2-3% of jobs.



 


From DSC:
One of my sisters shared this piece with me. She is very concerned about our society’s use of technology — whether it relates to our youth’s use of social media or the relentless pressure to be first in all things AI. As she was a teacher (at the middle school level) for 37 years, I greatly appreciate her viewpoints. She keeps me grounded in some of the negatives of technology. It’s important for us to listen to each other.


 

The new legal intelligence — from jordanfurlong.substack.com by Jordan Furlong
We’ve built machines that can reason like lawyers. Artificial legal intelligence is becoming scalable, portable and accessible in ways lawyers are not. We need to think hard about the implications.

Much of the legal tech world is still talking about Clio CEO Jack Newton’s keynote at last week’s ClioCon, where he announced two major new features: the “Intelligent Legal Work Platform,” which combines legal research, drafting and workflow into a single legal workspace; and “Clio for Enterprise,” a suite of legal work offerings aimed at BigLaw.

Both these features build on Clio’s out-of-nowhere $1B acquisition of vLex (and its legally grounded LLM Vincent) back in June.

A new source of legal intelligence has entered the legal sector.

Legal intelligence, once confined uniquely to lawyers, is now available from machines. That’s going to transform the legal sector.


Where the real action is: enterprise AI’s quiet revolution in legal tech and beyond — from canadianlawyermag.com by Tim Wilbur
Harvey, Clio, and Cohere signal that organizational solutions will lead the next wave of change

The public conversation about artificial intelligence is dominated by the spectacular and the controversial: deepfake videos, AI-induced psychosis, and the privacy risks posed by consumer-facing chatbots like ChatGPT. But while these stories grab headlines, a quieter – and arguably more transformative – revolution is underway in enterprise software. In legal technology, in particular, AI is rapidly reshaping how law firms and legal departments operate and compete. This shift is just one example of how enterprise AI, not just consumer AI, is where real action is happening.

Both Harvey and Clio illustrate a crucial point: the future of legal tech is not about disruption for its own sake, but partnership and integration. Harvey’s collaborations with LexisNexis and others are about creating a cohesive experience for law firms, not rendering them obsolete. As Pereira put it, “We don’t see it so much as disruption. Law firms actually already do this… We see it as ‘how do we help you build infrastructure that supercharges this?’”

The rapid evolution in legal tech is just one example of a broader trend: the real action in AI is happening in enterprise software, not just in consumer-facing products. While ChatGPT and Google’s Gemini dominate the headlines, companies like Cohere are quietly transforming how organizations across industries leverage AI.

Also from canadianlawyermag.com, see:

The AI company’s plan to open an office in Toronto isn’t just about expanding territory – it’s a strategic push to tap into top technical talent and capture a market known for legal innovation.


Unseeable prompt injections in screenshots: more vulnerabilities in Comet and other AI browsers — from brave.com by Artem Chaikin and Shivan Kaul Sahib

Building on our previous disclosure of the Perplexity Comet vulnerability, we’ve continued our security research across the agentic browser landscape. What we’ve found confirms our initial concerns: indirect prompt injection is not an isolated issue, but a systemic challenge facing the entire category of AI-powered browsers. This post examines additional attack vectors we’ve identified and tested across different implementations.

As we’ve written before, AI-powered browsers that can take actions on your behalf are powerful yet extremely risky. If you’re signed into sensitive accounts like your bank or your email provider in your browser, simplysummarizing a Reddit postcould result in an attacker being able to steal money or your private data.

The above item was mentioned by Grant Harvey out at The Neuron in the following posting:


Robin AI’s Big Bet on Legal Tech Meets Market Reality — from lawfuel.com

Robin’s Legal Tech Backfire
Robin AI, the poster child for the “AI meets law” revolution, is learning the hard way that venture capital fairy dust doesn’t guarantee happily-ever-after. The London-based legal tech firm, once proudly waving its genAI-plus-human-experts flag, is now cutting staff after growth dreams collided with the brick wall of economic reality.

The company confirmed that redundancies are under way following a failed major funding push. Earlier promises of explosive revenue have fizzled. Despite around $50 million in venture cash over the past two years, Robin’s 2025 numbers have fallen short of investor expectations. The team that once ballooned to 200 is now shrinking.

The field is now swarming with contenders: CLM platforms stuffing genAI into every feature, corporate legal teams bypassing vendors entirely by prodding ChatGPT directly, and new entrants like Harvey and Legora guzzling capital to bulldoze into the market. Even Workday is muscling in.

Meanwhile, ALSPs and AI-powered pseudo-law firms like Crosby and Eudia are eating market share like it’s free pizza. The number of inhouse teams actually buying these tools at scale is still frustratingly small. And investors don’t have much patience for slow burns anymore.


Why Being ‘Rude’ to AI Could Win Your Next Case or Deal — from thebrainyacts.beehiiv.com by Josh Kubicki

TL;DR: AI no longer rewards politeness—new research shows direct, assertive prompts yield better, more detailed responses. Learn why this shift matters for legal precision, test real-world examples (polite vs. blunt), and set up custom instructions in OpenAI (plus tips for other models) to make your AI a concise analytical tool, not a chatty one. Actionable steps inside to upgrade your workflow immediately.



 

Nvidia becomes first $5 trillion company — from theaivallye.com by Barsee
PLUS: OpenAI IPO at $1 trillion valuation by late 2026 / early 2027

Nvidia has officially become the first company in history to cross the $5 trillion market cap, cementing its position as the undisputed leader of the AI era. Just three months ago, the chipmaker hit $4 trillion; it’s already added another trillion since.

Nvidia market cap milestones:

  • Jan 2020: $144 billion
  • May 2023: $1 trillion
  • Feb 2024: $2 trillion
  • Jun 2024: $3 trillion
  • Jul 2025: $4 trillion
  • Oct 2025: $5 trillion

The above posting linked to:

 

 

Adobe Reinvents its Entire Creative Suite with AI Co-Pilots, Custom Models, and a New Open Platform — from theneuron.ai by Grant Harvey
Adobe just put an AI co-pilot in every one of its apps, letting you chat with Photoshop, train models on your own style, and generate entire videos with a single subscription that now includes top models from Google, Runway, and Pika.

Adobe came to play, y’all.

At Adobe MAX 2025 in Los Angeles, the company dropped an entire creative AI ecosystem that touches every single part of the creative workflow. In our opinion, all these new features aren’t about replacing creators; it’s about empowering them with superpowers they can actually control.

Adobe’s new plan is to put an AI co-pilot in every single app.

  • For professionals, the game-changer is Firefly Custom Models. Start training one now to create a consistent, on-brand look for all your assets.
  • For everyday creators, the AI Assistants in Photoshop and Express will drastically speed up your workflow.
  • The best place to start is the Photoshop AI Assistant (currently in private beta), which offers a powerful glimpse into the future of creative software—a future where you’re less of a button-pusher and more of a creative director.

Adobe MAX Day 2: The Storyteller Is Still King, But AI Is Their New Superpower — from theneuron.ai by Grant Harvey
Adobe’s Day 2 keynote showcased a suite of AI-powered creative tools designed to accelerate workflows, but the real message from creators like Mark Rober and James Gunn was clear: technology serves the story, not the other way around.

On the second day of its annual MAX conference, Adobe drove home a message that has been echoing through the creative industry for the past year: AI is not a replacement, but a partner. The keynote stage featured a powerful trio of modern storytellers—YouTube creator Brandon Baum, science educator and viral video wizard Mark Rober, and Hollywood director James Gunn—who each offered a unique perspective on a shared theme: technology is a powerful tool, but human instinct, hard work, and the timeless art of storytelling remain paramount.

From DSC:
As Grant mentioned, the demos dealt with ideation, image generation, video generation, audio generation, and editing.


Adobe Max 2025: all the latest creative tools and AI announcements — from theverge.com by Jess Weatherbed

The creative software giant is launching new generative AI tools that make digital voiceovers and custom soundtracks for videos, and adding AI assistants to Express and Photoshop for web that edit entire projects using descriptive prompts. And that’s just the start, because Adobe is planning to eventually bring AI assistants to all of its design apps.


Also see Adobe Delivers New AI Innovations, Assistants and Models Across Creative Cloud to Empower Creative Professionals plus other items from the News section from Adobe


 

 

“OpenAI’s Atlas: the End of Online Learning—or Just the Beginning?” [Hardman] + other items re: AI in our LE’s

OpenAI’s Atlas: the End of Online Learning—or Just the Beginning? — from drphilippahardman.substack.com by Dr. Philippa Hardman

My take is this: in all of the anxiety lies a crucial and long-overdue opportunity to deliver better learning experiences. Precisely because Atlas perceives the same context in the same moment as you, it can transform learning into a process aligned with core neuro-scientific principles—including active retrieval, guided attention, adaptive feedback and context-dependent memory formation.

Perhaps in Atlas we have a browser that for the first time isn’t just a portal to information, but one which can become a co-participant in active cognitive engagement—enabling iterative practice, reflective thinking, and real-time scaffolding as you move through challenges and ideas online.

With this in mind, I put together 10 use cases for Atlas for you to try for yourself.

6. Retrieval Practice
What:
Pulling information from memory drives retention better than re-reading.
Why: Practice testing delivers medium-to-large effects (Adesope et al., 2017).
Try: Open a document with your previous notes. Ask Atlas for a mixed activity set: “Quiz me on the Krebs cycle—give me a near-miss, high-stretch MCQ, then a fill-in-the-blank, then ask me to explain it to a teen.”
Atlas uses its browser memory to generate targeted questions from your actual study materials, supporting spaced, varied retrieval.




From DSC:
A quick comment. I appreciate these ideas and approaches from Katarzyna and Rita. I do think that someone is going to want to be sure that the AI models/platforms/tools are given up-to-date information and updated instructions — i.e., any new procedures, steps to take, etc. Perhaps I’m missing the boat here, but an internal AI platform is going to need to have access to up-to-date information and instructions.


 

Chegg CEO steps down amid major AI-driven restructure — from linkedin.com by Megan McDonough

Edtech firm Chegg confirmed Monday it is reducing its workforce by 45%, or 388 employees globally, and its chief executive officer is stepping down. Current CEO Nathan Schultz will be replaced effective immediately by executive chairman (and former CEO) Dan Rosensweig. The rise of AI-powered tools has dealt a massive blow to the online homework helper and led to “substantial” declines in revenue and traffic. Company shares have slipped over 10% this year. Chegg recently explored a possible sale, but ultimately decided to keep the company intact.

 

The Bull and Bear Case For the AI Bubble, Explained — from theneuron.ai by Grant Harvey
AI is both a genuine technological revolution and a massive financial bubble, and the defining question is whether miraculous progress can outrun the catastrophic, multi-trillion-dollar cost required to achieve it.

This sets the stage for the defining conflict of our technological era. The narrative has split into two irreconcilable realities. In one, championed by bulls like venture capitalist Marc Andreessen and NVIDIA CEO Jensen Huang, we are at the dawn of “computer industry V2”—a platform shift so profound it will unlock unprecedented productivity and reshape civilization.

In the other, detailed by macro investors like Julien Garran and forensic bears like writer Ed Zitron, AI is a historically massive, circular, debt-fueled mania built on hype, propped up by a handful of insiders, and destined for a collapse that will make past busts look quaint.

This is a multi-layered conflict playing out across public stock markets, the private venture ecosystem, and the fundamental unit economics of the technology itself. To understand the future, and whether it holds a revolution, a ruinous crash, or a complex mixture of both, we must dissect every layer of the argument, from the historical parallels to the hard financial data and the technological critiques that question the very foundation of the boom.


From DSC:
I second what Grant said at the beginning of his analysis:

**The following is shared for educational purposes and is not intended to be financial advice; do your own research! 

But I post this because Grant provides both sides of the argument very well.


 

 

70% of Americans say feds shouldn’t control admissions, curriculum — from highereddive.com by Natalie Schwartz
The Public Religion Research Institute poll comes as the Trump administration is pressuring colleges to change their policies.

Dive Brief: 

  • Most polled Americans, 70%, disagreed that the federal government should control “admissions, faculty hiring, and curriculum at U.S. colleges and universities to ensure they do not teach inappropriate material,” according to a survey released Wednesday by the Public Religion Research Institute.
  • The majority of Americans across political parties — 84% of Democrats, 75% of independents and 58% of Republicans — disagreed with federal control over these elements of college operations.
  • The poll’s results come as the Trump administration seeks to exert control over college workings, including in its recent offer of priority for federal research funding in exchange for making sweeping policy changes aligned with the government’s priorities.

Also see:

 

The Most Innovative Law Schools (2025) — from abovethelaw.com by Staci Zaretsky
Forget dusty casebooks — today’s leaders in legal education are using AI, design thinking, and real-world labs to reinvent how law is taught.

“[F]rom AI labs and interdisciplinary centers to data-driven reform and bold new approaches to design and client service,” according to National Jurist’s preLaw Magazine, these are the law schools that “exemplify innovation in action.”

  1. North Carolina Central University School of Law
  2. Suffolk University Law School
  3. UC Berkeley School of Law
  4. Nova Southeastern University Shepard Broad College of Law
  5. Northeastern University School of Law
  6. Maurice A. Deane School of Law at Hofstra University
  7. Seattle University School of Law
  8. Case Western Reserve University School of Law
  9. University of Miami School of Law
  10. Benjamin N. Cardozo School of Law at Yeshiva University
  11. Vanderbilt University Law School
  12. Southwestern Law School

Click here to read short summaries of why each school made this year’s list of top innovators.


Clio’s Metamorphosis: From Practice Management To A Comprehensive AI And Law Practice Provider — from abovethelaw.com by Stephen Embry
Clio is no longer a practice management company. It’s much more of a comprehensive provider of all needs of its customers big and small.

Newton delivered what may have been the most consequential keynote in the company’s history and one that signals a shift by Clio from a traditional practice management provider to a comprehensive platform that essentially does everything for the business and practice of law.

Clio also earlier this year acquired vLex, the heavy-duty AI legal research player. The acquisition is pending regulatory approval. It is the vLex acquisition that is powering the Clio transformation that Newton described in his keynote.

vLex has a huge amount of legal data in its wheelhouse to power sophisticated legal AI research. On top of this data, vLex developed Vincent, a powerful AI tool to work with this data and enable all sorts of actions and work.

This means a couple of things. First, by acquiring vLex, Clio can now offer its customers AI legal research tools. Clio customers will no longer have to go one place for its practice management needs and a second place for its substantive legal work, like research. It makes what Clio can provide much more comprehensive and all inclusive.


‘Adventures In Legal Tech’: How AI Is Changing Law Firms — from abovethelaw.com
Ernie the Attorney shares his legal tech takes.

Artificial intelligence will give solos and small firms “a huge advantage,” according to one legal tech consultant.

In this episode of “Adventures in Legal Tech,” host Jared Correia speaks with Ernie Svenson — aka “Ernie the Attorney” — about the psychology behind resistance to change, how law firms are positioning their AI use, the power of technology for business development, and more.


Legal software: how to look for and compare AI in legal technology — from legal.thomsonreuters.com by Chris O’Leary

Highlights

  • Legal ops experts can categorize legal AI platforms and software by the ability to streamline key tasks such as legal research, document processing or analysis, and drafting.
  • The trustworthiness and accuracy of AI hinge on the quality of its underlying data; solutions like CoCounsel Legal are grounded in authoritative, expert-verified content from Westlaw and Practical Law, unlike providers that may rely on siloed or less reliable databases.
  • When evaluating legal software, firms should use a framework that assesses critical factors such as integration with existing tech stacks, security, scalability, user adoption, and vendor reputation.

ASU Law appoints a director of AI and Legal Tech Studio, advancing its initiative to reimagine legal education — from law.asu.edu

The Sandra Day O’Connor College of Law at Arizona State University appointed Sean Harrington as director of the newly established AI and Legal Tech Studio, a key milestone in ASU Law’s bold initiative to reimagine legal education for the artificial intelligence era. ASU, ranked No. 1 in innovation for the 11th consecutive year, drives AI solutions that enhance teaching, enrich student training and facilitate digital transformation.


The American Legal Technology Awards Name 2025 Winners — from natlawreview.com by Tom Martin

The sixth annual American Legal Technology Awards were presented on Wednesday, October 15th, at Suffolk University Law School (Boston), recognizing winners across ten categories. There were 211 nominees who were evaluated by 27 judges.

The honorees on the night included:

 
© 2025 | Daniel Christian