Nvidia just invested in the AI legal startup that’s splashing Jude Law ads everywhere — from cnbc.com by Kai Nicol-Schwarz

Key Points

  • Nvidia has backed Swedish AI legal tech startup Legora in a $50 million Series D extension, CNBC can reveal.
  • The chip giant has been ramping up startup investments in recent years.
  • Investors have been piling into promising young AI companies as they bet big on the commercial potential of the tech to reshape entire industries and bring big efficiency gains.

Legora is its first bet in the legal tech sector, according to Dealroom data.

The startup is building AI agents and tools to help lawyers automate and streamline workflows.

 

FutureFit AI Announces Strategic Investment to Help Governments and Industries Navigate AI’s Impact on People & Jobs — from prnewswire.com; via Ryan Craig

NEW YORK, April 13, 2026 /PRNewswire/ — FutureFit AI, a global leader in AI-powered workforce development technology, today announced an investment from Achieve Partners, led by investor and author Ryan Craig, to accelerate its mission of helping more people navigate to better jobs faster and cheaper at scale.

“For too long, the U.S. workforce system has relied on disparate and disconnected systems to try to bridge the gap between the skills workers bring to the table and the jobs available in a fast-changing labor market. In the age of AI, the need for a better approach has only become more urgent,” said Ryan Craig, co-founder and managing director of Achieve and author of Apprentice Nation, A New U, and College Disrupted. “FutureFit AI is solving that problem by helping workforce organizations create clearer paths to career opportunity for workers and solve pressing talent gaps that hinder economic growth. Their work around the country has already demonstrated the ability to help more people get good jobs faster.”

“A mission that began with a simple question of ‘What if everyone had a GPS for their career’ has turned into years of working closely with government and industry leaders to respond to – and solve for – the impacts of digital transformation and AI on jobs and people,” added Ekhtiari. “Our partnership with Achieve will accelerate our work to build and scale the missing workforce transition infrastructure that our country and the world so badly need at this moment.”

 

Recording at Legalweek in New York, Zach sits down with Shlomo Klapper (founder of Learned Hand) and Bridget McCormack, former Chief Justice of the Michigan Supreme Court and now CEO of the American Arbitration Association, to challenge one of the biggest double standards in legal AI: “AI for me, but not for thee.” Lawyers are now widely using AI like #Harvey and #Legora — and now more than ever #claude — but the moment it touches judges or arbitrators, support drops off.

That hesitation comes as courts are under real strain, with judges handling thousands of cases a year and only minutes to decide each one, and no realistic way to keep up. Shlomo describes Learned Hand’s “AI law clerk,” built to support judicial research, analysis, and drafting, while Bridget brings the perspective of someone who has both made decisions on the bench and has pioneered the American Arbitration Association’s AI Arbitrator, a first of its kind. The conversation moves beyond AI as an assistant and into a harder shift: AI as part of decision-making itself, and whether the system can continue to function without it.


Also see:

Are Judges the Next To Adopt AI? Is That a Good Thing? — from legallydisrupted.com by Zach Abramowitz
Episode 46 of Legally Disrupted Has the Two Best Experts on the Topic

This brings us to an admitted, glaring double standard between lawyers and judges. Lawyers are totally fine with lawyers using AI, but those same lawyers become apoplectic at the thought of judges or arbitrators using AI. It is very much “AI for me, but not for thee.” A survey last year from White & Case and Queen Mary University of London School of Law showed that nearly 90% of lawyers were deeply supportive of AI for their own research and analytics, but that support drops to just 23% when it comes to a judge or arbitrator using it to make a decision.

Yet, despite that hullabaloo, there is a massive need for alternative forms of intelligence in our courts. Right now, the system is drowning. We have state court trial judges disposing of 2,500 cases a year, meaning they have barely half an hour to spend on a single case. We are simply not going to lawyer our way out of this 50-year backlog. If we just use humans, we have a massive demand for intelligence but a severely limited supply. AI could step in to give these judges the capacity they desperately need for the courts to actually function.

 

From DSC:
I wish I had learned about the important financial, legal, and medical things (that are covered in the gifted article below) in high school!


How to Help Your Aging Loved Ones Plan for the Future — a gifted article from nytimes.com by Elie Levine
Learn as much as you can about setting up the financial, legal and medical components of late-in-life care — and do it earlier than you might think.

Making end-of-life plans for your loved ones can feel like a burden. It is, almost by definition, complicated, and it might require having difficult conversations and sorting through a seemingly endless stream of forms and terminology. But it’s essential to your family’s well-being — and it’s worth doing earlier than you might think.

The first thing to know: There’s no one-size-fits-all approach to planning. But think of this as a starter kit that covers how to handle your parents’ current or future health challenges, and how they’ll pay for medical care. (Knowing about their medications, current finances and living situation can also help you prepare for an emergency medical situation.) Below are some of the questions to consider and discuss with your loved ones.

 

Which Jobs Are Most at Risk From AI? New Anthropic Data Offers Clues. — from builtin.com by Matthew Urwin
Anthropic set out in its latest study to predict how artificial intelligence could impact the labor market. Instead, its findings raise more questions than answers for tech workers as the U.S. government refuses to regulate the AI industry.

Summary:
In its latest labor market study, Anthropic found that artificial intelligence poses the greatest threat to software jobs, women and younger professionals. As the Trump administration takes a hands-off approach to AI, tech workers may be left to grapple with these findings on their own.


Matthew links to:

Labor market impacts of AI: A new measure and early evidence — from anthropic.com

Key findings

  • We introduce a new measure of AI displacement risk, observed exposure, that combines theoretical LLM capability and real-world usage data, weighting automated (rather than augmentative) and work-related uses more heavily
  • AI is far from reaching its theoretical capability: actual coverage remains a fraction of what’s feasible
  • Occupations with higher observed exposure are projected by the BLS to grow less through 2034
  • Workers in the most exposed professions are more likely to be older, female, more educated, and higher-paid
  • We find no systematic increase in unemployment for highly exposed workers since late 2022, though we find suggestive evidence that hiring of younger workers has slowed in exposed occupations

 

Summary: Accessible AI has killed traditional signals of legitimacy.

Experiments show $20 consumer tools can easily bypass verification. The solution is shifting toward contextual proof that verifies human uniqueness without exposing identity.


After Hours 1: The legal profession’s new value proposition — from jordanfurlong.substack.com by Jordan Furlong
The days of selling legal tasks by the hour are ending. Lawyers’ future value lies in safeguarding clients’ legal journeys by overcoming the most challenging obstacles on the way. Part 1 of 2.

As a result, legal work is dividing into two spheres, the first larger than the second: what Gen AI can satisfactorily address, and what it can’t.

  • Sphere 1: Legal Production. This is all the specialized intellectual work involved in generating legal solutions: researching, issue-spotting, summarizing, synthesizing, drafting, revising, reasoning, and analyzing. This is the bulk of lawyers’ traditional activity and billed hours. In future, it will be done faster, cheaper, and increasingly better with machines — either by clients themselves, or embedded in systems and platforms that reduce the need for lawyer involvement.
  • Sphere 2: Legal Judgment. This is higher-value work defined by the unpredictability, complexity, and impact of its challenges. In this sphere, you’ll find hard-decision advice, guidance under uncertainty, systematic dispute avoidance, strategic counsel, critical advocacy, risk prioritization, and high-stakes accountability. It’s likely (but far from certain) that this work will remain outside the reach of Gen AI. This is the sphere that holds the potential to support a future legal profession.

But not every legal journey is so simple or safe that the client can go it alone. Many times, Point B is more like Point F or Point R: a long and tortuous distance away. Many AI-generated maps will suggest a clear and direct route that bears little resemblance to the messy tangles of reality. On even moderately complex legal journeys, the unwelcome and the unexpected are always lurking. Something arises that was nowhere on the map, and until it gets resolved, the client can’t move any further towards their destination.


Below are some items from Jordan’s article — or by following a rabbit trail from his posting:


AI-Native Firms, Built by Private Equity, Will Strain Legacy Model — from news.bloomberglaw.com by Eric Dodson Greenberg

The emergence of AI-native law firms reveals the limits of a fixed binary that has characterized the legal market over the last year.

The straightest path to AI law firms isn’t innovation within the legacy model, or capital investing around it, but external capital being deployed to build competitors to legacy firms. These firms use AI and narrow regulatory openings to create tech-enabled law firms from scratch.

Not acquire them. Not invest around them.

Build them.

This third path is no longer theoretical.

The $3,500 Hour vs. The $500 Contract — from legaltechnologyhub.com by Brandi Pack

While rates at the top continue climbing, the operational foundation of legal work is being rebuilt.

Its pricing reflects that structure. Contract review between three and 50 pages costs $500. Short agreements are $250. Longer contracts are billed per page. Drafting from scratch is offered at a fixed fee. 

There is no running clock.

The premise is straightforward. If generative AI materially reduces the time required for standardized work, the cost base changes. And when the cost base changes, pricing models eventually follow.




From DSC:
This next item is not from Jordan, but may also be useful to some of you out there:

Want to Work at Legora, Harvey or Another Legal AI Startup? — from legallydisrupted.com by Zach Abramowitz
Podcast with a Biglaw Partner Who Now Occupies a Senior Role at Legora

In Episode 45 of Legally Disrupted, Zach and Kyle dive into why building tech workflows and writing AI prompts should absolutely be considered billable work. We also explore why AI commoditizing the legal “grinders” and “minders” means old-school social skills are about to become your single biggest competitive advantage. Finally, Kyle goes into great detail about exactly how he landed a top role at Legora and how others can do the same (hint: merely dropping your resume into a web portal is not enough).


 

 

AI and the Law: What Educators Need to Know About Responsible Use in a Rapidly Changing Landscape — from rdene915.com by Dr. Rachelle Dené Poth, JD

As both an attorney and educator who has spent more than eight years researching, teaching, presenting, and writing about AI, I have worked with schools across K–12 and higher education that are navigating these exact questions. The legal implications of AI are not barriers to innovation; rather, I consider them guardrails that help schools adopt technology responsibly. The key is protecting students, educators, and institutions while staying informed. Understanding the legal landscape, and any potential legal implications of using AI in classrooms, helps schools move forward with confidence rather than hesitation.

Sections of Rachelle’s posting include:

  • Why AI and the Law Matter in Education
  • Key Laws That Shape AI Use in Schools
  • Data Privacy and Vendor Responsibility
  • Transparency Builds Trust With Students and Families
  • Accessibility, Equity, and Emerging Legal Considerations
  • Teaching Digital Citizenship With AI Literacy
  • Supporting Schools and Organizations Through AI and Legal Guidance
  • Moving Forward With Confidence
 

Meta, YouTube found negligent in landmark social media addiction trial — by Ian Duncan
A Los Angeles jury awarded $3 million in compensation to a young woman who alleged she had become addicted to the platforms as a child.

A Los Angeles jury found social media giant Meta and video platform YouTube negligent in a landmark trial, awarding $3 million in compensation to a young woman who alleged she had become addicted to the companies’ platforms as a child.

The verdict came at the end of a month-long trial that featured testimony by Facebook founder Mark Zuckerberg and a day after a jury in New Mexico ordered Meta to pay $375 million in penalties for endangering children. The twin verdicts are signs that legal protections which for decades made tech companies seem almost impervious are beginning to crack, as lawyers accuse the platforms of putting addictive or otherwise harmful features into their platforms.

With the armor of Silicon Valley companies fractured, they will now have to size up their appetite for future courtroom battles. There are thousands more lawsuits waiting to be heard, with young internet users, parents, school districts and state attorneys general all seeking to hold the industry accountable.

 

 

Legal AI Access at 83%, But Trust Issues Remain — from artificiallawyer.com

A new survey of over 200 in-house and law firm leaders provides solid evidence that while AI tools are now ‘standard’ across our sector, trust in AI outputs fundamentally drives usage, along with ROI – and vice versa.

The data, from ALSP Factor, shows that 83% had ‘broad AI access’, which is up from 61% in 2025, and in itself is a very positive development that tells us legal AI is now becoming ubiquitous for commercial lawyers, with around 54% using such tools ‘often’.

 

Law Firm AI Adoption: So Many Choices — from abovethelaw.com by Stephen Embry
Firms need to recognize reality, define what their legal professionals need, and then determine how to adopt and govern the use of AI tools.

It’s tough to be a law firm managing partner in the age of AI. So many choices, so little time. It’s like the proverbial kid in the candy store who has so many choices that they either can’t pick out anything or reach for too much. We see evidence of the first option in 8am’s recent outstanding Legal Industry Report, authored by Niki Black.

8am’s Legal Industry Report
One thing that stood out in the report was the discrepancy between use of AI by individual legal professionals and what firms are doing when it comes to AI adoption and guidance.  Almost 75% of those who responded said they were using general purpose AI tools like ChatGPT and Claude for work purposes. That’s pretty significant.


Legalweek: It’s time to re-engineer how legal work is delivered — from legaltechnology.com by Caroline Hill

AI for good
While focusing on the risks of AI going wrong, it is only fair to mention the conversations I had around using AI for good.  Two in particular stand out.

The first is the news from Everlaw that its Everlaw for Good Program has, over the past year, supported more than 675 active cases across 235 organisations, and expanded its support to a growing network of non-profit organisations.

The program extends Everlaw’s technology to organisations working to advance access to justice. In a recent survey by Everlaw, 88% of legal aid professionals said they are optimistic about AI’s potential to help narrow the justice gap.

“Mission-driven organizations are increasingly handling complex investigations and litigation with limited resources,” said Joanne Sprague, head of Everlaw for Good. “Expanding access to powerful, easy-to-use technology helps level the playing field so these teams can uncover critical evidence, take on more complex matters, and yield stronger results for the communities they serve.”


LawNext on Location: Visiting Everlaw’s Headquarters For A Conversation with AJ Shankar, Founder and CEO — from lawnext.com by Bob Ambrogi

The bulk of our conversation focuses on generative AI, and how Everlaw has approached it differently than much of the market. Rather than bolting on a chatbot, AJ says, Everlaw embedded AI deliberately throughout the platform — document summarization, coding suggestions, deposition analysis, fact extraction — always grounding responses in the actual documents at hand and citing sources so users can verify the work. The December launch of Deep Dive, which lets litigators pose a question and get a synthesized, cited answer drawn from an entire document corpus in about a minute, is the feature AJ calls a “new era” for discovery — one he genuinely believes represents a categorical shift.

 

Americans’ retirement accounts – and hardship withdrawals – hit new highs. Here’s what to know — from weforum.org by Spencer Feingold

  • Last year, US retirement account balances rose at double-digit rates, driven by strong market performance and steady contributions.
  • At the same time, hardship withdrawals increased, highlighting growing short-term financial stress.
  • The trend underscores the importance of financial education and resilience to support long-term retirement security.

From DSC:
I’m hoping that we are doing a better job in the United States of educating our youth on investing, saving, and developing better legal knowledge (e.g., the need for wills, estate planning, trusts, etc.).

 

 

2026 Survey of College and University Presidents — from insidehighered.com, Liaison, & Jenzabar
Download and explore exclusive insights from the 2026 Survey of College and University Presidents to see how these campus leaders are responding to financial volatility, political interference, and rapid advances in AI — and where they believe the biggest risks and opportunities lie as they look toward 2030.

In this year’s survey, presidents share perspectives on:

  • How presidents assess the second Trump administration’s impact on higher education
  • Which emerging or evolving educational models they plan to add or expand in the coming years
  • How effective they believe higher education has been in shaping national conversations about AI
  • The issues presidents expect will have the greatest impact on higher education by 2030

 

 

U.S. Department of Labor Defines 5 Key Areas of AI Literacy — from campustechnology.com by Rhea Kelly

Key Takeaways

  • Department of Labor releases AI Literacy Framework: The framework defines AI literacy as competencies for using and evaluating AI responsibly, with a primary focus on generative AI in the workplace.
  • Framework outlines five core AI literacy areas: These include understanding AI principles, exploring real-world uses, directing AI effectively, evaluating AI outputs, and using AI responsibly.
  • Guidance for workforce and education systems: The framework also provides training principles and recommendations for workers, employers, education providers, and government agencies to expand AI education and training.
 

National Study of Special Education Spending — from air.org

Federal, state, and local policymakers and education leaders urgently need up-to-date national estimates for what is spent to provide special education services to inform their funding policies and budget for special education expenses.

The National Study of Special Education Spending’s (NSSES) purpose is to update our understanding of the costs of special education and related services. The study will collect information from a national sample of districts and schools about what is spent to educate students with disabilities, as well as what states and districts spend to operate their special education programs and comply with federal and state laws. The Institute of Education Sciences within the Department of Education has partnered with AIR, NORC at the University of Chicago, and Allovue, a PowerSchool Company, to design the study.

Pilot Study
A pilot study for the NSSES will take place during the 2024/25 and 2025/26 school years. The pilot study’s findings will help inform the study design for the full-scale national study, which is planned for the 2026/27 school year.

The timeline for the 2025/26 pilot study is:

  • Summer 2025: District recruitment
  • Fall 2025: School recruitment within participating districts and sampling students within participating schools
  • December 2025—February 2026: Data collection, including surveys with district and school staff and financial data from districts
  • Spring 2026: Analysis of pilot study data and preparation for full-scale study
 

Anthropic unveils Claude legal plugin and causes market meltdown — from legaltechnology.com

Generative AI vendor Anthropic has unveiled a legal plugin that helps customise its large language model Claude for legal tasks such as document review, sending public legal software stocks into a spin today (3 February).

Anthropic entering the legal tech fray comes as part of the launch of a number of different plugins that help users instruct Claude on how to get work done and what tools and data to pull from. A sales plugin, for example, could connect Claude to your CRM and knowledge base to help with prospect research and follow-ups. The legal plugin is described as being capable of, for example, reviewing documents, flagging risks, NDA triage, and tracking compliance. The significance is that Anthropic is shifting from model supplier to the application layer and workflow owner.

The announcement is hitting public publishing and legal software companies hard.


Also related/see:

Anthropic’s Legal Plugin for Claude Cowork May Be the Opening Salvo In A Competition Between Foundation Models and Legal Tech Incumbents — from lawnext.com by Bob Ambrogi

Two weeks after introducing a new general-purpose “agentic” work mode called Claude Cowork, Anthropic has now rolled out a legal plugin aimed squarely at the legal workflows of in-house counsel, including contract review, NDA triage, compliance checks, briefings and templated responses.

It is configurable to an organization’s own playbook and risk tolerances, and Anthropic explicitly frames it as assistance, not advice, cautioning that outputs should be reviewed by licensed attorneys.

It may sound like just another feature drop in a crowded AI market. But for legal tech, it is landing more like a tsunami than a drop. For the first time, a foundation-model company is packaging a legal workflow product directly into its platform, rather than merely supplying an API to legal-tech vendors.

 
© 2025 | Daniel Christian