Which Jobs Are Most at Risk From AI? New Anthropic Data Offers Clues. — from builtin.com by Matthew Urwin
Anthropic set out in its latest study to predict how artificial intelligence could impact the labor market. Instead, its findings raise more questions than they answer for tech workers as the U.S. government refuses to regulate the AI industry.

Summary:
In its latest labor market study, Anthropic found that artificial intelligence poses the greatest threat to software jobs, women and younger professionals. As the Trump administration takes a hands-off approach to AI, tech workers may be left to grapple with these findings on their own.


Matthew links to:

Labor market impacts of AI: A new measure and early evidence — from anthropic.com

Key findings

  • We introduce a new measure of AI displacement risk, observed exposure, that combines theoretical LLM capability and real-world usage data, weighting automated (rather than augmentative) and work-related uses more heavily
  • AI is far from reaching its theoretical capability: actual coverage remains a fraction of what’s feasible
  • Occupations with higher observed exposure are projected by the BLS to grow less through 2034
  • Workers in the most exposed professions are more likely to be older, female, more educated, and higher-paid
  • We find no systematic increase in unemployment for highly exposed workers since late 2022, though we find suggestive evidence that hiring of younger workers has slowed in exposed occupations
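The weighting idea behind "observed exposure" can be sketched in a few lines of code. This is a hypothetical illustration of the general approach described above (weighting automated and work-related uses more heavily), not Anthropic's actual formula; the weights and record fields are invented for the example.

```python
# Hypothetical sketch of a weighted "observed exposure" score.
# Weights and record fields are illustrative, not Anthropic's method.

def observed_exposure(usage_records, w_automated=2.0, w_work=2.0):
    """Average per-record weight, normalized so the maximum is 1.0.

    Each record is a dict with boolean flags:
      'automated'    - AI performed the task (vs. augmenting a human)
      'work_related' - the use occurred in a work context
    """
    if not usage_records:
        return 0.0
    max_weight = w_automated * w_work
    total = 0.0
    for record in usage_records:
        weight = 1.0
        if record.get("automated"):
            weight *= w_automated
        if record.get("work_related"):
            weight *= w_work
        total += weight
    return total / (len(usage_records) * max_weight)

records = [
    {"automated": True, "work_related": True},    # weight 4
    {"automated": False, "work_related": True},   # weight 2
    {"automated": False, "work_related": False},  # weight 1
]
print(round(observed_exposure(records), 3))  # 7/12 ≈ 0.583
```

An occupation whose AI usage is mostly automated and work-related would score near 1.0 under this toy scheme, while casual augmentative use scores much lower.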

 

Summary: Accessible AI has killed traditional signals of legitimacy.

Experiments show $20 consumer tools can easily bypass verification. The solution is shifting toward contextual proof that verifies human uniqueness without exposing identity.


After Hours 1: The legal profession’s new value proposition — from jordanfurlong.substack.com by Jordan Furlong
The days of selling legal tasks by the hour are ending. Lawyers’ future value lies in safeguarding clients’ legal journeys by overcoming the most challenging obstacles on the way. Part 1 of 2.

As a result, legal work is dividing into two spheres, the first larger than the second: what Gen AI can satisfactorily address, and what it can’t.

  • Sphere 1: Legal Production. This is all the specialized intellectual work involved in generating legal solutions: researching, issue-spotting, summarizing, synthesizing, drafting, revising, reasoning, and analyzing. This is the bulk of lawyers’ traditional activity and billed hours. In future, it will be done faster, cheaper, and increasingly better with machines — either by clients themselves, or embedded in systems and platforms that reduce the need for lawyer involvement.
  • Sphere 2: Legal Judgment. This is higher-value work defined by the unpredictability, complexity, and impact of its challenges. In this sphere, you’ll find hard-decision advice, guidance under uncertainty, systematic dispute avoidance, strategic counsel, critical advocacy, risk prioritization, and high-stakes accountability. It’s likely (but far from certain) that this work will remain outside the reach of Gen AI. This is the sphere that holds the potential to support a future legal profession.

But not every legal journey is so simple or safe that the client can go it alone. Many times, Point B is more like Point F or Point R: a long and tortuous distance away. Many AI-generated maps will suggest a clear and direct route that bears little resemblance to the messy tangles of reality. On even moderately complex legal journeys, the unwelcome and the unexpected are always lurking. Something arises that was nowhere on the map, and until it gets resolved, the client can’t move any further towards their destination.


Below are some items from Jordan’s article — or by following a rabbit trail from his posting:


AI-Native Firms, Built by Private Equity, Will Strain Legacy Model — from news.bloomberglaw.com by Eric Dodson Greenberg

The emergence of AI-native law firms reveals the limits of a fixed binary that has characterized the legal market over the last year.

The straightest path to AI law firms isn’t innovation within the legacy model, or capital investing around it, but external capital being deployed to build competitors to legacy firms. These ventures use AI and narrow regulatory openings to create tech-enabled law firms from scratch.

Not acquire them. Not invest around them.

Build them.

This third path is no longer theoretical.

The $3,500 Hour vs. The $500 Contract — from legaltechnologyhub.com by Brandi Pack

While rates at the top continue climbing, the operational foundation of legal work is being rebuilt.

Its pricing reflects that structure. Contract review for documents between three and 50 pages costs $500. Short agreements are $250. Longer contracts are billed per page. Drafting from scratch is offered at a fixed fee.

There is no running clock.

The premise is straightforward. If generative AI materially reduces the time required for standardized work, the cost base changes. And when the cost base changes, pricing models eventually follow.




From DSC:
This next item is not from Jordan, but may also be useful to some of you out there:

Want to Work at Legora, Harvey or Another Legal AI Startup? — from legallydisrupted.com by Zach Abramowitz
Podcast with a Biglaw Partner Who Now Occupies a Senior Role at Legora

In Episode 45 of Legally Disrupted, Zach Abramowitz and Kyle dive into why building tech workflows and writing AI prompts should absolutely be considered billable work. We also explore why AI commoditizing the legal “grinders” and “minders” means old-school social skills are about to become your single biggest competitive advantage. Finally, Kyle goes into great detail about exactly how he landed a top role at Legora and how others can do the same (hint: merely dropping your resume into a web portal is not enough).


 

 

AI and the Law: What Educators Need to Know About Responsible Use in a Rapidly Changing Landscape — from rdene915.com by Dr. Rachelle Dené Poth, JD

As both an attorney and educator who has spent more than eight years researching, teaching, presenting, and writing about AI, I have worked with schools across K–12 and higher education that are navigating these exact questions. The legal implications of AI are not barriers to innovation; I consider them guardrails that help schools adopt technology responsibly. The key is protecting students, educators, and institutions while staying informed. Understanding the legal landscape and the potential legal implications of using AI in classrooms helps schools move forward with confidence rather than hesitation.

Sections of Rachelle’s posting include:

  • Why AI and the Law Matter in Education
  • Key Laws That Shape AI Use in Schools
  • Data Privacy and Vendor Responsibility
  • Transparency Builds Trust With Students and Families
  • Accessibility, Equity, and Emerging Legal Considerations
  • Teaching Digital Citizenship With AI Literacy
  • Supporting Schools and Organizations Through AI and Legal Guidance
  • Moving Forward With Confidence
 

Meta, YouTube found negligent in landmark social media addiction trial — by Ian Duncan
A Los Angeles jury awarded $3 million in compensation to a young woman who alleged she had become addicted to the platforms as a child.

A Los Angeles jury found social media giant Meta and video platform YouTube negligent in a landmark trial, awarding $3 million in compensation to a young woman who alleged she had become addicted to the companies’ platforms as a child.

The verdict came at the end of a month-long trial that featured testimony by Facebook founder Mark Zuckerberg and a day after a jury in New Mexico ordered Meta to pay $375 million in penalties for endangering children. The twin verdicts are signs that legal protections which for decades made tech companies seem almost impervious are beginning to crack, as lawyers accuse the platforms of putting addictive or otherwise harmful features into their platforms.

With the armor of Silicon Valley companies fractured, they will now have to size up their appetite for future courtroom battles. There are thousands more lawsuits waiting to be heard, with young internet users, parents, school districts and state attorneys general all seeking to hold the industry accountable.

 

 

Legal AI Access at 83%, But Trust Issues Remain — from artificiallawyer.com

A new survey of over 200 inhouse and law firm leaders provides solid evidence that while AI tools are now ‘standard’ across our sector, trust in AI outputs fundamentally drives usage, along with ROI – and vice versa.

The data, from ALSP Factor, shows that 83% had ‘broad AI access’, up from 61% in 2025. That in itself is a very positive development, telling us legal AI is becoming ubiquitous for commercial lawyers, with around 54% using such tools ‘often’.

 

Law Firm AI Adoption: So Many Choices — from abovethelaw.com by Stephen Embry
Firms need to recognize reality, define what their legal professionals need, and then determine how to adopt and govern the use of AI tools.

It’s tough to be a law firm managing partner in the age of AI. So many choices, so little time. It’s like the proverbial kid in the candy store who has so many choices that they either can’t pick out anything or reach for too much. We see evidence of the first option in 8am’s recent outstanding Legal Industry Report, authored by Niki Black.

8am’s Legal Industry Report
One thing that stood out in the report was the discrepancy between individual legal professionals’ use of AI and what firms are doing when it comes to AI adoption and guidance. Almost 75% of respondents said they were using general purpose AI tools like ChatGPT and Claude for work purposes. That’s pretty significant.


Legalweek: It’s time to re-engineer how legal work is delivered — from legaltechnology.com by Caroline Hill

AI for good
While focusing on the risks of AI going wrong, it is only fair to mention the conversations I had around using AI for good.  Two in particular stand out.

The first is the news from Everlaw that its Everlaw for Good Program has, over the past year, supported more than 675 active cases across 235 organisations, and expanded its support to a growing network of non-profit organisations.

The program extends Everlaw’s technology to organisations working to advance access to justice. In a recent survey by Everlaw, 88% of legal aid professionals said they are optimistic about AI’s potential to help narrow the justice gap.

“Mission-driven organizations are increasingly handling complex investigations and litigation with limited resources,” said Joanne Sprague, head of Everlaw for Good. “Expanding access to powerful, easy-to-use technology helps level the playing field so these teams can uncover critical evidence, take on more complex matters, and yield stronger results for the communities they serve.”


LawNext on Location: Visiting Everlaw’s Headquarters For A Conversation with AJ Shankar, Founder and CEO — from lawnext.com by Bob Ambrogi

The bulk of our conversation focuses on generative AI, and how Everlaw has approached it differently than much of the market. Rather than bolting on a chatbot, AJ says, Everlaw embedded AI deliberately throughout the platform — document summarization, coding suggestions, deposition analysis, fact extraction — always grounding responses in the actual documents at hand and citing sources so users can verify the work. The December launch of Deep Dive, which lets litigators pose a question and get a synthesized, cited answer drawn from an entire document corpus in about a minute, is the feature AJ calls a “new era” for discovery — one he genuinely believes represents a categorical shift.
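The "grounded and cited" pattern AJ describes, answering only from the documents at hand and attaching sources so users can verify, can be sketched generically. This is not Everlaw's implementation; the toy keyword scoring, document IDs, and result structure are all hypothetical, standing in for real retrieval and generation.

```python
# Generic sketch of grounded question answering with citations:
# retrieve the most relevant documents, answer only from them, and
# return source IDs so the user can verify. Illustrative only; real
# systems use semantic retrieval and an LLM, not keyword overlap.

def retrieve(question, corpus, top_k=2):
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = []
    for doc_id, text in corpus.items():
        overlap = len(q_words & set(text.lower().split()))
        scored.append((overlap, doc_id))
    scored.sort(reverse=True)
    return [doc_id for score, doc_id in scored[:top_k] if score > 0]

def grounded_answer(question, corpus):
    """Answer from retrieved documents only, with citations attached."""
    sources = retrieve(question, corpus)
    if not sources:
        return {"answer": None, "citations": []}  # refuse, don't guess
    excerpt = " ".join(corpus[d] for d in sources)
    return {"answer": excerpt, "citations": sources}

corpus = {
    "DOC-001": "The deposition was taken on March 3.",
    "DOC-002": "The contract term is five years.",
}
result = grounded_answer("When was the deposition taken?", corpus)
print(result["citations"])
```

The key design choice is the empty-citations branch: when nothing in the corpus supports an answer, the system declines rather than fabricating, which is what makes the output verifiable.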

 

Americans’ retirement accounts – and hardship withdrawals – hit new highs. Here’s what to know — from weforum.org by Spencer Feingold

  • Last year, US retirement account balances rose at double-digit rates, driven by strong market performance and steady contributions.
  • At the same time, hardship withdrawals increased, highlighting growing short-term financial stress.
  • The trend underscores the importance of financial education and resilience to support long-term retirement security.

From DSC:
I’m hoping that we are doing a better job in the United States on educating our youth on investing, saving, and developing better legal knowledge (i.e., the need for wills, estate planning, trusts, etc.).

 

 

2026 Survey of College and University Presidents — from insidehighered.com, Liaison, & Jenzabar
Download and explore exclusive insights from the 2026 Survey of College and University Presidents to see how these campus leaders are responding to financial volatility, political interference, rapid advances in AI, and where they believe the biggest risks and opportunities lie as they look toward 2030.

In this year’s survey, presidents share perspectives on:

  • How presidents assess the second Trump administration’s impact on higher education
  • Which emerging or evolving educational models they plan to add or expand in the coming years
  • How effective they believe higher education has been in shaping national conversations about AI
  • The issues presidents expect will have the greatest impact on higher education by 2030

 

 

U.S. Department of Labor Defines 5 Key Areas of AI Literacy — from campustechnology.com by Rhea Kelly

Key Takeaways

  • Department of Labor releases AI Literacy Framework: The framework defines AI literacy as competencies for using and evaluating AI responsibly, with a primary focus on generative AI in the workplace.
  • Framework outlines five core AI literacy areas: These include understanding AI principles, exploring real-world uses, directing AI effectively, evaluating AI outputs, and using AI responsibly.
  • Guidance for workforce and education systems: The framework also provides training principles and recommendations for workers, employers, education providers, and government agencies to expand AI education and training.
 

National Study of Special Education Spending — from air.org

Federal, state, and local policymakers and education leaders urgently need up-to-date national estimates for what is spent to provide special education services to inform their funding policies and budget for special education expenses.

The purpose of the National Study of Special Education Spending (NSSES) is to update our understanding of the costs of special education and related services. The study will collect information from a national sample of districts and schools about what is spent to educate students with disabilities, as well as what states and districts spend to operate their special education programs and comply with federal and state laws. The Institute of Education Sciences within the Department of Education has partnered with AIR, NORC at the University of Chicago, and Allovue, a PowerSchool Company, to design the study.

Pilot Study
A pilot study for the NSSES will take place during the 2024/25 and 2025/26 school years. The pilot study’s findings will help inform the design of the full-scale national study, which is planned for the 2026/27 school year.

The timeline for the 2025/26 pilot study is:

  • Summer 2025: District recruitment
  • Fall 2025: School recruitment within participating districts and sampling students within participating schools
  • December 2025—February 2026: Data collection, including surveys with district and school staff and financial data from districts
  • Spring 2026: Analysis of pilot study data and preparation for full-scale study
 

Anthropic unveils Claude legal plugin and causes market meltdown — from legaltechnology.com

Generative AI vendor Anthropic has unveiled a legal plugin that helps customise its large language model Claude for legal tasks such as document review, sending public legal software stocks into a spin today (3 February).

Anthropic’s entry into the legal tech fray comes as part of the launch of a number of plugins that help users instruct Claude on how to get work done and what tools and data to pull from. A sales plugin, for example, could connect Claude to your CRM and knowledge base to help with prospect research and follow-ups. The legal plugin is described as being capable of, for example, reviewing documents, flagging risks, triaging NDAs, and tracking compliance. The significance is that Anthropic is shifting from model supplier to application layer and workflow owner.

The announcement is hitting public publishing and legal software companies hard.


Also related/see:

Anthropic’s Legal Plugin for Claude Cowork May Be the Opening Salvo In A Competition Between Foundation Models and Legal Tech Incumbents — from lawnext.com by Bob Ambrogi

Two weeks after introducing a new general-purpose “agentic” work mode called Claude Cowork, Anthropic has now rolled out a legal plugin aimed squarely at the legal workflows of in-house counsel, including contract review, NDA triage, compliance checks, briefings and templated responses.

It is configurable to an organization’s own playbook and risk tolerances, and Anthropic explicitly frames it as assistance, not advice, cautioning that outputs should be reviewed by licensed attorneys.

It may sound like just another feature drop in a crowded AI market. But for legal tech, it is landing more like a tsunami than a drop. For the first time, a foundation-model company is packaging a legal workflow product directly into its platform, rather than merely supplying an API to legal-tech vendors.

 

The Learning and Employment Records (LER) Report for 2026: Building the infrastructure between learning and work — from smartresume.com; with thanks to Paul Fain for this resource

Executive Summary (excerpt)

This report documents a clear transition now underway: LERs are moving from small experiments to systems people and organizations expect to rely on. Adoption remains early and uneven, but the forces reshaping the ecosystem are no longer speculative. Federal policy signals, state planning cycles, standards maturation, and employer behavior are aligning in ways that suggest 2026 will mark a shift from exploration to execution.

Across interviews with federal leaders, state CIOs, standards bodies, and ecosystem builders, a consistent theme emerged: the traditional model—where institutions control learning and employment records—no longer fits how people move through education and work. In its place, a new model is being actively designed—one in which individuals hold portable, verifiable records that systems can trust without centralizing control.

Most states are not yet operating this way. But planning timelines, RFP language, and federal signals indicate that many will begin building toward this model in early 2026.

As the ecosystem matures, another insight becomes unavoidable: records alone are not enough. Value emerges only when trusted records can be interpreted through shared skill languages, reused across contexts, and embedded into the systems and marketplaces where decisions are made.

Learning and Employment Records are not a product category. They are a data layer—one that reshapes how learning, work, and opportunity connect over time.

This report is written for anyone seeking to understand how LERs are beginning to move from concept to practice. Whether readers are new to the space or actively exploring implementation, the report focuses on observable signals, emerging patterns, and the practical conditions required to move from experimentation toward durable infrastructure.
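The core property the report describes, records that individuals hold and that systems can trust without a central authority re-checking each one, can be illustrated with a toy tamper-evident record. Real LER ecosystems use public-key signatures and open standards such as W3C Verifiable Credentials and Open Badges; HMAC is used below only to keep the sketch dependency-free, and the record fields are hypothetical.

```python
# Toy sketch of a tamper-evident learning record. Real LER systems use
# public-key signatures and open standards (e.g. W3C Verifiable
# Credentials, Open Badges); HMAC keeps this sketch dependency-free.
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-secret"  # stands in for an issuer's signing key

def issue_record(learner, achievement):
    """Issuer signs a canonical serialization of the record."""
    record = {"learner": learner, "achievement": achievement}
    payload = json.dumps(record, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"record": record, "signature": sig}

def verify_record(credential):
    """Recompute the signature; any edit to the record breaks it."""
    payload = json.dumps(credential["record"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

cred = issue_record("Jane Doe", "Data Analysis Certificate")
print(verify_record(cred))            # True for the unmodified record
cred["record"]["achievement"] = "PhD"
print(verify_record(cred))            # False after tampering
```

The point of the sketch is the report's "data layer" claim: the learner carries the credential, and any system holding the verification key can check it without calling back to a central registry.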

 

“The building blocks for a global, interoperable skills ecosystem are already in place. As education and workforce alignment accelerates, the path toward trusted, machine-readable credentials is clear. The next phase depends on credentials that carry value across institutions, industries, states, and borders; credentials that move with learners wherever their education and careers take them. The question now isn’t whether to act, but how quickly we move.”

– Curtiss Barnes, Chief Executive Officer, 1EdTech

 


The above item was from Paul Fain’s recent posting, which includes the following excerpt:

SmartResume just published a guide for making sense of this rapidly expanding landscape. The LER Ecosystem Report was produced in partnership with AACRAO, Credential Engine, 1EdTech, HR Open Standards, and the U.S. Chamber of Commerce Foundation. It was based on interviews and feedback gathered over three years from 100+ leaders across education, workforce, government, standards bodies, and tech providers.

The tools are available now to create the sort of interoperable ecosystem that can make talent marketplaces a reality, the report argues. Meanwhile, federal policy moves and bipartisan attention to LERs are accelerating action at the state level.

“For state leaders, this creates a practical inflection point,” says the report. “LERs are shifting from an innovation discussion to an infrastructure planning conversation.”

 

25 Big Ideas that will define 2026 — from linkedin.com by LinkedIn News
This year’s predictions capture a world in flux, where technology and humanity will press closer than ever, fueling new opportunities and tensions.

Blockchain: Blockchain technology will create new ways for creators to keep more of their revenue by enabling them to host their own content, bypassing traditional social media platforms that take a cut of their earnings.

AI: Artificial intelligence will enhance creators’ ability to scale their personal brands exponentially — producing more content, creating virtual influencers and expanding reach in ways we’ve never seen.

Laws around artificial intelligence in mental health care are set to change dramatically in 2026, in the wake of lawsuits alleging harm or “psychosis” linked to AI tools. After years of rapid adoption — and little oversight — regulators will move to treat therapy chatbots more like medical devices than lifestyle apps.

Small businesses — which make up 90% of companies globally — will be the top destination for young jobseekers in 2026.

Generative engine optimization (GEO) is set to replace search engine optimization (SEO) as the way brands get discovered in the year ahead. As consumers turn to AI chatbots, agentic workflows and answer engines, appearing prominently in generative outputs will matter more than ranking in search engines.

 

7 Legal Tech Trends That Will Reshape Every Business In 2026 — from forbes.com by Bernard Marr

Here are the trends that will matter most.

  1. AI Agents As Legal Assistants
  2. AI As A Driver Of Business Strategy
  3. Automation In Judicial Administration
  4. Always-On Compliance Monitoring
  5. Cybersecurity As An Essential Survival Tool
  6. Predictive Litigation
  7. Compliance As Part Of The Everyday Automation Fabric

According to the Thomson Reuters Future Of Professionals report, most experts already expect AI to transform their work within five years, with many viewing it as a positive force. The challenge now is clear: legal and compliance leaders must understand the tools reshaping their field and prepare their teams for a very different way of working in 2026.


Addendum on 12/17/25:

 

Beyond ChatGPT: Why In-House Counsel Need Purpose Built AI (Cecilia Ziniti, CEO – GC AI) — from tlpodcast.com

This episode features a conversation with Cecilia Ziniti, Co-Founder and CEO of GC.AI. Cecilia traces her career from the early days of the internet to founding an AI-driven legal platform for in-house counsel.

Cecilia shares her journey, starting as a paralegal at Yahoo in the early 2000s, working on nascent legal issues related to the internet. She discusses her time at Morrison & Foerster and her role at Amazon, where she was an early member of the Alexa team, gaining deep insight into AI’s potential before the rise of modern large language models (LLMs).

The core discussion centers on the creation of GC AI, a legal AI tool specifically designed for in-house counsel. Cecilia explains why general LLMs like ChatGPT are insufficient for professional legal work—lacking proper citation, context, and security/privilege protections. She highlights the app’s features, including enhanced document analysis (RAG implementation), a Word Add-in, and workflow-based playbooks to deliver accurate, client-forward legal analysis. The episode also touches on the current state of legal tech, the growing trend of bringing legal work in-house, and the potential for AI to shift the dynamics of the billable hour.

 
© 2025 | Daniel Christian