Which Jobs Are Most at Risk From AI? New Anthropic Data Offers Clues. — from builtin.com by Matthew Urwin
Anthropic set out in its latest study to predict how artificial intelligence could impact the labor market. Instead, its findings raise more questions than answers for tech workers as the U.S. government refuses to regulate the AI industry.

Summary:
In its latest labor market study, Anthropic found that artificial intelligence poses the greatest threat to software jobs, women and younger professionals. As the Trump administration takes a hands-off approach to AI, tech workers may be left to grapple with these findings on their own.


Matthew links to:

Labor market impacts of AI: A new measure and early evidence — from anthropic.com

Key findings

  • We introduce a new measure of AI displacement risk, observed exposure, that combines theoretical LLM capability and real-world usage data, weighting automated (rather than augmentative) and work-related uses more heavily
  • AI is far from reaching its theoretical capability: actual coverage remains a fraction of what’s feasible
  • Occupations with higher observed exposure are projected by the BLS to grow less through 2034
  • Workers in the most exposed professions are more likely to be older, female, more educated, and higher-paid
  • We find no systematic increase in unemployment for highly exposed workers since late 2022, though we find suggestive evidence that hiring of younger workers has slowed in exposed occupations
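The weighting idea in the first bullet — blending theoretical capability with real-world usage, counting automated and work-related uses more heavily — can be sketched in a few lines. This is purely illustrative: the weights, field names, and scoring rule below are my assumptions, not Anthropic's actual methodology.

```python
# Hypothetical sketch of an "observed exposure"-style score.
# Weights and record fields are illustrative assumptions only.

def observed_exposure(capability: float, usage: list[dict],
                      w_automation: float = 2.0,
                      w_work: float = 1.5) -> float:
    """Blend theoretical LLM capability (0-1) with usage records,
    weighting automated and work-related uses more heavily."""
    if not usage:
        return 0.0
    weighted = 0.0
    total_weight = 0.0
    for u in usage:
        w = 1.0
        if u.get("automated"):      # full task delegation, not augmentation
            w *= w_automation
        if u.get("work_related"):   # on-the-job use, not personal
            w *= w_work
        weighted += w * u["intensity"]   # intensity of use, in [0, 1]
        total_weight += w
    usage_score = weighted / total_weight
    # Observed exposure is capped by what the model can theoretically do
    return capability * usage_score

score = observed_exposure(
    capability=0.8,
    usage=[{"automated": True, "work_related": True, "intensity": 0.6},
           {"automated": False, "work_related": False, "intensity": 0.2}],
)
# An occupation can have high theoretical capability but low observed
# exposure if real-world usage is light — the report's second bullet.
```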

 

What the Future of Learning Looks Like in the Era of AI — from the Center for Academic Innovation at the University of Michigan, by Sean Corp

AI & the Future of Learning Summit brings industry, education leaders together to discuss higher education’s opportunity to lead, what students need, and what partnerships are possible

As artificial intelligence rapidly reshapes the nature of work and learning, speakers at the University of Michigan’s AI & the Future of Learning Summit delivered a clear message: higher education must take a leading role in defining what comes next.

One CEO of a leading educational technology company put it like this: “The only bad thing would be universities standing still.”

Universities must embrace their roles as providers of continuous, lifelong learning that evolves alongside technological change. 


This shift is already affecting early-career pathways. Employers are placing greater emphasis on experience, while traditional entry-level roles are becoming less accessible. There is often a gap between what a credential represents and the expectations of employers.

That gap is particularly evident in access to internships. Chris Parrish, co-founder and president of Podium, noted that millions of students compete for a limited number of internships each year, making it increasingly difficult to gain the experience employers demand.

“If you miss out on an internship, you’re twice as likely to be unemployed,” Parrish said. 

 

Summary: Accessible AI has killed traditional signals of legitimacy.

Experiments show $20 consumer tools can easily bypass verification. The solution is shifting toward contextual proof that verifies human uniqueness without exposing identity.


After Hours 1: The legal profession’s new value proposition — from jordanfurlong.substack.com by Jordan Furlong
The days of selling legal tasks by the hour are ending. Lawyers’ future value lies in safeguarding clients’ legal journeys by overcoming the most challenging obstacles on the way. Part 1 of 2.

As a result, legal work is dividing into two spheres, the first larger than the second: what Gen AI can satisfactorily address, and what it can’t.

  • Sphere 1: Legal Production. This is all the specialized intellectual work involved in generating legal solutions: researching, issue-spotting, summarizing, synthesizing, drafting, revising, reasoning, and analyzing. This is the bulk of lawyers’ traditional activity and billed hours. In future, it will be done faster, cheaper, and increasingly better with machines — either by clients themselves, or embedded in systems and platforms that reduce the need for lawyer involvement.
  • Sphere 2: Legal Judgment. This is higher-value work defined by the unpredictability, complexity, and impact of its challenges. In this sphere, you’ll find hard-decision advice, guidance under uncertainty, systematic dispute avoidance, strategic counsel, critical advocacy, risk prioritization, and high-stakes accountability. It’s likely (but far from certain) that this work will remain outside the reach of Gen AI. This is the sphere that holds the potential to support a future legal profession.

But not every legal journey is so simple or safe that the client can go it alone. Many times, Point B is more like Point F or Point R: a long and tortuous distance away. Many AI-generated maps will suggest a clear and direct route that bears little resemblance to the messy tangles of reality. On even moderately complex legal journeys, the unwelcome and the unexpected are always lurking. Something arises that was nowhere on the map, and until it gets resolved, the client can’t move any further towards their destination.


Below are some items from Jordan’s article — or by following a rabbit trail from his posting:


AI-Native Firms, Built by Private Equity, Will Strain Legacy Model — from news.bloomberglaw.com by Eric Dodson Greenberg

The emergence of AI-native law firms reveals the limits of a fixed binary that has characterized the legal market over the last year.

The straightest path to AI law firms isn’t innovation within the legacy model, or capital investing around it, but external capital being deployed to build competitors to legacy firms. These ventures use AI and narrow regulatory openings to build tech-enabled law firms from scratch.

Not acquire them. Not invest around them.

Build them.

This third path is no longer theoretical.

The $3,500 Hour vs. The $500 Contract — from legaltechnologyhub.com by Brandi Pack

While rates at the top continue climbing, the operational foundation of legal work is being rebuilt.

Its pricing reflects that structure. Contract review between three and 50 pages costs $500. Short agreements are $250. Longer contracts are billed per page. Drafting from scratch is offered at a fixed fee. 

There is no running clock.

The premise is straightforward. If generative AI materially reduces the time required for standardized work, the cost base changes. And when the cost base changes, pricing models eventually follow.
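The fee schedule described above is simple enough to express as a function. A minimal sketch, assuming a hypothetical per-page rate and a sub-three-page threshold for “short agreements” (the article quotes only the $250, $500, and per-page tiers):

```python
# Illustrative flat-fee schedule from the article's description.
# The per-page rate and the "short agreement" cutoff are assumptions.

def contract_review_fee(pages: int, per_page_rate: int = 15) -> int:
    """Flat-fee contract review: no running clock, price set by length."""
    if pages < 3:
        return 250                   # short agreements
    if pages <= 50:
        return 500                   # standard contracts, 3-50 pages
    return pages * per_page_rate     # longer contracts billed per page

fee = contract_review_fee(10)   # a 10-page contract: 500
```

The point of the exercise: once the cost base is a fixed function of document length rather than hours worked, pricing becomes predictable for the client and scalable for the firm.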




From DSC:
This next item is not from Jordan, but may also be useful to some of you out there:

Want to Work at Legora, Harvey or Another Legal AI Startup? — from legallydisrupted.com by Zach Abramowitz
Podcast with a Biglaw Partner Who Now Occupies a Senior Role at Legora

In Episode 45 of Legally Disrupted, Zach Abramowitz and Kyle dive into why building tech workflows and writing AI prompts should absolutely be considered billable work. They also explore why AI’s commoditization of the legal “grinders” and “minders” means old-school social skills are about to become your single biggest competitive advantage. Finally, Kyle goes into great detail about exactly how he landed a top role at Legora and how others can do the same (hint: merely dropping your resume into a web portal is not enough).


 

 

The quest to build a better AI tutor — from hechingerreport.org by Jill Barshay
Researchers make progress with an older ed tech idea: personalized practice

One promising idea has less to do with how an AI tutor explains concepts and more with what it asks students to practice next.

A team at the University of Pennsylvania, which included some AI skeptics, recently tested this approach in a study of close to 800 Taiwanese high school students learning Python programming. All the students used the same AI tutor, which was designed not to give away answers.

But there was one key difference. Half the students were randomly assigned to a fixed sequence of practice problems, progressing from easy to hard. The other half received a personalized sequence with the AI tutor continuously adjusting the difficulty of each problem based on how the student was performing and interacting with the chatbot.

The idea is based on what educators call the “zone of proximal development.” When problems are too easy, students get bored. When they’re too hard, students get frustrated. The goal is to keep students in a sweet spot: challenged, but not overwhelmed.

The researchers found that students in the personalized group did better on a final exam than students in the fixed problem group. The difference was characterized as the equivalent of 6 to 9 months of additional schooling, an eye-catching claim for an after-school online course that lasted only five months.

To personalize the sequence, Chung’s team combined a large language model with a separate machine-learning algorithm that analyzes how students interact with the online course platform — how they answer the practice questions, how many times they revise or edit their code, and the quality of their conversations with the chatbot — and uses that information to decide which problem to serve up next.
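The core selection loop — keep students in the zone of proximal development by nudging difficulty up when they coast and down when they struggle — can be sketched as a toy rule. The thresholds and scoring here are illustrative assumptions, not the Penn team’s actual algorithm:

```python
# Toy zone-of-proximal-development problem selector.
# Thresholds (0.5 / 0.85) and the windowed-accuracy rule are
# illustrative assumptions, not the study's actual method.

def next_difficulty(current: int, recent_correct: list[bool],
                    low: float = 0.5, high: float = 0.85,
                    min_d: int = 1, max_d: int = 10) -> int:
    """Raise difficulty when the student is coasting, lower it when
    they are struggling, otherwise hold steady."""
    if not recent_correct:
        return current
    rate = sum(recent_correct) / len(recent_correct)
    if rate > high:                       # too easy: risk of boredom
        return min(current + 1, max_d)
    if rate < low:                        # too hard: risk of frustration
        return max(current - 1, min_d)
    return current                        # the "sweet spot"

next_difficulty(5, [True, True, True, True])    # → 6
next_difficulty(5, [False, False, False, True])  # → 4
```

A real system would fold in richer signals (revision counts, chat quality, as the article describes), but the control loop is the same: difficulty tracks performance rather than a fixed sequence.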

 

AI and the Law: What Educators Need to Know About Responsible Use in a Rapidly Changing Landscape — from rdene915.com by Dr. Rachelle Dené Poth, JD

As both an attorney and educator who has spent more than eight years researching, teaching, presenting, and writing about AI, I have worked with schools across K–12 and higher education that are navigating these exact questions. The legal implications of AI are not barriers to innovation; I consider them guardrails that assist schools in adopting technology responsibly. The key is protecting students, educators, and institutions while staying informed. Understanding the legal landscape, and any potential legal implications of using AI in classrooms, helps schools move forward with confidence rather than hesitation.

Sections of Rachelle’s posting include:

  • Why AI and the Law Matter in Education
  • Key Laws That Shape AI Use in Schools
  • Data Privacy and Vendor Responsibility
  • Transparency Builds Trust With Students and Families
  • Accessibility, Equity, and Emerging Legal Considerations
  • Teaching Digital Citizenship With AI Literacy
  • Supporting Schools and Organizations Through AI and Legal Guidance
  • Moving Forward With Confidence
 

Meta, YouTube found negligent in landmark social media addiction trial — by Ian Duncan
A Los Angeles jury awarded $3 million in compensation to a young woman who alleged she had become addicted to the platforms as a child.

A Los Angeles jury found social media giant Meta and video platform YouTube negligent in a landmark trial, awarding $3 million in compensation to a young woman who alleged she had become addicted to the companies’ platforms as a child.

The verdict came at the end of a month-long trial that featured testimony by Facebook founder Mark Zuckerberg and a day after a jury in New Mexico ordered Meta to pay $375 million in penalties for endangering children. The twin verdicts are signs that legal protections which for decades made tech companies seem almost impervious are beginning to crack, as lawyers accuse the platforms of putting addictive or otherwise harmful features into their platforms.

With the armor of Silicon Valley companies fractured, they will now have to size up their appetite for future courtroom battles. There are thousands more lawsuits waiting to be heard, with young internet users, parents, school districts and state attorneys general all seeking to hold the industry accountable.

 

 

From DSC:
I have been proposing that the AI-based learning platform of the future will be constantly doing this — every single day. It will know what the in-demand skills are — at any given moment in time. It will then be able to direct you to resources that will help you gain those skills. Though in my vision, the system is querying actual/open job descriptions, not analyzing learning data from enterprise learners. Perhaps I should add that to the vision.


Coursera’s Job Skills Report 2026: Top skills for your students — from coursera.org

The Job Skills Report 2026 analyzes learning data from more than 6 million enterprise learners to identify the future job skills organizations need most. It’s designed for HR and L&D leaders; data, IT, and software & product development leaders; higher education administrators; and government agencies seeking actionable insights on workforce skills trends and AI-driven transformation.

Drawing on data from 6 million enterprise learners across nearly 7,000 organizations, the Job Skills Report 2026 guides you through the skills reshaping the global economy. This year’s analysis spans Data, IT, and Software & Product Development—and the Generative AI skills becoming essential for every role.

 
 
 

Law Firm AI Adoption: So Many Choices — from abovethelaw.com by Stephen Embry
Firms need to recognize reality, define what their legal professionals need, and then determine how to adopt and govern the use of AI tools.

It’s tough to be a law firm managing partner in the age of AI. So many choices, so little time. It’s like the proverbial kid in the candy store who has so many choices that they either can’t pick out anything or reach for too much. We see evidence of the first option in 8am’s recent outstanding Legal Industry Report, authored by Niki Black.

8am’s Legal Industry Report
One thing that stood out in the report was the discrepancy between use of AI by individual legal professionals and what firms are doing when it comes to AI adoption and guidance.  Almost 75% of those who responded said they were using general purpose AI tools like ChatGPT and Claude for work purposes. That’s pretty significant.


Legalweek: It’s time to re-engineer how legal work is delivered — from legaltechnology.com by Caroline Hill

AI for good
While focusing on the risks of AI going wrong, it is only fair to mention the conversations I had around using AI for good.  Two in particular stand out.

The first is the news from Everlaw that its Everlaw for Good Program has, over the past year, supported more than 675 active cases across 235 organisations, and expanded its support to a growing network of non-profit organisations.

The program extends Everlaw’s technology to organisations working to advance access to justice. In a recent survey by Everlaw, 88% of legal aid professionals said they are optimistic about AI’s potential to help narrow the justice gap.

“Mission-driven organizations are increasingly handling complex investigations and litigation with limited resources,” said Joanne Sprague, head of Everlaw for Good. “Expanding access to powerful, easy-to-use technology helps level the playing field so these teams can uncover critical evidence, take on more complex matters, and yield stronger results for the communities they serve.”


LawNext on Location: Visiting Everlaw’s Headquarters For A Conversation with AJ Shankar, Founder and CEO — from lawnext.com by Bob Ambrogi

The bulk of our conversation focuses on generative AI, and how Everlaw has approached it differently than much of the market. Rather than bolting on a chatbot, AJ says, Everlaw embedded AI deliberately throughout the platform — document summarization, coding suggestions, deposition analysis, fact extraction — always grounding responses in the actual documents at hand and citing sources so users can verify the work. The December launch of Deep Dive, which lets litigators pose a question and get a synthesized, cited answer drawn from an entire document corpus in about a minute, is the feature AJ calls a “new era” for discovery — one he genuinely believes represents a categorical shift.

 

The Future of College in an AI World — from linkedin.com by Jeff Selingo
In today’s issue: The tension over AI in higher ed; application inflation continues and testing is back; what’s the future of the original classroom technology, the learning management system. 


Hundreds of higher ed and industry leaders gathered Tuesday for a summit on AI and the future of learning at the University of Michigan.

Conversations like the one we had at Michigan this week are necessary, but the action rarely matches the ambition.

  • We say the humanities are the operating system of an AI world, yet students and parents don’t believe it. They’re voting with their feet toward STEM, business, and narrowly tailored majors they believe will lead to a job.
  • Meanwhile, colleges are quietly eliminating the very humanities degrees the panelists were championing, employers are cutting the entry rungs off the career ladder for new graduates, and as Podium Education co-founder Christopher Parrish reminded us yesterday, there’s a yawning gap between demand for experience and the internships that actually exist.


AI Music Generators: Teaching With These Catchy AI Tools — from techlearning.com by Erik Ofgang
AI music generators are getting better and better, and there are more applications in the classroom as a result.

Are All AI Music Generators More Or Less The Same?
No. After experimenting with several free ones, I found a wide range of quality from the same prompts.

Gemini is the only one I’d currently recommend. It’s user-friendly but limited and only creates 30-second clips. Other music generators could potentially outperform Gemini with prompt adjustments. The ones I tried did better with the instrumentals but struggled more with the lyrics, and that kind of defeated the purpose of the tool for me.


ChatDOC: Teaching With The AI Summarizing Tool — from techlearning.com by Erik Ofgang
ChatDOC lets users turn any PDF into an AI chatbot that can summarize the text, answer questions, and generate quizzes.

What Is ChatDOC?
ChatDOC is an AI tool designed to help users interact with PDFs of various types, be it research papers, short stories, or chapters from larger works. Users upload a PDF and then can “chat” with that document; that is, they converse with a chatbot that bases its answers on the uploaded text.

ChatDOC can perform tasks such as providing a short summary, searching for specific terms, explaining the overall theme of a work of literature, or unpacking the science in a research paper.

Other similar tools are out there, but ChatDOC is definitely one of the better PDF readers I’ve used. Its free version is quick and easy to use, and it delivers on its promise of providing an AI that can discuss a given document with users and even quiz them on it.


From AI access to workforce readiness — from chieflearningofficer.com by Johnny Hamilton, Amy Stratbucker, & Brad Bigelow
Is your workforce using the right tool with an outdated mindset and playbook? Why old playbooks fall short — and what learning leaders must do next.

The leadership opportunity
Organizations do not need to predict every future AI capability. They need systems that allow people to explore with curiosity, practice safely, reflect deeply and adapt continuously — starting with what they already have and extending as capabilities evolve.

For CLOs, this is a moment to lead from the center of change — designing workforce readiness that keeps pace with accelerating technology while making work more rewarding for employees and more valuable for the organization. That is how AI moves from the promise of transformation to demonstrated readiness and, ultimately, from promise to performance.


Addendums on 3/19/26:
How to Build Practice-Based Learning Activities with AI — from drphilippahardman.substack.com by Dr Philippa Hardman
Four evidence-based methods for designing, building & deploying active learning activities with your favourite LLM

Most L&D teams are using AI to make content faster. The real opportunity is using it as a practice engine.

The Synthesia 2026 AI in L&D Report found that the fastest-growing areas of planned AI adoption aren’t in content creation — they’re in assessments and simulations (36%), adaptive pathways (33%), and AI tutors (29%). In other words: L&D teams are starting to realise that the most powerful use of AI isn’t producing learning materials. It’s creating environments where learners actually practise.

And you can build these right now — no dev team, no custom platform, no code. Each method below includes a prompt you can paste into your preferred AI tool to generate a working interactive prototype: a self-contained practice activity with a briefing screen, a live AI interaction, and a debrief — all running in the browser, ready to share with stakeholders or deploy to learners.

OpenAI Adds Interactive Math and Science Learning Tools to ChatGPT — from campustechnology.com by Rhea Kelly

Key Takeaways

  • ChatGPT adds interactive learning tools: OpenAI introduced interactive math and science visualizations that allow users to explore formulas, variables, and relationships in real time.
  • The tool currently covers over 70 core math and science topics and is aimed initially at high school and college-level learners.
  • Users can adjust variables, manipulate formulas, and immediately see how changes affect graphs and outcomes.
 

Teach Smarter with AI — from wondertools.substack.com by Jeremy Caplan and Lance Eaton
10 tested strategies from two educators who actually use them

I recently talked with Lance Eaton, Senior Associate Director of AI and Teaching & Learning at Northeastern University and writer of AI + Education = Simplified. We traded ideas about what’s actually working. We came up with 10 specific, practical ways anyone who teaches, coaches, or leads can put AI to work.

Watch the full conversation above, or read highlights below.


Beyond Audio Summaries: How to Use NotebookLM to *Actually* Design Better Learning — from drphilippahardman.substack.com by Dr. Philippa Hardman
Five methods to maximise the value of NotebookLM’s features

In practice, four things make NotebookLM different for learning designers:

  • Answers grounded in your sources (with citations)
  • Source toggling
  • Multi-format studio & multi-source summaries
  • Persistent workspace


5 Evidence-Based Methods NotebookLM Operationalises…


Shadow AI Isn’t a Threat: It’s a Signal — from campustechnology.com by Damien Eversmann
Unofficial AI use on campus reveals more about institutional gaps than misbehavior.

Key Takeaways

  • Shadow AI is widespread in higher education: Faculty, researchers, students, and staff are using AI tools outside official IT channels, including consumer platforms and public cloud services that may involve sensitive data.
  • Unauthorized AI use creates data, compliance, and cost risks: Consumer AI tools may store or reuse user data, while uncoordinated adoption drives redundant licenses, unpredictable cloud costs, and weaker security oversight.
  • Institutions are shifting from restriction to enablement: Some campuses are making approved paths easier by offering ready-to-use research environments, campus-managed AI tools, clear guidance on data and vendors, and streamlined approval processes.

How L&D Can Lead in the Age of AI Even If Your Company’s Not Ready — from learningguild.com

How to lead even when your company doesn’t allow AI
Even if your corporation isn’t ready for AI, you can still research tools personally to stay ahead of the curve, so when organizational restrictions lift, you are ready to use AI for learning right away. Here are some tools you can test at home if they’re restricted in your workplace:

  • Content generation – Start testing text-based tools to get a taste of how AI can accelerate content creation. Then take it to the next level by exploring tools that generate voices, music, and sound effects.
  • AI coaching tools – Have AI pose as a co-worker or customer to get a taste of what it’s like to use it as a conversation coach. Next, use the voice and video capabilities in an app like ChatGPT to explore how AI can coach someone through tasks.
  • In-the-flow learning assistants – Test turning documents into a conversational avatar and interacting with it to see how it feels. Then think about how the technology could potentially transform static content into dynamic learning experiences for employees.
  • Vibe-coded simulations – Experiment with this technology by creating a simple, fun game. Afterwards, brainstorm some ideas on how it could quickly create simulations for your learners in the future.

The Higher Ed Playbook for AI Affordability — from campustechnology.com by Jason Dunn-Potter

Key Takeaways

  • Affordable AI adoption focuses on evolving existing systems: Universities are embedding AI into current devices, workflows, and legacy systems rather than rebuilding infrastructure or investing in new data centers.
  • Edge AI reduces costs and improves access: Running AI models on local devices or networks lowers cloud processing costs, enhances security, and supports learning use cases such as tutoring, translation, transcription, and adaptive learning.
  • Enterprise integration and governance drive impact: Institutions are applying AI across admissions, advising, facilities, and research workflows, supported by shared resource hubs, data governance, AI literacy, and outcome-driven implementation.
 
 

“But what’s happening right now is exponential.” — from linkedin.com by Josh Cavalier

Excerpt:

I need to be honest with you. I’ve been running experiments this week with Claude Code and Opus 4.6, and we have reached the precipice in the collapse of time required to produce high-quality text-based ID outputs.

This includes performance consulting reports, learning needs analyses, action mapping, scripts, storyboards, facilitator guides, rubrics, and technical specs.

I just mapped the entire performance consulting process into a multimodal AI integration architecture (diagram image). Every phase. Entry and contracting. Performance analysis. Cause analysis. Solution design. Implementation. Evaluation. Thirty files. System specifications for each. The next step is to vet out each “skill” with an expert performance consultant.

Then I attempted a learning output: an 8-module course built with a cognitive scaffold that moves beyond content delivery to facilitate deliberate practice, meaning-making, and guided reflection within the learner’s own context.

The result:



AI and human-centered learning — from linkedin.com by Patrick Blessinger

Democratizing opportunities

AI adaptive learning can adapt learning in real-time. These tools have the potential to provide a more personalized learning experience, but only if used properly.

The California State University system uses ChatGPT Edu (OpenAI, 2025). Students use it for AI-assisted tutoring, study aids, and writing support. These resources provide 24/7 availability of subject-matter expertise tailored to students’ learning needs. It is not a replacement for professors. Rather, it extends the reach of mentorship by reducing access barriers.

However, we must proceed with intellectual humility and ethical responsibility. Even though AI can customize messages, it cannot replace the encouragement of a teacher or professor, or the social and emotional aspects of learning. It’s at the intersection of humanistic values and knowledge development that education must find its balance.

 

Something Big Is Happening — from shumer.dev by Matt Shumer; see the item below from the BIG Questions Institute, where I found this article

I’ve spent six years building an AI startup and investing in the space. I live in this world. And I’m writing this for the people in my life who don’t… my family, my friends, the people I care about who keep asking me “so what’s the deal with AI?” and getting an answer that doesn’t do justice to what’s actually happening. I keep giving them the polite version. The cocktail-party version. Because the honest version sounds like I’ve lost my mind. And for a while, I told myself that was a good enough reason to keep what’s truly happening to myself. But the gap between what I’ve been saying and what is actually happening has gotten far too big. The people I care about deserve to hear what is coming, even if it sounds crazy.


They’ve now done it. And they’re moving on to everything else.

The experience that tech workers have had over the past year, of watching AI go from “helpful tool” to “does my job better than I do”, is the experience everyone else is about to have. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in ten years. The people building these systems say one to five years. Some say less. And given what I’ve seen in just the last couple of months, I think “less” is more likely.

The models available today are unrecognizable from what existed even six months ago. The debate about whether AI is “really getting better” or “hitting a wall” — which has been going on for over a year — is over. It’s done. Anyone still making that argument either hasn’t used the current models, has an incentive to downplay what’s happening, or is evaluating based on an experience from 2024 that is no longer relevant. I don’t say that to be dismissive. I say it because the gap between public perception and current reality is now enormous, and that gap is dangerous… because it’s preventing people from preparing.


What “Something Big Is Happening” Means for Schools — from/by the BIG Questions Institute
Matt Shumer’s newsletter post Something Big Is Happening was read over 80 million times in the week after it was published on February 9.

Still, it’s worth reading Shumer’s post. Given the claims and warnings in Something Big Is Happening (and countless other articles), how would you truly, honestly respond to these questions:

  • What will the purpose of school be in 5 years?
  • What are we doing now that we must leave behind right away?
  • What can we leave behind gradually?
  • What does rigor look like in this AI-powered world?
  • Does our strategy look like making adjustments at the margins or are we preparing our students for a fundamental shift?
  • What is our definition of success? How do the implications of AI and jobs (and other important forces, from geopolitical shifts and climate change, to mental health needs and shifting generational values) impact the outcomes we prioritize? What is the story of success we want to pass on to our students and wider community?
 
© 2025 | Daniel Christian