Which Jobs Are Most at Risk From AI? New Anthropic Data Offers Clues. — from builtin.com by Matthew Urwin
Anthropic set out in its latest study to predict how artificial intelligence could impact the labor market. Instead, its findings raise more questions than they answer for tech workers as the U.S. government refuses to regulate the AI industry.

Summary:
In its latest labor market study, Anthropic found that artificial intelligence poses the greatest threat to software jobs, women and younger professionals. As the Trump administration takes a hands-off approach to AI, tech workers may be left to grapple with these findings on their own.


Matthew links to:

Labor market impacts of AI: A new measure and early evidence — from anthropic.com

Key findings

  • We introduce a new measure of AI displacement risk, observed exposure, that combines theoretical LLM capability and real-world usage data, weighting automated (rather than augmentative) and work-related uses more heavily
  • AI is far from reaching its theoretical capability: actual coverage remains a fraction of what’s feasible
  • Occupations with higher observed exposure are projected by the BLS to grow less through 2034
  • Workers in the most exposed professions are more likely to be older, female, more educated, and higher-paid
  • We find no systematic increase in unemployment for highly exposed workers since late 2022, though we find suggestive evidence that hiring of younger workers has slowed in exposed occupations
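The "observed exposure" idea in the first bullet can be sketched as a weighted combination of capability and usage. Everything concrete below (the field names, the weights, the sample numbers) is an illustrative assumption, not Anthropic's actual formula:

```python
# Hypothetical sketch of an "observed exposure"-style score: a blend of
# theoretical capability and real-world usage, with automated and
# work-related uses weighted more heavily than augmentative or personal ones.
# All parameters and weights here are illustrative, not Anthropic's method.

def observed_exposure(capability, usage_share, automated_frac, work_frac,
                      w_auto=2.0, w_work=1.5):
    """Combine theoretical LLM capability with observed usage.

    capability:     fraction of the occupation's tasks an LLM could do (0-1)
    usage_share:    fraction of those tasks people actually use AI for (0-1)
    automated_frac: of that usage, the share that is automation (0-1)
    work_frac:      of that usage, the share that is work-related (0-1)
    """
    # Up-weight automated and work-related usage relative to the rest.
    auto_weight = automated_frac * w_auto + (1 - automated_frac) * 1.0
    work_weight = work_frac * w_work + (1 - work_frac) * 1.0
    return capability * usage_share * auto_weight * work_weight

# A highly capable but lightly used occupation scores lower than one where
# AI is already doing automated, work-related tasks at scale -- mirroring
# the finding that actual coverage lags theoretical capability.
low_usage = observed_exposure(0.8, 0.1, 0.5, 0.5)
high_usage = observed_exposure(0.6, 0.5, 0.9, 0.9)
```

The key design point the bullets imply is that capability alone is not exposure; usage, and the character of that usage, scales it.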

 
 

What the Future of Learning Looks Like in the Era of AI — from the Center for Academic Innovation at the University of Michigan, by Sean Corp

AI & the Future of Learning Summit brings industry, education leaders together to discuss higher education’s opportunity to lead, what students need, and what partnerships are possible

As artificial intelligence rapidly reshapes the nature of work and learning, speakers at the University of Michigan’s AI & the Future of Learning Summit delivered a clear message: higher education must take a leading role in defining what comes next.

One CEO of a leading educational technology company put it like this: “The only bad thing would be universities standing still.”

Universities must embrace their roles as providers of continuous, lifelong learning that evolves alongside technological change. 


This shift is already affecting early-career pathways. Employers are placing greater emphasis on experience, while traditional entry-level roles are becoming less accessible. There is often a gap between what a credential represents and the expectations of employers.

That gap is particularly evident in access to internships. Chris Parrish, co-founder and president of Podium, noted that millions of students compete for a limited number of internships each year, making it increasingly difficult to gain the experience employers demand.

“If you miss out on an internship, you’re twice as likely to be unemployed,” Parrish said. 

 

You Can’t Future-Proof Your Career From AI, But You Can Do This — from builtin.com by Liz Tran
Agility has become the most important skill to cultivate in today’s job market. Here’s how to get started.

Summary: Job seekers panicking about the future should prioritize agility over information consumption. Build it by focusing on 30-day action experiments, reframing resumes around durable skills like problem-solving, and embracing uncertainty through stretch applications and real-world feedback.

The antidote is what I call AQ — the agility quotient — which is your capacity to face change, disappointment and uncertainty without losing your footing. Unlike IQ, which measures what you know, AQ measures how fast you adapt when the rules change. Right now, it’s the most important career asset you have. Here’s how to build it.

What Is Agility Quotient (AQ)?
AQ is a measure of an individual’s capacity to adapt quickly when rules, industries or circumstances change. Unlike IQ, which focuses on existing knowledge, AQ emphasizes the ability to face uncertainty and disappointment without losing one’s footing, prioritizing action and iteration over exhaustive planning.

 

Summary: Accessible AI has killed traditional signals of legitimacy.

Experiments show $20 consumer tools can easily bypass verification. The solution is shifting toward contextual proof that verifies human uniqueness without exposing identity.


After Hours 1: The legal profession’s new value proposition — from jordanfurlong.substack.com by Jordan Furlong
The days of selling legal tasks by the hour are ending. Lawyers’ future value lies in safeguarding clients’ legal journeys by overcoming the most challenging obstacles on the way. Part 1 of 2.

As a result, legal work is dividing into two spheres, the first larger than the second: what Gen AI can satisfactorily address, and what it can’t.

  • Sphere 1: Legal Production. This is all the specialized intellectual work involved in generating legal solutions: researching, issue-spotting, summarizing, synthesizing, drafting, revising, reasoning, and analyzing. This is the bulk of lawyers’ traditional activity and billed hours. In future, it will be done faster, cheaper, and increasingly better with machines — either by clients themselves, or embedded in systems and platforms that reduce the need for lawyer involvement.
  • Sphere 2: Legal Judgment. This is higher-value work defined by the unpredictability, complexity, and impact of its challenges. In this sphere, you’ll find hard-decision advice, guidance under uncertainty, systematic dispute avoidance, strategic counsel, critical advocacy, risk prioritization, and high-stakes accountability. It’s likely (but far from certain) that this work will remain outside the reach of Gen AI. This is the sphere that holds the potential to support a future legal profession.

But not every legal journey is so simple or safe that the client can go it alone. Many times, Point B is more like Point F or Point R: a long and tortuous distance away. Many AI-generated maps will suggest a clear and direct route that bears little resemblance to the messy tangles of reality. On even moderately complex legal journeys, the unwelcome and the unexpected are always lurking. Something arises that was nowhere on the map, and until it gets resolved, the client can’t move any further towards their destination.


Below are some items from Jordan’s article — or by following a rabbit trail from his posting:


AI-Native Firms, Built by Private Equity, Will Strain Legacy Model — from news.bloomberglaw.com by Eric Dodson Greenberg

The emergence of AI-native law firms reveals the limits of a fixed binary that has characterized the legal market over the last year.

The straightest path to AI law firms isn’t innovation within the legacy model, or capital investing around it, but external capital deployed to build competitors to legacy firms. These firms use AI and narrow regulatory openings to create tech-enabled law firms from scratch.

Not acquire them. Not invest around them.

Build them.

This third path is no longer theoretical.

The $3,500 Hour vs. The $500 Contract — from legaltechnologyhub.com by Brandi Pack

While rates at the top continue climbing, the operational foundation of legal work is being rebuilt.

Its pricing reflects that structure. Contract review between three and 50 pages costs $500. Short agreements are $250. Longer contracts are billed per page. Drafting from scratch is offered at a fixed fee. 

There is no running clock.

The premise is straightforward. If generative AI materially reduces the time required for standardized work, the cost base changes. And when the cost base changes, pricing models eventually follow.
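The fee schedule described above can be written as a simple function. The $250 and $500 tiers and the 3–50 page band come from the article; the per-page rate for longer contracts and the drafting fee are placeholders, since the article doesn't state them:

```python
# A minimal sketch of the flat-fee schedule described above.
# PER_PAGE_RATE and DRAFTING_FLAT_FEE are hypothetical placeholders;
# the article gives only the $250/$500 tiers and the 3-50 page band.

PER_PAGE_RATE = 15        # hypothetical per-page rate for long contracts
DRAFTING_FLAT_FEE = 750   # hypothetical fixed fee for drafting from scratch

def review_price(pages: int) -> int:
    """Price a contract review by length, not by hours worked."""
    if pages < 3:              # short agreements
        return 250
    if pages <= 50:            # the standard 3-50 page band
        return 500
    return pages * PER_PAGE_RATE  # longer contracts billed per page

def drafting_price() -> int:
    """Drafting from scratch is a fixed fee, regardless of time spent."""
    return DRAFTING_FLAT_FEE
```

The design choice is the point: every price is a function of the document, never of elapsed time, which is what "no running clock" means in practice.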




From DSC:
This next item is not from Jordan, but may also be useful to some of you out there:

Want to Work at Legora, Harvey or Another Legal AI Startup? — from legallydisrupted.com by Zach Abramowitz
Podcast with a Biglaw Partner Who Now Occupies a Senior Role at Legora

In Episode 45 of Zach Abramowitz’s Legally Disrupted podcast, Kyle and Zach dive into why building tech workflows and writing AI prompts should absolutely be considered billable work. They also explore why AI’s commoditization of the legal “grinders” and “minders” means old-school social skills are about to become your single biggest competitive advantage. Finally, Kyle goes into great detail about exactly how he landed a top role at Legora and how others can do the same (hint: merely dropping your resume into a web portal is not enough).


 

 

The quest to build a better AI tutor — from hechingerreport.org by Jill Barshay
Researchers make progress with an older ed tech idea: personalized practice

One promising idea has less to do with how an AI tutor explains concepts and more to do with what it asks students to practice next.

A team at the University of Pennsylvania, which included some AI skeptics, recently tested this approach in a study of close to 800 Taiwanese high school students learning Python programming. All the students used the same AI tutor, which was designed not to give away answers.

But there was one key difference. Half the students were randomly assigned to a fixed sequence of practice problems, progressing from easy to hard. The other half received a personalized sequence with the AI tutor continuously adjusting the difficulty of each problem based on how the student was performing and interacting with the chatbot.

The idea is based on what educators call the “zone of proximal development.” When problems are too easy, students get bored. When they’re too hard, students get frustrated. The goal is to keep students in a sweet spot: challenged, but not overwhelmed.

The researchers found that students in the personalized group did better on a final exam than students in the fixed problem group. The difference was characterized as the equivalent of 6 to 9 months of additional schooling, an eye-catching claim for an after-school online course that lasted only five months.

To address this, Chung’s team combined a large language model with a separate machine-learning algorithm that analyzes how students interact with the online course platform — how they answer the practice questions, how many times they revise or edit their coding, and the quality of their conversations with the chatbot — and uses that information to decide which problem to serve up next.
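The sequencing logic described above can be sketched as a simple loop: estimate the student's current ability from interaction signals, then pick the problem whose predicted success rate sits in the "challenged but not overwhelmed" band. The signals, the logistic model, and the thresholds below are illustrative assumptions, not the Penn team's actual system:

```python
# A hedged sketch of zone-of-proximal-development sequencing: keep each
# student where problems are hard enough to challenge but not frustrate.
# The model, signals, and thresholds are illustrative, not the study's.
import math

def success_probability(ability: float, difficulty: float) -> float:
    """Logistic (IRT-style) chance the student solves the problem."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def update_ability(ability: float, solved: bool, revisions: int,
                   step: float = 0.3) -> float:
    """Nudge the ability estimate using interaction signals
    (whether the answer was right, how many times the code was revised)."""
    signal = (1.0 if solved else -1.0) - 0.1 * min(revisions, 5)
    return ability + step * signal

def next_problem(ability: float, problems: list, low=0.5, high=0.8):
    """Prefer problems whose predicted success rate falls in the sweet
    spot; fall back to the closest problem if none qualifies."""
    in_zone = [p for p in problems
               if low <= success_probability(ability, p["difficulty"]) <= high]
    pool = in_zone or problems
    target = (low + high) / 2
    return min(pool, key=lambda p: abs(
        success_probability(ability, p["difficulty"]) - target))
```

In this sketch, a student who keeps solving problems sees the ability estimate climb, so the selector drifts toward harder problems; a struggling student drifts back toward easier ones, which is the adaptive behavior the study describes.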

 

Why Educators Must Become AI Literate, And How to Start  — from edmentum.com by Priten Soundar-Shah

Much of our focus these past few years has gone toward helping students learn how to use AI responsibly, especially to combat cheating and plagiarism, with consideration also given to productive learning, critical thinking, and online safety. But we are still behind on building fundamental literacy for teachers. Recent data supports this literacy gap: Microsoft Education, for example, found that 80% of teachers say they are using AI, but 60% have received little or no training. We cannot continue to expect teachers to build student AI literacy without defining what success looks like for educator literacy, and there we’re falling short.

In some instances, AI literacy in the classroom is being defined as the ability to use chat tools to produce some sort of outcome. By that standard, we’re doing much better than we were three years ago. Students and teachers are increasingly turning to AI tools to produce study aids, outlines, drafts, and other content. And, some schools do provide training that is often concentrated on a particular vendor’s tool and how to use it effectively in the classroom.

However, we are leaving out the training that is necessary to help educators learn how to decide when to use or not use the technology and what the implications of that are. For example, I’ve spoken to teachers who have access to a variety of AI tools, have received training on how to use them, but still don’t incorporate them into their workflow, because they don’t know if it’s “right.”

 

AI and the Law: What Educators Need to Know About Responsible Use in a Rapidly Changing Landscape — from rdene915.com by Dr. Rachelle Dené Poth, JD

As both an attorney and educator who has spent more than eight years researching, teaching, presenting, and writing about AI, I have worked with schools across K–12 and higher education that are navigating these exact questions. The legal implications of AI are not barriers to innovation, but I consider them to serve as guardrails that assist schools with adopting technology responsibly. The key is protecting students, educators, and institutions and staying informed. Understanding the legal landscape and any potential legal implications as a result of the use of AI in classrooms helps schools move forward with confidence rather than hesitation.

Sections of Rachelle’s posting include:

  • Why AI and the Law Matter in Education
  • Key Laws That Shape AI Use in Schools
  • Data Privacy and Vendor Responsibility
  • Transparency Builds Trust With Students and Families
  • Accessibility, Equity, and Emerging Legal Considerations
  • Teaching Digital Citizenship With AI Literacy
  • Supporting Schools and Organizations Through AI and Legal Guidance
  • Moving Forward With Confidence
 

Google expands Search Live globally with voice and camera AI — from digitaltrends.com by Varun Mirchandani
The feature is now available in 200+ countries with multilingual support

Think of it as Google Search… but you talk to it. Search Live lets users ask questions using voice or even their phone’s camera, both on Android and iOS, via the Google App, and get spoken responses along with relevant web links.

This is a pretty big shift. Google isn’t just improving search; it’s slowly replacing the whole “type and scroll” experience. With Search Live, users can talk, ask follow-ups, and interact naturally, making it feel more like a conversation than a query. It’s basically ChatGPT-style interaction, but baked right into Google Search.


 

Legal AI Access at 83%, But Trust Issues Remain — from artificiallawyer.com

A new survey of over 200 in-house and law firm leaders provides solid evidence that, while AI tools are now ‘standard’ across our sector, trust in AI outputs fundamentally drives usage, along with ROI – and vice versa.

The data, from ALSP Factor, shows that 83% had ‘broad AI access’, up from 61% in 2025 – in itself a very positive development, telling us legal AI is becoming ubiquitous for commercial lawyers, with around 54% using such tools ‘often’.

 

From DSC:
I have been proposing that the AI-based learning platform of the future will be constantly doing this — every single day. It will know what the in-demand skills are — at any given moment in time. It will then be able to direct you to resources that will help you gain those skills. Though in my vision, the system is querying actual/open job descriptions, not analyzing learning data from enterprise learners. Perhaps I should add that to the vision.


Coursera’s Job Skills Report 2026: Top skills for your students — from coursera.org

The Job Skills Report 2026 analyzes learning data from more than 6 million enterprise learners to identify the future job skills organizations need most. It’s designed for HR and L&D leaders; data, IT, and software & product development leaders; higher education administrators; and government agencies seeking actionable insights on workforce skills trends and AI-driven transformation.

Drawing on data from 6 million enterprise learners across nearly 7,000 organizations, the Job Skills Report 2026 guides you through the skills reshaping the global economy. This year’s analysis spans Data, IT, and Software & Product Development—and the Generative AI skills becoming essential for every role.

 
 

From DSC:
The types of postings/articles (such as the one below) make me ask: are we not shooting ourselves in the foot with AI when it comes to recent college graduates? If the bottom rungs continue to disappear, internships and apprenticeships can only go so far. There aren’t enough of them — especially valuable ones. So as this article points out, there will be threats to the long-term health of our talent pipelines unless we take steps to thwart those impacts — and fairly soon.

To me…vocational training and jobs are looking better all the time — i.e., plumbers, carpenters, electricians, mechanics, and more.


Can New Graduates Compete With AI? — from builtin.com by Richard Johnson
The increasing adoption of AI automation is compressing early-career jobs. How should new graduates get a foothold in the economy now?

Summary: AI is hollowing out entry-level roles by automating routine tasks, eliminating a rung on the career ladder. New graduates face intense competition and a rising skill floor. While firms gain short-term productivity, they risk a long-term talent shortage by eliminating junior training grounds.

Conversations about AI have covered all the ground: hype, fear and slop. But while some roll their eyes at yet another automation headline, soon-to-be graduates are watching the labor market with a very different level of urgency. They’re entering a world where the old paradox of needing experience to get experience is colliding with a new reality: AI is absorbing the standardized, routine tasks that once defined entry-level work. The result isn’t just a shift in job descriptions or skill requirements, but rather a structural reshaping of the career pipeline.

Entry-level workers face an outsized disruption to their long-term career trajectories. They have the least buffer to adapt, given their lack of job market experience and the heightened financial pressure to secure a job quickly, with student-debt repayment looming for recent graduates.

Momentum early in one’s career matters, and the first job on a resume shapes future compensation bands and opportunities. It also serves as a signal of perceived specialization or, at minimum, interest. Losing that foothold has compounding effects on one’s career trajectory.


Also relevant/see:

New Anthropic Institute to Study Risks and Economic Effects of Advanced AI — from campustechnology.com by John K. Waters

Key Takeaways

  • Anthropic has launched the Anthropic Institute, a new research effort focused on the biggest societal challenges posed by more powerful AI systems.
  • The institute will study how advanced AI could affect the economy, the legal system, public safety, and broader social outcomes.
  • Anthropic co-founder Jack Clark will lead the institute in a new role as the company’s head of public benefit.
  • The new unit brings together Anthropic’s existing red-teaming, societal impacts, and economic research work, while adding new hires and new research areas.
 


The AI ‘hivemind’: Why so many student essays sound alike — from hechingerreport.org by Jill Barshay
A study of more than 70 large language models found similar answers to brainstorming and creative writing prompts

The answers were frequently indistinguishable across different models by different companies that have different architectures and use different training data. The metaphors, imagery, word choices, sentence structures — even punctuation — often converged. Jiang’s team called this phenomenon “inter-model homogeneity” and quantified the overlaps and similarities. To drive the point home, Jiang titled her paper the “Artificial Hivemind.” The study won a best paper award at the annual Conference on Neural Information Processing Systems (NeurIPS) in December 2025, one of the premier gatherings for AI research.
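As a toy illustration of how such overlap might be quantified, one can score the pairwise similarity of different models' answers to the same prompt. The word-set Jaccard measure below is a minimal stand-in, not the metric Jiang's team actually used, and the sample "answers" are invented:

```python
# Toy illustration of quantifying "inter-model homogeneity": how similar
# are different models' answers to the same creative prompt? Jaccard
# overlap of word sets is a crude stand-in for the paper's real measures.
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two texts, from 0 (disjoint) to 1 (same)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def mean_pairwise_similarity(answers: list) -> float:
    """Average similarity across every pair of model answers."""
    pairs = list(combinations(answers, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Invented answers from three hypothetical models: near-identical phrasing
# yields a high homogeneity score.
answers = [
    "time is a river flowing to the sea",
    "time is a river flowing toward the sea",
    "time is a winding river flowing to the sea",
]
```

With a measure like this in hand, one can compare the average within-prompt similarity across many prompts and models, which is the spirit of the study's quantification.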


AI Has No Moral Compass. Do You? — from michelleweise.substack.com by Michelle Weise & Dana Walsh
Why the Age of AI Demands We Take Character Formation Seriously

Here’s something to chew on:

Anthropic, the company behind Claude — a chatbot used by 30 million users per month — has exactly one person (that we know of) working on AI ethics. One. A young Scottish philosopher is doing the vital work of training a large language model to discern right from wrong.

I don’t say this to shame Anthropic. In fact, Anthropic appears to be the only company (that we know of) being explicit about the moral foundations and reasoning of its chatbot. Hundreds of millions of users worldwide are leveraging tools from other LLM providers that do not appear to have an explicit moral compass being cultivated from within.

I raise this because this is yet another example of where we are: extraordinary technical power advancing without an equally strong moral infrastructure to support it.

Why do we keep producing people who are skilled but not wise?

 
 
© 2025 | Daniel Christian