Which Jobs Are Most at Risk From AI? New Anthropic Data Offers Clues. — from builtin.com by Matthew Urwin
Anthropic set out in its latest study to predict how artificial intelligence could impact the labor market. Instead, its findings raise more questions than answers for tech workers as the U.S. government refuses to regulate the AI industry.

Summary:
In its latest labor market study, Anthropic found that artificial intelligence poses the greatest threat to software jobs, women and younger professionals. As the Trump administration takes a hands-off approach to AI, tech workers may be left to grapple with these findings on their own.


Matthew links to:

Labor market impacts of AI: A new measure and early evidence — from anthropic.com

Key findings

  • We introduce a new measure of AI displacement risk, observed exposure, that combines theoretical LLM capability and real-world usage data, weighting automated (rather than augmentative) and work-related uses more heavily
  • AI is far from reaching its theoretical capability: actual coverage remains a fraction of what’s feasible
  • Occupations with higher observed exposure are projected by the BLS to grow less through 2034
  • Workers in the most exposed professions are more likely to be older, female, more educated, and higher-paid
  • We find no systematic increase in unemployment for highly exposed workers since late 2022, though we find suggestive evidence that hiring of younger workers has slowed in exposed occupations
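The excerpt doesn't spell out the formula behind "observed exposure," but the weighting idea in the first bullet can be sketched in code. The snippet below is a purely hypothetical illustration, not Anthropic's actual measure: the function name, the weight values, and the usage-record format are all invented for this example.

```python
# Hypothetical sketch of an "observed exposure"-style score.
# NOT Anthropic's published formula: the weights and record format
# below are invented to illustrate weighting automated and
# work-related uses more heavily than augmentative/personal ones.

def observed_exposure(capability_share, usage_records,
                      w_automated=2.0, w_work=1.5):
    """Combine theoretical capability with observed usage.

    capability_share: fraction of an occupation's tasks an LLM could
        theoretically perform (0 to 1).
    usage_records: list of dicts, each with a 'share' of observed
        usage plus 'automated' and 'work_related' flags.
    """
    usage_score = 0.0
    for r in usage_records:
        weight = 1.0
        if r["automated"]:      # automated (not augmentative) uses count more
            weight *= w_automated
        if r["work_related"]:   # work-related uses count more than personal
            weight *= w_work
        usage_score += weight * r["share"]
    return capability_share * usage_score

# Two occupations with identical theoretical capability: the one whose
# observed usage is automated and work-related scores as more exposed.
mostly_automated = [
    {"share": 0.5, "automated": True,  "work_related": True},
    {"share": 0.5, "automated": False, "work_related": False},
]
mostly_augmentative = [
    {"share": 0.5, "automated": False, "work_related": True},
    {"share": 0.5, "automated": False, "work_related": False},
]
print(observed_exposure(0.8, mostly_automated))      # higher exposure
print(observed_exposure(0.8, mostly_augmentative))   # lower exposure
```

The point of the sketch is only the relative ordering: given equal theoretical capability, an occupation whose observed AI usage is automated and work-related comes out as more exposed than one whose usage is mostly augmentative.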

 

What the Future of Learning Looks Like in the Era of AI — from the Center for Academic Innovation at the University of Michigan, by Sean Corp

AI & the Future of Learning Summit brings industry, education leaders together to discuss higher education’s opportunity to lead, what students need, and what partnerships are possible

As artificial intelligence rapidly reshapes the nature of work and learning, speakers at the University of Michigan’s AI & the Future of Learning Summit delivered a clear message: higher education must take a leading role in defining what comes next.

One CEO of a leading educational technology company put it like this: “The only bad thing would be universities standing still.”

Universities must embrace their roles as providers of continuous, lifelong learning that evolves alongside technological change. 


This shift is already affecting early-career pathways. Employers are placing greater emphasis on experience, while traditional entry-level roles are becoming less accessible. There is often a gap between what a credential represents and the expectations of employers.

That gap is particularly evident in access to internships. Chris Parrish, co-founder and president of Podium Education, noted that millions of students compete for a limited number of internships each year, making it increasingly difficult to gain the experience employers demand.

“If you miss out on an internship, you’re twice as likely to be unemployed,” Parrish said. 

 

The Course Is Dying as the Unit of Learning — from drphilippahardman.substack.com by Dr Philippa Hardman
Here’s why, and what’s replacing it

What the Bleeding Edge Looks Like in Practice
So what does “the new stack” actually look like when organisations lean into this? Here are four real patterns already in play.

  • Engineering: from engine courses to in-workflow AI coaching.
  • Product development: from courses to craft-specific agents.
  • Compliance: from annual course to nudge systems.
  • Enablement: systems, not catalogues.

 

The Campus Crisis No One’s Talking About — from linkedin.com by Jeff Selingo

Sports Betting Is Now a Campus-Wide Habit

The headline number: About 60% of 18-to-22-year-olds are engaging in sports betting, a figure that climbs to two-thirds among college students specifically, according to an NCAA-commissioned study.

  • “It’s sort of a learned behavior for them at a very young age,” Clint Hangebrauck, the NCAA’s managing director of enterprise risk management, told us on the latest episode of Future U. “I do think this could be the next big public health crisis that we’re facing as a country and particularly within higher ed.”
  • College-age individuals are 3x more likely to develop problematic gambling behaviors than the general population. Gambling often co-exists with other behaviors now prevalent in colleges, such as sleeplessness, binge drinking, drug use, anxiety and depression.

Gambling among college students isn’t confined to athletes. Rather, it’s embedded across campus life, often most visibly among athletes in Division III, where oversight is lighter. Gambling often coexists with, and can exacerbate, other student challenges, from mental health struggles to substance use. If this is the next public health issue on campus, it’s arriving without the same level of attention.


From DSC:
I don’t mean to be self-righteous here. But shame on the older adults who are promoting gambling in any fashion — marketing, advertising, sales, and/or whatever. It’s a cancer in our society, and it’s impacting our youth in a big way (and older folks as well). I’m not a gambler, but I’m well acquainted with weakness. And the Bible confirms that we all are acquainted with weakness:

Isaiah 53:6

 We all, like sheep, have gone astray,
    each of us has turned to our own way;
and the Lord has laid on him
    the iniquity of us all.

The adults out there know it. We are well acquainted with our sins and shortcomings.

Parents want the best for their kids. They don’t want dangerous habits forming in their children: “coping skills” that are badly broken and can lead to devastating consequences. And parents don’t want those habits formed at colleges and universities across the nation.

I wish those involved with promoting gambling could be at the dinner tables, or in the bedrooms, or in the living rooms, or in the vehicles out there when a spouse finds out that the other spouse (or significant other) has gambled away a significant amount of the couple’s savings. They no longer have rainy-day funds. They can no longer pay their bills. They no longer have the college funds for their other kids. Emotions erupt, fights begin. Relationships are threatened — and divorces sometimes occur because of this issue/habit. 

So if you are involved with promoting gambling, consider reading this article from Jeff Selingo…then go take a long look in the mirror. 

 

The Most Obvious Fix in Education — from michelleweise.substack.com by Michelle Weise
The No-Brainer Nobody’s Doing 

We know what better learning looks like. We have known for a while.

Real problems. Real roles. Built-in conflict. Conditions that simulate the messiness of actual work. Reflection that asks not just “What did you do?” but “Who are you becoming?” These are not radical ideas. They are not untested theories. The research is clear, employers are asking for exactly this, and students consistently report that the closest they got to real work was the most valuable part of their education.

So why aren’t universities doing more of it?

That is the question worth sitting with — because the gap between what we know and what we do is not a knowledge problem. It is a design problem, an incentive problem, and if we’re being candid, a courage problem.

Because in the meantime, learners are paying the price. They graduate credentialed but untested. They enter labor markets that want proof of performance and experience, not transcripts. They lack the networks, the exposure, and the scar tissue that comes from navigating real work.


Also relevant, see:

The Apprenticeship (R)Evolution — from insidehighered.com by Sara Weissman and Colleen Flaherty
Once synonymous with hard hats and tool belts, apprenticeships are branching into health care, artificial intelligence, business services, advanced manufacturing and more.

Such programs also challenge stereotypes about apprenticeships—namely that they’re only in construction, an earn-and-learn catchall for traditionally apprenticeable occupations such as bricklayer, plumber, carpenter and electrician. In integrating robotics, automation, machining and logistics, the manufacturing development program is a bridge to understanding how apprenticeships are evolving to support some of the nation’s fastest-growing industries. These include advanced manufacturing, but also health care, information technology and other business services.

 

Building a Thriving Organizational Culture: Strategies for Success — from learningguild.com by Genevieve Caplette

Characteristics of Strong Culture
Although each organization’s culture is unique, strong cultures share several common traits. They communicate openly, maintain trust across all levels, and reinforce their values through daily actions rather than slogans. Recognition is frequent and meaningful. Collaboration is encouraged over competition, and employees feel psychologically safe expressing ideas or concerns. These cultures evolve as the organization grows, ensuring alignment between stated values and lived behavior.

How to Strengthen Culture
A thriving environment is built through everyday habits: transparent communication, active listening, constructive feedback, and ensuring employees have the resources to grow. Embedding values into hiring, onboarding, recognition, and decision-making reinforces culture at every level. Sustaining culture requires ongoing attention—listening regularly, adjusting to evolving needs, and ensuring leaders continue to model the behaviors the organization expects.

 

Meta, YouTube found negligent in landmark social media addiction trial — by Ian Duncan
A Los Angeles jury awarded $3 million in compensation to a young woman who alleged she had become addicted to the platforms as a child.

A Los Angeles jury found social media giant Meta and video platform YouTube negligent in a landmark trial, awarding $3 million in compensation to a young woman who alleged she had become addicted to the companies’ platforms as a child.

The verdict came at the end of a month-long trial that featured testimony by Facebook founder Mark Zuckerberg and a day after a jury in New Mexico ordered Meta to pay $375 million in penalties for endangering children. The twin verdicts are signs that legal protections which for decades made tech companies seem almost impervious are beginning to crack, as lawyers accuse the platforms of putting addictive or otherwise harmful features into their platforms.

With the armor of Silicon Valley companies fractured, they will now have to size up their appetite for future courtroom battles. There are thousands more lawsuits waiting to be heard, with young internet users, parents, school districts and state attorneys general all seeking to hold the industry accountable.

 

 
 

From DSC:
The types of postings/articles (such as the one below) make me ask: Are we not shooting ourselves in the foot with AI and recent college graduates? If the bottom rungs continue to disappear, internships and apprenticeships can only go so far. There aren’t enough of them — especially valuable ones. So as this article points out, there will be threats to the long-term health of our talent pipelines unless we take steps to thwart those impacts — and fairly soon.

To me…vocational training and jobs are looking better all the time — i.e., plumbers, carpenters, electricians, mechanics, and more.


Can New Graduates Compete With AI? — from builtin.com by Richard Johnson
The increasing adoption of AI automation is compressing early-career jobs. How should new graduates get a foothold in the economy now?

Summary: AI is hollowing out entry-level roles by automating routine tasks, eliminating a rung on the career ladder. New graduates face intense competition and a rising skill floor. While firms gain short-term productivity, they risk a long-term talent shortage by eliminating junior training grounds.

Conversations about AI have run the gamut: hype, fear and slop. But while some roll their eyes at yet another automation headline, soon-to-be graduates are watching the labor market with a very different level of urgency. They’re entering a world where the old paradox of needing experience to get experience is colliding with a new reality: AI is absorbing the standardized, routine tasks that once defined entry-level work. The result isn’t just a shift in job descriptions or skill requirements, but rather a structural reshaping of the career pipeline.

Entry-level workers face an outsized disruption to their long-term career trajectories. They have the least buffer to adapt: they lack relevant job market experience and face heightened financial pressure to secure a job quickly as student-debt repayment looms.

Momentum early in one’s career matters, and the first job on a resume shapes future compensation bands and opportunities. It also serves as a signal for perceived specialization or, at minimum, interest. Losing that foothold has compounding effects on one’s career trajectory.


Also relevant/see:

New Anthropic Institute to Study Risks and Economic Effects of Advanced AI — from campustechnology.com by John K. Waters

Key Takeaways

  • Anthropic has launched the Anthropic Institute, a new research effort focused on the biggest societal challenges posed by more powerful AI systems.
  • The institute will study how advanced AI could affect the economy, the legal system, public safety, and broader social outcomes.
  • Anthropic co-founder Jack Clark will lead the institute in a new role as the company’s head of public benefit.
  • The new unit brings together Anthropic’s existing red-teaming, societal impacts, and economic research work, while adding new hires and new research areas.
 


The AI ‘hivemind’: Why so many student essays sound alike — from hechingerreport.org by Jill Barshay
A study of more than 70 large language models found similar answers to brainstorming and creative writing prompts

The answers were frequently indistinguishable across different models by different companies that have different architectures and use different training data. The metaphors, imagery, word choices, sentence structures — even punctuation — often converged. Jiang’s team called this phenomenon “inter-model homogeneity” and quantified the overlaps and similarities. To drive the point home, Jiang titled her paper the “Artificial Hivemind.” The study won a best paper award at the annual Conference on Neural Information Processing Systems (NeurIPS) in December 2025, one of the premier gatherings for AI research.


AI Has No Moral Compass. Do You? — from michelleweise.substack.com by Michelle Weise & Dana Walsh
Why the Age of AI Demands We Take Character Formation Seriously

Here’s something to chew on:

Anthropic, the company behind Claude — a chatbot used by 30 million users per month — has exactly one person (whom we know of) working on AI ethics. One. A young Scottish philosopher is doing the vital work of training a large language model to discern right from wrong.

I don’t say this to shame Anthropic. In fact, Anthropic appears to be the only company (that we know of) being explicit about the moral foundations and reasoning of its chatbot. Hundreds of millions of users worldwide are leveraging tools from other LLMs that do not appear to have an explicit moral compass being cultivated from within.

I raise this because this is yet another example of where we are: extraordinary technical power advancing without an equally strong moral infrastructure to support it.

Why do we keep producing people who are skilled but not wise?

 
 

Law Firm AI Adoption: So Many Choices — from abovethelaw.com by Stephen Embry
Firms need to recognize reality, define what their legal professionals need, and then determine how to adopt and govern the use of AI tools.

It’s tough to be a law firm managing partner in the age of AI. So many choices, so little time. It’s like the proverbial kid in the candy store who has so many choices that they either can’t pick out anything or reach for too much. We see evidence of the first option in 8am’s recent outstanding Legal Industry Report, authored by Niki Black.

8am’s Legal Industry Report
One thing that stood out in the report was the discrepancy between use of AI by individual legal professionals and what firms are doing when it comes to AI adoption and guidance.  Almost 75% of those who responded said they were using general purpose AI tools like ChatGPT and Claude for work purposes. That’s pretty significant.


Legalweek: It’s time to re-engineer how legal work is delivered — from legaltechnology.com by Caroline Hill

AI for good
While focusing on the risks of AI going wrong, it is only fair to mention the conversations I had around using AI for good.  Two in particular stand out.

The first is the news from Everlaw that its Everlaw for Good Program has, over the past year, supported more than 675 active cases across 235 organisations, and expanded its support to a growing network of non-profit organisations.

The program extends Everlaw’s technology to organisations working to advance access to justice. In a recent survey by Everlaw, 88% of legal aid professionals said they are optimistic about AI’s potential to help narrow the justice gap.

“Mission-driven organizations are increasingly handling complex investigations and litigation with limited resources,” said Joanne Sprague, head of Everlaw for Good. “Expanding access to powerful, easy-to-use technology helps level the playing field so these teams can uncover critical evidence, take on more complex matters, and yield stronger results for the communities they serve.”


LawNext on Location: Visiting Everlaw’s Headquarters For A Conversation with AJ Shankar, Founder and CEO — from lawnext.com by Bob Ambrogi

The bulk of our conversation focuses on generative AI, and how Everlaw has approached it differently than much of the market. Rather than bolting on a chatbot, AJ says, Everlaw embedded AI deliberately throughout the platform — document summarization, coding suggestions, deposition analysis, fact extraction — always grounding responses in the actual documents at hand and citing sources so users can verify the work. The December launch of Deep Dive, which lets litigators pose a question and get a synthesized, cited answer drawn from an entire document corpus in about a minute, is the feature AJ calls a “new era” for discovery — one he genuinely believes represents a categorical shift.

 

The Future of College in an AI World — from linkedin.com by Jeff Selingo
In today’s issue: The tension over AI in higher ed; application inflation continues and testing is back; what’s the future of the original classroom technology, the learning management system. 


Hundreds of higher ed and industry leaders gathered Tuesday for a summit on AI and the future of learning at the University of Michigan.

Conversations like the one we had at Michigan this week are necessary, but the action rarely matches the ambition.

  • We say the humanities are the operating system of an AI world, yet students and parents don’t believe it. They’re voting with their feet toward STEM, business, and narrowly tailored majors they believe will lead to a job.
  • Meanwhile, colleges are quietly eliminating the very humanities degrees the panelists were championing, employers are cutting the entry rungs off the career ladder for new graduates, and as Podium Education co-founder Christopher Parrish reminded us yesterday, there’s a yawning gap between demand for experience and the internships that actually exist.


AI Music Generators: Teaching With These Catchy AI Tools — from techlearning.com by Erik Ofgang
AI music generators are getting better and better, and there are more applications in the classroom as a result.

Are All AI Music Generators More Or Less The Same?
No. After experimenting with a few free ones, I found a wide range of quality with the same prompts.

Gemini is the only one I’d currently recommend. It’s user-friendly but limited and only creates 30-second clips. Other music generators could potentially outperform Gemini with prompt adjustments. The ones I tried did better with the instrumentals but struggled more with the lyrics, and that kind of defeated the purpose of the tool for me.


ChatDOC: Teaching With The AI Summarizing Tool — from techlearning.com by Erik Ofgang
ChatDOC lets users turn any PDF into an AI chatbot that can summarize the text, answer questions, and generate quizzes.

What Is ChatDOC?
ChatDOC is an AI tool designed to help users interact with PDFs of various types, be it research papers, short stories, or chapters from larger works. Users upload a PDF and then have the opportunity to “chat” with that document, that is, converse with a chatbot that bases its answers on the uploaded text.

ChatDOC can perform tasks such as providing a short summary, searching for specific terms, explaining the overall theme of a work of literature, or unpacking the science in a research paper.

Other similar tools are out there, but ChatDOC is definitely one of the better PDF readers I’ve used. Its free version is quick and easy to use, and it delivers on its promise of providing an AI that can discuss a given document with users and even quiz them on it.


From AI access to workforce readiness — from chieflearningofficer.com by Johnny Hamilton, Amy Stratbucker, & Brad Bigelow
Is your workforce using the right tool with an outdated mindset and playbook? Why old playbooks fall short — and what learning leaders must do next.

The leadership opportunity
Organizations do not need to predict every future AI capability. They need systems that allow people to explore with curiosity, practice safely, reflect deeply and adapt continuously — starting with what they already have and extending as capabilities evolve.

For CLOs, this is a moment to lead from the center of change — designing workforce readiness that keeps pace with accelerating technology while making work more rewarding for employees and more valuable for the organization. That is how AI moves from the promise of transformation to demonstrated readiness and, ultimately, from promise to performance.


Addendums on 3/19/26:
How to Build Practice-Based Learning Activities with AI — from drphilippahardman.substack.com by Dr Philippa Hardman
Four evidence-based methods for designing, building & deploying active learning activities with your favourite LLM

Most L&D teams are using AI to make content faster. The real opportunity is using it as a practice engine.

The Synthesia 2026 AI in L&D Report found that the fastest-growing areas of planned AI adoption aren’t in content creation — they’re in assessments and simulations (36%), adaptive pathways (33%), and AI tutors (29%). In other words: L&D teams are starting to realise that the most powerful use of AI isn’t producing learning materials. It’s creating environments where learners actually practise.

And you can build these right now — no dev team, no custom platform, no code. Each method below includes a prompt you can paste into your preferred AI tool to generate a working interactive prototype: a self-contained practice activity with a briefing screen, a live AI interaction, and a debrief — all running in the browser, ready to share with stakeholders or deploy to learners.

OpenAI Adds Interactive Math and Science Learning Tools to ChatGPT — from campustechnology.com by Rhea Kelly

Key Takeaways

  • ChatGPT adds interactive learning tools: OpenAI introduced interactive math and science visualizations that allow users to explore formulas, variables, and relationships in real time.
  • The tool currently covers over 70 core math and science topics and is aimed initially at high school and college-level learners.
  • Users can adjust variables, manipulate formulas, and immediately see how changes affect graphs and outcomes.
 

How to Get Consistent, On-Brand Course Images from Any AI Image Tool — from drphilippahardman.substack.com by Dr. Philippa Hardman
A 3-step workflow that works every time — whatever AI tool you’re using

Most designers try to describe their way to an image. That’s the wrong approach. The goal is to show the tool the world it should be working in, then give it the minimum it needs to place your subject inside that world.

Every long, over-specified prompt is a sign that your visual inputs aren’t doing enough work.

The fix is a 3-step process that gives you superpowers in AI image generation…


How AI Could Transform, or Replace, the LMS — from futureupodcast.com by Jeff Selingo, Michael Horn, and Matthew Pittinsky

Tuesday, March 10, 2026 – For 30 years now, colleges have relied on the Learning Management System, or LMS, as a key portal for professors and students to teach and learn. It’s a tool that has helped colleges adapt to online learning and bring digital tools to classroom teaching. But generative AI seems poised to disrupt the LMS. And it’s unclear whether the LMS will evolve—or be replaced altogether. For this episode, Jeff and Michael talk with a pioneer of the technology, Matthew Pittinsky, about the lessons of past moments of tech disruption like the smartphone and cloud computing and about what could be different this time. This episode is made with support from Ascendium Education Group.


Gemini, Explained — from wondertools.substack.com by Jeremy Caplan
5 features worth your time — tested and compared

Google’s AI, Gemini, has quickly become one of the AI tools I rely on most. It builds dashboards and creates remarkable infographics. It spins out comprehensive research reports in minutes that would once have taken days to assemble.

It’s improving every month. On March 13, Google announced Ask Maps, so you can query Gemini about things like “Which nearby tennis courts are open with lights so I can play tonight?” On March 10, Gemini added new integrations to build, summarize, and analyze your Google Docs, Sheets, and Slides.

In today’s post below: catch up on the Gemini features worth your time, candid comparisons with other AI tools, and answers to the questions I hear most.


How we’re reimagining Maps with Gemini — from blog.google
Ask Maps answers your real-world questions with a conversation, and Immersive Navigation makes your route more intuitive.

Today, Google Maps is fundamentally changing what a map can do. By bringing together the world’s freshest map with our most capable Gemini models, we’re transforming exploration into a simple conversation and making driving more intuitive than ever with our biggest navigation upgrade in over a decade.

Ask anything about any place
We’re introducing Ask Maps, a new conversational experience that answers complex, real-world questions a map could never answer before. Now you can ask for things like, “My phone is dying — where can I charge it without having to wait in a long line for coffee?” or “Is there a public tennis court with lights on that I can play at tonight?” Previously, finding this information meant lots of research and sifting through reviews. But now, you can just tap the “Ask Maps” button and get your questions answered conversationally, with a customized map to help you visualize your options.

 

The Rungs of the Career Ladder We Removed — by Dr. Michelle Weise
On the slow, quiet disappearance of learning HOW to work

There used to be a time when starting a job meant being a little lost. You sat in on meetings you didn’t run. You watched someone else handle the difficult client, draft the tricky email, navigate the room when the room shifted. You made your first draft of something, and someone returned it bleeding red ink. And somehow — through the mess and the margin notes — you learned.

That time is vanishing.

In just the first seven months of 2025, generative AI adoption was linked to thousands of job cuts. But the headline number misses the quieter, more consequential story: it’s not just fewer jobs. It’s the disappearance of the work that teaches you how to work.

So here’s the uncomfortable question: if genAI is absorbing the entry-level doing, where does that formation happen now?

We have to answer that. Not theoretically. Practically. Because the ladder hasn’t disappeared — but we’ve removed the bottom rungs. And no employer is going to drop a newly minted graduate into a mid-career role and hope they figure it out.

 
© 2025 | Daniel Christian