This $10K AI School Promises to Future-Proof Your Career — from builtin.com by Matthew Urwin
Khan Academy, TED and ETS are starting a new program to equip students and professionals with the skills to thrive in an increasingly AI-driven economy. Here’s what you need to know.

Summary: The Khan TED Institute is a higher-education program that will teach students and workers how to use AI through interactive learning. The program’s AI-centric curriculum is an unproven approach, though, casting doubt on whether it will actually improve learning outcomes and career prospects.

Higher education might be on the verge of a radical overhaul to bring it up to speed in the age of artificial intelligence. At the TED2026 conference, Khan Academy, TED and ETS announced that they’re partnering to establish the Khan TED Institute — a new program that reorients the college curriculum around AI. By joining forces, the education technology trio aims to develop an alternative to traditional universities that better tracks student progress, teaches more relevant skills and provides a more personalized learning experience.

Accessibility is another major tenet of the Khan TED Institute. Its virtual nature allows anyone with an internet connection to participate in the program and makes it easier for students to move at their preferred pace. And because its curriculum prioritizes competency over course credits, advanced learners can complete the program in a shorter period. Time isn’t the only thing students can save on, either: The Institute promises a bachelor’s degree for less than $10,000, offering a much more affordable alternative to the typical four-year degree. 


 

From DSC:
Faculty senates don’t handle this pace of change well. But to be fair, few organizations can keep up with it either.

 

When anyone can build a course, the real job is deciding which ones shouldn’t exist — from drphilippahardman.substack.com by Dr. Philippa Hardman
Why deciding is the only L&D skill AI can’t replace.

The biggest AI risk that L&D faces isn’t that it gets left behind: it’s that we build more — and flood the organisation with meh-quality content nobody needed in the first place.

In this post, I’ll make the case that:

  • The L&D job has just split in two — and most of us are still working on the wrong half.
  • There’s a new operating model coming for the role, and it’s already running inside a lot of the companies you’ve heard of.
  • The smartest critique of everything I’m about to argue comes from Ethan Mollick — and I think he’s half right.

The question we’ve been asking for the last two years — “how do I get faster at building?” — was the wrong one.

The real question is: can I look at fifteen AI-generated learning assets and decide which three are worth scaling — and put my name to that decision?

 
 

Let AI Interview You — from wondertools.substack.com by Jeremy Caplan & Jay Dixit
A smarter way to get past the blank page

There’s nothing wrong with using AI to get answers to your questions. But there’s another mode of interacting with AI that many people never consider — one I find much more useful for my creative process.

Here’s what I do instead: I flip the script and let the AI ask the questions. Instead of prompting AI, I get the AI to prompt me.

 

Nvidia just invested in the AI legal startup that’s splashing Jude Law ads everywhere — from cnbc.com by Kai Nicol-Schwarz

Key Points

  • Nvidia has backed Swedish AI legal tech firm Legora in a $50 million Series D extension, CNBC can reveal.
  • The chip giant has been ramping up startup investments in recent years.
  • Investors have been piling into promising young AI companies as they bet big on the commercial potential of the tech to reshape entire industries and deliver big efficiency gains.

Legora is its first bet in the legal tech sector, according to Dealroom data.

The AI startup is building AI agents and tools to help lawyers automate and streamline workflows. 

 
 

The TalentLMS 2026 Annual L&D Benchmark Report — from talentlms.com
From year-over-year training benchmarks to learner–leader gaps, see the data that defines the new era of learning. To turn insight into action, the report lays out 10 evidence-backed interventions to hardwire development. Plus, lift the lid on Learning Debt: What it is and how to spot it.

Executive summary
The skills economy is being rewritten in real time. AI is reshaping what people need to know, do, and deliver, faster than organizational structures can adapt. The result is a workplace caught between acceleration and inertia. Companies are racing to reskill for an AI-driven future while relying on structures built for yesterday’s world.

This TalentLMS 2026 L&D Benchmark Report captures that inflection point. Based on data collected through 2025, and compared with earlier findings from 2022 to 2024, it explores how learning is evolving and what’s holding it back.

Our research integrates two vantage points: HR leaders overseeing learning initiatives and employees receiving formal training. Together, they offer a dual perspective on how learning is managed and how it’s experienced.

The analysis also draws on insights from external research and leading L&D practitioners, anchoring the report in both evidence and practice.

Combined, the findings point to a structural fault line: Learning is expanding in scope but contracting in space. Organizations are multiplying programs, tools, and ambitions, yet the conditions for learning — time, focus, and cognitive bandwidth — keep shrinking.

The data from this report underscores this critical conflict: According to half of the surveyed employees and learning leaders, high workloads leave little room for training, even when it’s needed.

Employees work inside a permanent sprint, where attention is fragmented and reflection is sidelined. The space for learning is collapsing under the weight of doing. Sixty-five percent of employees say performance expectations have risen this year, yet lack of time remains the biggest barrier to learning.

The numbers confirm what employees and learning leaders both feel: Technology can advance overnight. But people and cultures can’t.

 

FutureFit AI Announces Strategic Investment to Help Governments and Industries Navigate AI’s Impact on People & Jobs — from prnewswire.com; via Ryan Craig

NEW YORK, April 13, 2026 /PRNewswire/ — FutureFit AI, a global leader in AI-powered workforce development technology, today announced an investment from Achieve Partners, led by investor and author Ryan Craig, to accelerate its mission of helping more people navigate to better jobs faster and cheaper at scale.

“For too long, the U.S. workforce system has relied on disparate and disconnected systems to try to bridge the gap between the skills workers bring to the table, and the jobs available in a fast-changing labor market. In the age of AI, the need for a better approach has only become more urgent,” said Ryan Craig, co-founder and managing director of Achieve and author of Apprentice Nation, A New U, and College Disrupted. “FutureFit AI is solving that problem by helping workforce organizations create clearer paths to career opportunity for workers and solve pressing talent gaps that hinder economic growth. Their work around the country has already demonstrated the ability to help more people get good jobs faster.”

“A mission that began with a simple question of ‘What if everyone had a GPS for their career’ has turned into years of working closely with government and industry leaders to respond to – and solve for – the impacts of digital transformation and AI on jobs and people,” added Ekhtiari. “Our partnership with Achieve will accelerate our work to build and scale the missing workforce transition infrastructure that our country and the world so badly need at this moment.”

 

Recording at LegalWeek in New York, Zach sits down with Shlomo Klapper (founder of Learned Hand) and Bridget McCormack, former Chief Justice of the Michigan Supreme Court and now CEO of the American Arbitration Association, to challenge one of the biggest double standards in legal AI: “AI for me, but not for thee.” Lawyers are now widely using AI like #Harvey and #Legora — and now more than ever #claude — but the moment it touches judges or arbitrators, support drops off.

That hesitation comes as courts are under real strain, with judges handling thousands of cases a year and only minutes to decide each one, and no realistic way to keep up. Shlomo describes Learned Hand’s “AI law clerk,” built to support judicial research, analysis, and drafting, while Bridget brings the perspective of someone who has both made decisions on the bench and has pioneered the American Arbitration Association’s AI Arbitrator, a first of its kind. The conversation moves beyond AI as an assistant and into a harder shift: AI as part of decision-making itself, and whether the system can continue to function without it.


Also see:

Are Judges the Next To Adopt AI? Is That a Good Thing? — from legallydisrupted.com by Zach Abramowitz
Episode 46 of Legally Disrupted Has the Two Best Experts on the Topic

This brings us to an admitted, glaring double standard between lawyers and judges. Lawyers are totally fine with lawyers using AI, but those same lawyers become apoplectic at the thought of judges or arbitrators using AI. It is very much “AI for me, but not for thee.” A survey last year from White & Case and Queen Mary University of London School of Law showed that nearly 90% of lawyers were deeply supportive of AI for their own research and analytics, but that support drops to just 23% when it comes to a judge or arbitrator using it to make a decision.

Yet, despite that hullabaloo, there is a massive need for alternative forms of intelligence in our courts. Right now, the system is drowning. We have state court trial judges disposing of 2,500 cases a year, meaning they have barely half an hour to spend on a single case. We are simply not going to lawyer our way out of this 50-year backlog. If we just use humans, we have a massive demand for intelligence but a severely limited supply. AI could step in to give these judges the capacity they desperately need for the courts to actually function.

 

An Attack on Sam Altman Sends a Terrifying Message — from the nytimes.com; this is a gifted opinion article by Aaron Zamost

Lawless political violence landed on Silicon Valley’s doorstep this month when an attacker hurled a Molotov cocktail at the San Francisco compound of Sam Altman, OpenAI’s chief executive. The incident was a disturbing sign that simmering public anger about A.I. is spilling out of polling data and social media posts and into the real world.

The attack shook many tech employees, who in quiet conversations about safety wondered whether this was a watershed moment for the industry. I believe it should be — the whole thing is disturbing and jarring, but I’m hopeful it will change how some tech leaders deal with the societal consequences of their success.

If these companies sold food, cars, medicine or any other consumer goods, their products would almost certainly be recalled while federal regulators investigated the allegations.

You would think an industry creating this kind of outrage would reflect or recalibrate. Business experts teach us that companies facing customer backlash should acknowledge the failure, change their approach and earn back public trust. But the titans of tech no longer seem interested in convincing the public.

The foundation of Silicon Valley’s appeal has always been the implicit promise that great technology serves you, and that the people behind it understand your problems and want to solve them. That promise is starting to feel broken. Fixing it requires something much of Silicon Valley has forgotten how to do: listen and learn.

A Molotov cocktail is the absolute wrong way to send a message to tech. Its leaders need to hear it anyway.

 
 

The Role of Faculty in the University of the Future — from er.educause.edu by Tanya Gamby, David Kil, Rachel Koblic, Paul LeBlanc, Mihnea Moldoveanu, and George Siemens
In the age of AI, the true future of higher education lies not in replacing faculty but in freeing them to do what only humans can—build meaningful relationships, cultivate wisdom, and guide students through the ethical and intellectual challenges machines cannot navigate.

Today, the work of knowledge transfer is often done better, faster, with more precision, and more patiently by AI. These systems can provide nonjudgmental, individualized learning opportunities twenty-four hours a day, seven days a week. Think of AI as a “genius teaching assistant” who assumes much of the work of basic knowledge transfer, unlocking learning when students get stuck and providing real-time assessment. Such a genius TA would offer faculty dashboards that update student progress, flag those who are struggling, and recommend targeted interventions. These tasks free faculty to focus on building genuine relationships with students, using the classroom to foster human skills, and curating community. This may be the great gift of AI to education. But it requires a profound reimagining of faculty roles—perhaps the single biggest hurdle to reimagining higher education, and equally its greatest opportunity.

A concerned faculty member might hear all this and conclude they are becoming obsolete. The opposite is true. The evolution of faculty roles demands more—not less—of what makes a great teacher.

This means intervening in high-impact moments when the genius TA has not unlocked learning; curating class time to lift students from knowing material to applying it in contexts that require critical thinking, judgment, and discernment; and cultivating the human skills that will be most prized in the age of AI: effective communication, constructive dialogue, empathy, creativity, and professional disposition. Most importantly, it means building genuine relationships with students—that make them feel like they matter—the kind that fuels transformation.


From DSC:
A quick comment on one of the sentences in the article, which asserts:

Centers for teaching and learning, which have long supported faculty development at many institutions, will be among the busiest places on campus in the years ahead.

I would change the words will be to should be:

Centers for teaching and learning, which have long supported faculty development at many institutions, should be among the busiest places on campus in the years ahead.

For that statement to be true, centers for teaching and learning need to be well-versed in the relevant tools and pedagogies, as well as in learning science. They also need credibility for faculty members to value their services. And that’s just it, isn’t it? Faculty members need to see those centers as having expertise that they themselves lack and need assistance with. Otherwise, if such centers are viewed as superfluous, nothing much will change.

Also, my experience has been that if those centers for teaching and learning sit within an IT group/department, they should be moved to the academic side of the house instead. Many faculty members don’t value input from IT staff enough to change how they teach, no matter how qualified those staff members are. They view them as “IT” only.


You might also be interested in the other articles in that series.


 

AI for Your Next Career Move — from wondertools.substack.com by Jeremy Caplan
Free tools to explore, research, and interview better

AI tools can serve as patient assistants when you’re looking for a job. Use them to organize your search. Or to challenge your assumptions about potential jobs. They can also help you present your strengths more persuasively. When you’re changing fields, or trying to move up, AI can help you stand out.

1. Visualize Your Career Options
Try: Google’s Career Dreamer

What it is: A free tool for exploring jobs adjacent to yours. See a map of professional fields related to your interests.

How to use it: Start by typing in a current or previous role. Or name a job that interests you. Use up to five words. You can also name a specific organization or industry, if you have one in mind.

Career Dreamer asks what work activities interest you, then maps related career paths. Pick one at a time to explore.

You can then browse actual job openings. Refine the search based on location, company size, or other factors you care about.

 

The “Cognitive Offloading” Paradox — from drphilippahardman.substack.com by Dr. Philippa Hardman
New research shows that offloading learning tasks to AI can improve – rather than erode – human thinking and learning

The Rise of the “Offloading Paradox”
In March 2026, the International Journal of Educational Technology in Higher Education published a study that went beyond the question “does offloading hurt?” and asked a harder one: when students form genuine partnerships with AI — treating it as an intellectual collaborator rather than a passive tool — what actually happens to the way they think and learn? Specifically, do two cognitive responses — critical evaluation of AI outputs (what the researchers call cognitive vigilance) and strategic delegation to AI (cognitive offloading) — compete with each other, or can they coexist?

Based on previous research, the study’s authors, Wang and Zhang, hypothesised that cognitive offloading would hurt transformative learning. They expected the familiar story: delegation reduces cognitive struggle, struggle is where learning happens, therefore delegation undermines learning.

The study — 912 students across China, Europe, and the United States, using a three-wave time-lagged survey design that measured partnership orientation first, cognitive strategies two weeks later, and learning outcomes two weeks after that — found something more interesting than a simple reversal.

 

Which Jobs Are Most at Risk From AI? New Anthropic Data Offers Clues. — from builtin.com by Matthew Urwin
Anthropic set out in its latest study to predict how artificial intelligence could impact the labor market. Instead, its findings raise more questions than answers for tech workers as the U.S. government refuses to regulate the AI industry.

Summary:
In its latest labor market study, Anthropic found that artificial intelligence poses the greatest threat to software jobs, women and younger professionals. As the Trump administration takes a hands-off approach to AI, tech workers may be left to grapple with these findings on their own.


Matthew links to:

Labor market impacts of AI: A new measure and early evidence — from anthropic.com

Key findings

  • We introduce a new measure of AI displacement risk, observed exposure, that combines theoretical LLM capability and real-world usage data, weighting automated (rather than augmentative) and work-related uses more heavily
  • AI is far from reaching its theoretical capability: actual coverage remains a fraction of what’s feasible
  • Occupations with higher observed exposure are projected by the BLS to grow less through 2034
  • Workers in the most exposed professions are more likely to be older, female, more educated, and higher-paid
  • We find no systematic increase in unemployment for highly exposed workers since late 2022, though we find suggestive evidence that hiring of younger workers has slowed in exposed occupations

 
© 2025 | Daniel Christian