Let AI Interview You — from wondertools.substack.com by Jeremy Caplan & Jay Dixit
A smarter way to get past the blank page

There’s nothing wrong with using AI to get answers to your questions. But there’s another mode of interacting with AI that many people never consider — one I find much more useful for my creative process.

Here’s what I do instead: I flip the script and let the AI ask the questions. Instead of prompting AI, I get the AI to prompt me.

 

6 Reasons Universities Are Building Media Labs Now — from edtechmagazine.com by Brad Grimes
Digital production centers help institutions close the gap between academic training and professional practice.

Higher education is undergoing a significant transformation in how it prepares the next generation of media professionals. Across the country, universities are investing in state-of-the-art media labs — facilities built not around traditional classroom instruction, but around the tools, workflows and collaborative environments that define today’s professional production landscape. These spaces represent a fundamental rethinking of what it means to train students for careers in film, animation, gaming and digital storytelling.

 

Nvidia just invested in the AI legal startup that’s splashing Jude Law ads everywhere — from cnbc.com by Kai Nicol-Schwarz

Key Points

  • Nvidia has backed Swedish AI legal tech Legora in a $50 million Series D extension, CNBC can reveal.
  • The chip giant has been ramping up startup investments in recent years.
  • Investors have been piling into promising young AI companies as they bet big on the commercial potential of the tech to reshape entire industries and deliver big efficiency gains.

Legora is its first bet in the legal tech sector, according to Dealroom data.

The AI startup is building AI agents and tools to help lawyers automate and streamline workflows. 

 
 

The TalentLMS 2026 Annual L&D Benchmark Report — from talentlms.com
From year-over-year training benchmarks to learner–leader gaps, see the data that defines the new era of learning. To turn insight into action, the report lays out 10 evidence-backed interventions to hardwire development. Plus, lift the lid on Learning Debt: What it is and how to spot it.

Executive summary
The skills economy is being rewritten in real time. AI is reshaping what people need to know, do, and deliver, faster than organizational structures can adapt. The result is a workplace caught between acceleration and inertia. Companies are racing to reskill for an AI-driven future while relying on structures built for yesterday’s world.

This TalentLMS 2026 L&D Benchmark Report captures that inflection point. Based on data collected through 2025, and compared with earlier findings from 2022 to 2024, it explores how learning is evolving and what’s holding it back.

Our research integrates two vantage points: HR leaders overseeing learning initiatives and employees receiving formal training. Together, they offer a dual perspective on how learning is managed and how it’s experienced.

The analysis also draws on insights from external research and leading L&D practitioners, anchoring the report in both evidence and practice.

Combined, the findings point to a structural fault line: Learning is expanding in scope but contracting in space. Organizations are multiplying programs, tools, and ambitions, yet the conditions for learning — time, focus, and cognitive bandwidth — keep shrinking.

The data from this report underscores this critical conflict: According to half of the surveyed employees and learning leaders, high workloads leave little room for training, even when it’s needed.

Employees work inside a permanent sprint, where attention is fragmented and reflection is sidelined. The space for learning is collapsing under the weight of doing. Sixty-five percent of employees say performance expectations have risen this year, yet lack of time remains the biggest barrier to learning.

The numbers confirm what employees and learning leaders both feel: Technology can advance overnight. But people and cultures can’t.

 

FutureFit AI Announces Strategic Investment to Help Governments and Industries Navigate AI’s Impact on People & Jobs — from prnewswire.com; via Ryan Craig

NEW YORK, April 13, 2026 /PRNewswire/ — FutureFit AI, a global leader in AI-powered workforce development technology, today announced an investment from Achieve Partners, led by investor and author Ryan Craig, to accelerate its mission of helping more people navigate to better jobs faster and cheaper at scale.

“For too long, the U.S. workforce system has relied on disparate and disconnected systems to try to bridge the gap between the skills workers bring to the table, and the jobs available in a fast-changing labor market. In the age of AI, the need for a better approach has only become more urgent,” said Ryan Craig, co-founder and managing director of Achieve and author of Apprentice Nation, A New U, and College Disrupted. “FutureFit AI is solving that problem by helping workforce organizations create clearer paths to career opportunity for workers and solve pressing talent gaps that hinder economic growth. Their work around the country has already demonstrated the ability to help more people get good jobs faster.”

“A mission that began with a simple question of ‘What if everyone had a GPS for their career?’ has turned into years of working closely with government and industry leaders to respond to – and solve for – the impacts of digital transformation and AI on jobs and people,” added Ekhtiari. “Our partnership with Achieve will accelerate our work to build and scale the missing workforce transition infrastructure that our country and the world so badly need at this moment.”

 

Recording at Legalweek in New York, Zach sits down with Shlomo Klapper (founder of Learned Hand) and Bridget McCormack, former Chief Justice of the Michigan Supreme Court and now CEO of the American Arbitration Association, to challenge one of the biggest double standards in legal AI: “AI for me, but not for thee.” Lawyers are now widely using AI like #Harvey and #Legora — and now more than ever #claude — but the moment it touches judges or arbitrators, support drops off.

That hesitation comes as courts are under real strain, with judges handling thousands of cases a year and only minutes to decide each one, and no realistic way to keep up. Shlomo describes Learned Hand’s “AI law clerk,” built to support judicial research, analysis, and drafting, while Bridget brings the perspective of someone who has both made decisions on the bench and has pioneered the American Arbitration Association’s AI Arbitrator, a first of its kind. The conversation moves beyond AI as an assistant and into a harder shift: AI as part of decision-making itself, and whether the system can continue to function without it.


Also see:

Are Judges the Next To Adopt AI? Is That a Good Thing? — from legallydisrupted.com by Zach Abramowitz
Episode 46 of Legally Disrupted Has the Two Best Experts on the Topic

This brings us to an admitted, glaring double standard between lawyers and judges. Lawyers are totally fine with lawyers using AI, but those same lawyers become apoplectic at the thought of judges or arbitrators using AI. It is very much “AI for me, but not for thee.” A survey last year from White & Case and Queen Mary University of London School of Law showed that nearly 90% of lawyers were deeply supportive of AI for their own research and analytics, but that support drops to just 23% when it comes to a judge or arbitrator using it to make a decision.

Yet, despite that hullabaloo, there is a massive need for alternative forms of intelligence in our courts. Right now, the system is drowning. We have state court trial judges disposing of 2,500 cases a year, meaning they have barely half an hour to spend on a single case. We are simply not going to lawyer our way out of this 50-year backlog. If we just use humans, we have a massive demand for intelligence but a severely limited supply. AI could step in to give these judges the capacity they desperately need for the courts to actually function.

 

An Attack on Sam Altman Sends a Terrifying Message — from the nytimes.com; this is a gifted opinion article by Aaron Zamost

Lawless political violence landed on Silicon Valley’s doorstep this month when an attacker hurled a Molotov cocktail at the San Francisco compound of Sam Altman, OpenAI’s chief executive. The incident was a disturbing sign that simmering public anger about A.I. is spilling out of polling data and social media posts and into the real world.

The attack shook many tech employees, who in quiet conversations about safety wondered whether this was a watershed moment for the industry. I believe it should be — the whole thing is disturbing and jarring, but I’m hopeful it will change how some tech leaders deal with the societal consequences of their success.

If these companies sold food, cars, medicine or any other consumer goods, their products would almost certainly be recalled while federal regulators investigated the allegations.

You would think an industry creating this kind of outrage would reflect or recalibrate. Business experts teach us that companies facing customer backlash should acknowledge the failure, change their approach and earn back public trust. But the titans of tech no longer seem interested in convincing the public.

The foundation of Silicon Valley’s appeal has always been the implicit promise that great technology serves you, and that the people behind it understand your problems and want to solve them. That promise is starting to feel broken. Fixing it requires something much of Silicon Valley has forgotten how to do: listen and learn.

A Molotov cocktail is the absolute wrong way to send a message to tech. Its leaders need to hear it anyway.

 

Google expands Search Live globally with voice and camera AI — from digitaltrends.com by Varun Mirchandani
The feature is now available in 200+ countries with multilingual support

Think of it as Google Search… but you talk to it. Search Live lets users ask questions using voice or even their phone’s camera, both on Android and iOS, via the Google App, and get spoken responses along with relevant web links.

This is a pretty big shift. Google isn’t just improving search; it’s slowly replacing the whole “type and scroll” experience. With Search Live, users can talk, ask follow-ups, and interact naturally, making it feel more like a conversation than a query. It’s basically ChatGPT-style interaction, but baked right into Google Search.


 

Meta, YouTube found negligent in landmark social media addiction trial — by Ian Duncan
A Los Angeles jury awarded $3 million in compensation to a young woman who alleged she had become addicted to the platforms as a child.

A Los Angeles jury found social media giant Meta and video platform YouTube negligent in a landmark trial, awarding $3 million in compensation to a young woman who alleged she had become addicted to the companies’ platforms as a child.

The verdict came at the end of a month-long trial that featured testimony by Facebook founder Mark Zuckerberg and a day after a jury in New Mexico ordered Meta to pay $375 million in penalties for endangering children. The twin verdicts are signs that legal protections which for decades made tech companies seem almost impervious are beginning to crack, as lawyers accuse the platforms of putting addictive or otherwise harmful features into their platforms.

With the armor of Silicon Valley companies fractured, they will now have to size up their appetite for future courtroom battles. There are thousands more lawsuits waiting to be heard, with young internet users, parents, school districts and state attorneys general all seeking to hold the industry accountable.

 

 

Legal AI Access at 83%, But Trust Issues Remain — from artificiallawyer.com

A new survey of over 200 in-house and law firm leaders provides solid evidence that while AI tools are now ‘standard’ across the sector, trust in AI outputs fundamentally drives usage, along with ROI – and vice versa.

The data, from ALSP Factor, shows that 83% had ‘broad AI access’, up from 61% in 2025. That is in itself a very positive development, telling us that legal AI is becoming ubiquitous for commercial lawyers, with around 54% using such tools ‘often’.

 
 

From DSC:
Postings/articles like the one below make me ask: Are we not shooting ourselves in the foot with AI and recent college graduates? If the bottom rungs continue to disappear, internships and apprenticeships can only go so far. There aren’t enough of them — especially valuable ones. So as this article points out, there will be threats to the long-term health of our talent pipelines unless we take steps to thwart those impacts — and to do so fairly soon.

To me…vocational training and jobs are looking better all the time — i.e., plumbers, carpenters, electricians, mechanics, and more.


Can New Graduates Compete With AI? — from builtin.com by Richard Johnson
The increasing adoption of AI automation is compressing early-career jobs. How should new graduates get a foothold in the economy now?

Summary: AI is hollowing out entry-level roles by automating routine tasks, eliminating a rung on the career ladder. New graduates face intense competition and a rising skill floor. While firms gain short-term productivity, they risk a long-term talent shortage by eliminating junior training grounds.

Conversations about AI have covered all ground: hype, fear and slop. But while some roll their eyes at yet another automation headline, soon-to-be graduates are watching the labor market with a very different level of urgency. They’re entering a world where the old paradox of needing experience to get experience is colliding with a new reality: AI is absorbing the standardized, routine tasks that once defined entry-level work. The result isn’t just a shift in job descriptions or skill requirements, but rather a structural reshaping of the career pipeline.

Entry-level workers face an outsized disruption to their long-term career trajectories. They have the least buffer to adapt given their lack of relevant job market experience and heightened financial pressure to secure a job quickly with the student-debt repayment periods for recent graduates looming.

Momentum early in one’s career matters, and the first job on a resume shapes future compensation bands and opportunities. It also serves as a signal of perceived specialization or, at minimum, interest. Losing that foothold has compounding effects on one’s career trajectory.


Also relevant/see:

New Anthropic Institute to Study Risks and Economic Effects of Advanced AI — from campustechnology.com by John K. Waters

Key Takeaways

  • Anthropic has launched the Anthropic Institute, a new research effort focused on the biggest societal challenges posed by more powerful AI systems.
  • The institute will study how advanced AI could affect the economy, the legal system, public safety, and broader social outcomes.
  • Anthropic co-founder Jack Clark will lead the institute in a new role as the company’s head of public benefit.
  • The new unit brings together Anthropic’s existing red-teaming, societal impacts, and economic research work, while adding new hires and new research areas.
 


The AI ‘hivemind’: Why so many student essays sound alike — from hechingerreport.org by Jill Barshay
A study of more than 70 large language models found similar answers to brainstorming and creative writing prompts

The answers were frequently indistinguishable across different models by different companies that have different architectures and use different training data. The metaphors, imagery, word choices, sentence structures — even punctuation — often converged. Jiang’s team called this phenomenon “inter-model homogeneity” and quantified the overlaps and similarities. To drive the point home, Jiang titled her paper “The Artificial Hivemind.” The study won a best paper award at the annual conference on Neural Information Processing Systems in December 2025, one of the premier gatherings for AI research.
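The paper’s actual measurement pipeline isn’t reproduced here, but as a rough illustration of what “quantifying the overlaps” between model answers can look like, here is a minimal sketch using Python’s standard-library string matcher. The model names and answer texts are hypothetical, and `SequenceMatcher` is just one simple similarity measure, not the study’s method:

```python
from difflib import SequenceMatcher
from itertools import combinations

def pairwise_similarity(answers: dict[str, str]) -> list[tuple[str, str, float]]:
    """Return a 0..1 similarity ratio for every pair of model answers."""
    results = []
    for (name_a, text_a), (name_b, text_b) in combinations(answers.items(), 2):
        ratio = SequenceMatcher(None, text_a, text_b).ratio()
        results.append((name_a, name_b, ratio))
    return results

# Hypothetical answers from three models to the same creative-writing prompt:
answers = {
    "model_a": "The city hummed like a tired machine, its lights flickering in the rain.",
    "model_b": "The city hummed like a tired machine, lights flickering in the rain.",
    "model_c": "Quarterly revenue grew by twelve percent across all regions.",
}

for a, b, score in pairwise_similarity(answers):
    print(f"{a} vs {b}: {score:.2f}")
```

In this toy example the two near-identical “creative” answers score far higher than either does against the unrelated text, which is the kind of convergence signal the hivemind finding describes at scale.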


AI Has No Moral Compass. Do You? — from michelleweise.substack.com by Michelle Weise & Dana Walsh
Why the Age of AI Demands We Take Character Formation Seriously

Here’s something to chew on:

Anthropic, the company behind Claude — a chatbot with 30 million monthly users — has exactly one person (whom we know of) working on AI ethics. One. A young Scottish philosopher is doing the vital work of training a large language model to discern right from wrong.

I don’t say this to shame Anthropic. In fact, Anthropic appears to be the only company (that we know of) being explicit about the moral foundations and reasoning of its chatbot. Hundreds of millions of users worldwide are leveraging tools from other LLM providers that do not appear to be cultivating an explicit moral compass from within.

I raise this because this is yet another example of where we are: extraordinary technical power advancing without an equally strong moral infrastructure to support it.

Why do we keep producing people who are skilled but not wise?

 

Law Firm AI Adoption: So Many Choices — from abovethelaw.com by Stephen Embry
Firms need to recognize reality, define what their legal professionals need, and then determine how to adopt and govern the use of AI tools.

It’s tough to be a law firm managing partner in the age of AI. So many choices, so little time. It’s like the proverbial kid in the candy store who has so many choices that they either can’t pick out anything or reach for too much. We see evidence of the first option in 8am’s recent outstanding Legal Industry Report, authored by Niki Black.

8am’s Legal Industry Report
One thing that stood out in the report was the discrepancy between individual legal professionals’ use of AI and what firms are doing when it comes to AI adoption and guidance. Almost 75% of respondents said they were using general-purpose AI tools like ChatGPT and Claude for work purposes. That’s pretty significant.


Legalweek: It’s time to re-engineer how legal work is delivered — from legaltechnology.com by Caroline Hill

AI for good
While much of the focus was on the risks of AI going wrong, it is only fair to mention the conversations I had around using AI for good. Two in particular stand out.

The first is the news from Everlaw that its Everlaw for Good Program has, over the past year, supported more than 675 active cases across 235 organisations, and expanded its support to a growing network of non-profit organisations.

The program extends Everlaw’s technology to organisations working to advance access to justice. In a recent survey by Everlaw, 88% of legal aid professionals said they are optimistic about AI’s potential to help narrow the justice gap.

“Mission-driven organizations are increasingly handling complex investigations and litigation with limited resources,” said Joanne Sprague, head of Everlaw for Good. “Expanding access to powerful, easy-to-use technology helps level the playing field so these teams can uncover critical evidence, take on more complex matters, and yield stronger results for the communities they serve.”


LawNext on Location: Visiting Everlaw’s Headquarters For A Conversation with AJ Shankar, Founder and CEO — from lawnext.com by Bob Ambrogi

The bulk of our conversation focuses on generative AI, and how Everlaw has approached it differently than much of the market. Rather than bolting on a chatbot, AJ says, Everlaw embedded AI deliberately throughout the platform — document summarization, coding suggestions, deposition analysis, fact extraction — always grounding responses in the actual documents at hand and citing sources so users can verify the work. The December launch of Deep Dive, which lets litigators pose a question and get a synthesized, cited answer drawn from an entire document corpus in about a minute, is the feature AJ calls a “new era” for discovery — one he genuinely believes represents a categorical shift.

 
© 2025 | Daniel Christian