Let AI Interview You — from wondertools.substack.com by Jeremy Caplan & Jay Dixit
A smarter way to get past the blank page

There’s nothing wrong with using AI to get answers to your questions. But there’s another mode of interacting with AI that many people never consider — one I find much more useful for my creative process.

Here’s what I do instead: I flip the script and let the AI ask the questions. Instead of prompting AI, I get the AI to prompt me.

 

6 Reasons Universities Are Building Media Labs Now — from edtechmagazine.com by Brad Grimes
Digital production centers help institutions close the gap between academic training and professional practice.

Higher education is undergoing a significant transformation in how it prepares the next generation of media professionals. Across the country, universities are investing in state-of-the-art media labs — facilities built not around traditional classroom instruction, but around the tools, workflows and collaborative environments that define today’s professional production landscape. These spaces represent a fundamental rethinking of what it means to train students for careers in film, animation, gaming and digital storytelling.

 

Nvidia just invested in the AI legal startup that’s splashing Jude Law ads everywhere — from cnbc.com by Kai Nicol-Schwarz

Key Points

  • Nvidia has backed Swedish AI legal tech Legora in a $50 million Series D extension, CNBC can reveal.
  • The chip giant has been ramping up startup investments in recent years.
  • Investors have been piling into promising young AI companies as they bet big on the commercial potential of the technology to reshape entire industries and bring big efficiency gains.

Legora is its first bet in the legal tech sector, according to Dealroom data.

The AI startup is building AI agents and tools to help lawyers automate and streamline workflows. 

 
 

Hebrews 7:25

Therefore he is able to save completely those who come to God through him, because he always lives to intercede for them.

Psalm 4:2

How long will you people turn my glory into shame?
How long will you love delusions and seek false gods?

1 Corinthians 10:23-24

23 “I have the right to do anything,” you say—but not everything is beneficial. “I have the right to do anything”—but not everything is constructive. 24 No one should seek their own good, but the good of others.

Job 19:25

I know that my redeemer lives,
and that in the end he will stand on the earth.

 

The TalentLMS 2026 Annual L&D Benchmark Report — from talentlms.com
From year-over-year training benchmarks to learner–leader gaps, see the data that defines the new era of learning. To turn insight into action, the report lays out 10 evidence-backed interventions to hardwire development. Plus, lift the lid on Learning Debt: What it is and how to spot it.

Executive summary
The skills economy is being rewritten in real time. AI is reshaping what people need to know, do, and deliver, faster than organizational structures can adapt. The result is a workplace caught between acceleration and inertia. Companies are racing to reskill for an AI-driven future while relying on structures built for yesterday’s world.

This TalentLMS 2026 L&D Benchmark Report captures that inflection point. Based on data collected through 2025, and compared with earlier findings from 2022 to 2024, it explores how learning is evolving and what’s holding it back.

Our research integrates two vantage points: HR leaders overseeing learning initiatives and employees receiving formal training. Together, they offer a dual perspective on how learning is managed and how it’s experienced.

The analysis also draws on insights from external research and leading L&D practitioners, anchoring the report in both evidence and practice.

Combined, the findings point to a structural fault line: Learning is expanding in scope but contracting in space. Organizations are multiplying programs, tools, and ambitions, yet the conditions for learning — time, focus, and cognitive bandwidth — keep shrinking.

The data from this report underscores this critical conflict: According to half of the surveyed employees and learning leaders, high workloads leave little room for training, even when it’s needed.

Employees work inside a permanent sprint, where attention is fragmented and reflection is sidelined. The space for learning is collapsing under the weight of doing. Sixty-five percent of employees say performance expectations have risen this year, yet lack of time remains the biggest barrier to learning.

The numbers confirm what employees and learning leaders both feel: Technology can advance overnight. But people and cultures can’t.

 

Navid Baraty’s Atmospheric Photos Explore Contrasting Scales of Time — from thisiscolossal.com by Navid Baraty and Kate Mothes

When we consider that enormous metropolises like New York City and Chicago have only come into being within the past few hundred years, it’s impossible not to stand in awe of ancient cultural sites that have existed for millennia or geological features that expose millions—even billions—of years of the planet’s natural history. For Navid Baraty, the contrasts and tensions of contemporary urban life and timeless landscapes merge in otherworldly photographs.

Baraty’s series The Time Between juxtaposes cityscapes with dramatic terrain, from desert dunes to snow-capped mountains. The project revolves around images in which two distinct digital photographs converge in a composite, drawing on the film technique of double exposure and exploring ideas of permanence, presence, and the “space between different scales of time,” the artist says.

 

FutureFit AI Announces Strategic Investment to Help Governments and Industries Navigate AI’s Impact on People & Jobs — from prnewswire.com; via Ryan Craig

NEW YORK, April 13, 2026 /PRNewswire/ — FutureFit AI, a global leader in AI-powered workforce development technology, today announced an investment from Achieve Partners, led by investor and author Ryan Craig, to accelerate its mission of helping more people navigate to better jobs faster and cheaper at scale.

“For too long, the U.S. workforce system has relied on disparate and disconnected systems to try to bridge the gap between the skills workers bring to the table, and the jobs available in a fast-changing labor market. In the age of AI, the need for a better approach has only become more urgent,” said Ryan Craig, co-founder and managing director of Achieve and author of Apprentice Nation, A New U, and College Disrupted. “FutureFit AI is solving that problem by helping workforce organizations create clearer paths to career opportunity for workers and solve pressing talent gaps that hinder economic growth. Their work around the country has already demonstrated the ability to help more people get good jobs faster.”

“A mission that began with a simple question of ‘What if everyone had a GPS for their career’ has turned into years of working closely with government and industry leaders to respond to – and solve for – the impacts of digital transformation and AI on jobs and people,” added Ekhtiari. “Our partnership with Achieve will accelerate our work to build and scale the missing workforce transition infrastructure that our country and the world so badly need at this moment.”

 

Why Sal Khan’s AI revolution hasn’t happened yet, according to Sal Khan — from chalkbeat.org by Matt Barnum

Three years ago, as Khan Academy founder Sal Khan rolled out an AI-powered tutoring chatbot, he predicted a revolution in learning.

So far, the revolution hasn’t happened, he acknowledges.

“For a lot of students, it was a non-event,” Khan told me recently about his eponymous chatbot, Khanmigo. “They just didn’t use it much.”

Khan gives this analogy: Imagine he walked into a class, sat in the back of the room, and waited for students to seek out help. “Some will; most won’t,” he said. That’s been the experience with AI tutoring, he said. It doesn’t necessarily make students motivated to learn or fill in gaps in knowledge needed to ask questions.

“AI is going to help,” said Khan of this reimagined Khan Academy. “But I think our biggest lever is really investing in the human systems.”

 

Recording at LegalWeek in New York, Zach sits down with Shlomo Klapper (founder of Learned Hand) and Bridget McCormack, former Chief Justice of the Michigan Supreme Court and now CEO of the American Arbitration Association, to challenge one of the biggest double standards in legal AI: “AI for me, but not for thee.” Lawyers are now widely using AI like #Harvey and #Legora — and now more than ever #Claude — but the moment it touches judges or arbitrators, support drops off.

That hesitation comes as courts are under real strain, with judges handling thousands of cases a year and only minutes to decide each one, and no realistic way to keep up. Shlomo describes Learned Hand’s “AI law clerk,” built to support judicial research, analysis, and drafting, while Bridget brings the perspective of someone who has both made decisions on the bench and has pioneered the American Arbitration Association’s AI Arbitrator, a first of its kind. The conversation moves beyond AI as an assistant and into a harder shift: AI as part of decision-making itself, and whether the system can continue to function without it.


Also see:

Are Judges the Next To Adopt AI? Is That a Good Thing? — from legallydisrupted.com by Zach Abramowitz
Episode 46 of Legally Disrupted Has the Two Best Experts on the Topic

This brings us to an admitted, glaring double standard between lawyers and judges. Lawyers are totally fine with lawyers using AI, but those same lawyers become apoplectic at the thought of judges or arbitrators using AI. It is very much “AI for me, but not for thee.” A survey last year from White & Case and Queen Mary University of London School of Law showed that nearly 90% of lawyers were deeply supportive of AI for their own research and analytics, but that support drops to just 23% when it comes to a judge or arbitrator using it to make a decision.

Yet, despite that hullabaloo, there is a massive need for alternative forms of intelligence in our courts. Right now, the system is drowning. We have state court trial judges disposing of 2,500 cases a year, meaning they have barely half an hour to spend on a single case. We are simply not going to lawyer our way out of this 50-year backlog. If we just use humans, we have a massive demand for intelligence but a severely limited supply. AI could step in to give these judges the capacity they desperately need for the courts to actually function.

 

An Attack on Sam Altman Sends a Terrifying Message — from nytimes.com; this is a gifted opinion article by Aaron Zamost

Lawless political violence landed on Silicon Valley’s doorstep this month when an attacker hurled a Molotov cocktail at the San Francisco compound of Sam Altman, OpenAI’s chief executive. The incident was a disturbing sign that simmering public anger about A.I. is spilling out of polling data and social media posts and into the real world.

The attack shook many tech employees, who in quiet conversations about safety wondered whether this was a watershed moment for the industry. I believe it should be — the whole thing is disturbing and jarring, but I’m hopeful it will change how some tech leaders deal with the societal consequences of their success.

If these companies sold food, cars, medicine or any other consumer goods, their products would almost certainly be recalled while federal regulators investigated the allegations.

You would think an industry creating this kind of outrage would reflect or recalibrate. Business experts teach us that companies facing customer backlash should acknowledge the failure, change their approach and earn back public trust. But the titans of tech no longer seem interested in convincing the public.

The foundation of Silicon Valley’s appeal has always been the implicit promise that great technology serves you, and that the people behind it understand your problems and want to solve them. That promise is starting to feel broken. Fixing it requires something much of Silicon Valley has forgotten how to do: listen and learn.

A Molotov cocktail is the absolute wrong way to send a message to tech. Its leaders need to hear it anyway.

 
 

The Role of Faculty in the University of the Future — from er.educause.edu by Tanya Gamby, David Kil, Rachel Koblic, Paul LeBlanc, Mihnea Moldoveanu, and George Siemens
In the age of AI, the true future of higher education lies not in replacing faculty but in freeing them to do what only humans can—build meaningful relationships, cultivate wisdom, and guide students through the ethical and intellectual challenges machines cannot navigate.

Today, the work of knowledge transfer is often done better, faster, with more precision, and more patiently by AI. These systems can provide nonjudgmental, individualized learning opportunities twenty-four hours a day, seven days a week. Think of AI as a “genius teaching assistant” who assumes much of the work of basic knowledge transfer, unlocking learning when students get stuck and providing real-time assessment. Such a genius TA would offer faculty dashboards that update student progress, flag those who are struggling, and recommend targeted interventions. These tasks free faculty to focus on building genuine relationships with students, using the classroom to foster human skills, and curating community. This may be the great gift of AI to education. But it requires a profound reimagining of faculty roles—perhaps the single biggest hurdle to reimagining higher education, and equally its greatest opportunity.

A concerned faculty member might hear all this and conclude they are becoming obsolete. The opposite is true. The evolution of faculty roles demands more—not less—of what makes a great teacher.

This means intervening in high-impact moments when the genius TA has not unlocked learning; curating class time to lift students from knowing material to applying it in contexts that require critical thinking, judgment, and discernment; and cultivating the human skills that will be most prized in the age of AI: effective communication, constructive dialogue, empathy, creativity, and professional disposition. Most importantly, it means building genuine relationships with students—that make them feel like they matter—the kind that fuels transformation.


From DSC:
A quick comment on one of the sentences in the article, which asserts:

Centers for teaching and learning, which have long supported faculty development at many institutions, will be among the busiest places on campus in the years ahead.

I would change the words “will be” to “should be”:

Centers for teaching and learning, which have long supported faculty development at many institutions, should be among the busiest places on campus in the years ahead.

For that statement to be true, centers for teaching and learning need to be well-versed in the tools and pedagogies involved, plus in learning science. Those centers need to have credibility for faculty members to value their services. And that’s just it, isn’t it? The faculty members need to see those centers for teaching and learning as having something that they lack…that they need assistance with. Otherwise, if such centers are just viewed as superfluous, nothing much will change.

Also, my experience has been that if those centers for teaching and learning are in an IT group/department, they should be moved to the academic side of the house instead. Many faculty members don’t value people from IT enough to make changes in how they teach — no matter how qualified those people are. They view those people as “IT” only.


You might also be interested in the other articles in that series:


 

Some Scripture for us

Romans 5:10

For if, while we were God’s enemies, we were reconciled to him through the death of his Son, how much more, having been reconciled, shall we be saved through his life!

Romans 14:11

It is written:

“‘As surely as I live,’ says the Lord,
‘every knee will bow before me;
every tongue will acknowledge God.’”

 
© 2025 | Daniel Christian