Let AI Interview You — from wondertools.substack.com by Jeremy Caplan & Jay Dixit
A smarter way to get past the blank page

There’s nothing wrong with using AI to get answers to your questions. But there’s another mode of interacting with AI that many people never consider — one I find much more useful for my creative process.

Here’s what I do instead: I flip the script and let the AI ask the questions. Instead of prompting AI, I get the AI to prompt me.

 

Nvidia just invested in the AI legal startup that’s splashing Jude Law ads everywhere — from cnbc.com by Kai Nicol-Schwarz

Key Points

  • Nvidia has backed Swedish AI legal tech startup Legora in a $50 million Series D extension, CNBC can reveal.
  • The chip giant has been ramping up startup investments in recent years.
  • Investors have been piling into promising young AI companies as they bet big on the commercial potential of the technology to reshape entire industries and deliver big efficiency gains.

Legora is its first bet in the legal tech sector, according to Dealroom data.

The startup is building AI agents and tools to help lawyers automate and streamline workflows.

 
 

Why Sal Khan’s AI revolution hasn’t happened yet, according to Sal Khan — from chalkbeat.org by Matt Barnum

Three years ago, as Khan Academy founder Sal Khan rolled out an AI-powered tutoring chatbot, he predicted a revolution in learning.

So far, the revolution hasn’t happened, he acknowledges.

“For a lot of students, it was a non-event,” Khan told me recently about his eponymous chatbot, Khanmigo. “They just didn’t use it much.”

Khan gives this analogy: Imagine he walked into a class, sat in the back of the room, and waited for students to seek out help. “Some will; most won’t,” he said. That’s been the experience with AI tutoring, he said. It doesn’t necessarily motivate students to learn or fill in the knowledge gaps they need in order to ask questions.

“AI is going to help,” said Khan of this reimagined Khan Academy. “But I think our biggest lever is really investing in the human systems.”

 

AI for Your Next Career Move — from wondertools.substack.com by Jeremy Caplan
Free tools to explore, research, and interview better

AI tools can serve as patient assistants when you’re looking for a job. Use them to organize your search. Or to challenge your assumptions about potential jobs. They can also help you present your strengths more persuasively. When you’re changing fields, or trying to move up, AI can help you stand out.

1. Visualize Your Career Options
Try: Google’s Career Dreamer

What it is: A free tool for exploring jobs adjacent to yours. See a map of professional fields related to your interests.

How to use it: Start by typing in a current or previous role. Or name a job that interests you. Use up to five words. You can also name a specific organization or industry, if you have one in mind.

Career Dreamer asks what work activities interest you, then maps related career paths. Pick one at a time to explore.

You can then browse actual job openings. Refine the search based on location, company size, or other factors you care about.

 

Make learning accessible to all in higher education — from The Times Higher Education

When accessibility is placed at the heart of teaching and learning, rather than treated as a bolt-on, every student benefits. This week’s spotlight guide offers advice on designing universally accessible learning, in-person and online. Find out how to ease the burden of disability disclosure with universal design for learning, better support neurodivergent students and students with hearing or vision issues, design more accessible assessments and ensure digital tools work for all.

 

 

You Can’t Future-Proof Your Career From AI, But You Can Do This — from builtin.com by Liz Tran
Agility has become the most important skill to cultivate in today’s job market. Here’s how to get started.

Summary: Job seekers facing future panic should prioritize agility over information consumption. Build it by focusing on 30-day action experiments, reframing resumes around durable skills like problem-solving, and embracing uncertainty through stretch applications and real-world feedback.

The antidote is what I call AQ — the agility quotient — which is your capacity to face change, disappointment and uncertainty without losing your footing. Unlike IQ, which measures what you know, AQ measures how fast you adapt when the rules change. Right now, it’s the most important career asset you have. Here’s how to build it.

What Is Agility Quotient (AQ)?
AQ is a measure of an individual’s capacity to adapt quickly when rules, industries or circumstances change. Unlike IQ, which focuses on existing knowledge, AQ emphasizes the ability to face uncertainty and disappointment without losing one’s footing, prioritizing action and iteration over exhaustive planning.

 

The Course Is Dying as the Unit of Learning — from drphilippahardman.substack.com by Dr Philippa Hardman
Here’s why, and what’s replacing it

What the Bleeding Edge Looks like in Practice
So what does “the new stack” actually look like when organisations lean into this? Here are four real patterns already in play.

  • Engineering: from engine courses to in-workflow AI coaching.
  • Product development: from courses to craft-specific agents.
  • Compliance: from annual courses to nudge systems.
  • Enablement systems, not catalogues.

 

Google expands Search Live globally with voice and camera AI — from digitaltrends.com by Varun Mirchandani
The feature is now available in 200+ countries with multilingual support

Think of it as Google Search… but you talk to it. Search Live lets users ask questions using voice or even their phone’s camera, both on Android and iOS, via the Google App, and get spoken responses along with relevant web links.

This is a pretty big shift. Google isn’t just improving search; it’s slowly replacing the whole “type and scroll” experience. With Search Live, users can talk, ask follow-ups, and interact naturally, making it feel more like a conversation than a query. It’s basically ChatGPT-style interaction, but baked right into Google Search.


 
 

From DSC:
Postings/articles such as the one below make me ask: are we shooting ourselves in the foot with AI and recent college graduates? If the bottom rungs continue to disappear, internships and apprenticeships can only go so far. There aren’t enough of them, especially valuable ones. So, as this article points out, there will be threats to the long-term health of our talent pipelines unless we take steps to thwart those impacts, and do so fairly soon.

To me…vocational training and jobs are looking better all the time — i.e., plumbers, carpenters, electricians, mechanics, and more.


Can New Graduates Compete With AI? — from builtin.com by Richard Johnson
The increasing adoption of AI automation is compressing early-career jobs. How should new graduates get a foothold in the economy now?

Summary: AI is hollowing out entry-level roles by automating routine tasks, eliminating a rung on the career ladder. New graduates face intense competition and a rising skill floor. While firms gain short-term productivity, they risk a long-term talent shortage by eliminating junior training grounds.

Conversations about AI have covered all grounds: hype, fear and slop. But while some roll their eyes at yet another automation headline, soon-to-be graduates are watching the labor market with a very different level of urgency. They’re entering a world where the old paradox of needing experience to get experience is colliding with a new reality: AI is absorbing the standardized, routine tasks that once defined entry-level work. The result isn’t just a shift in job descriptions or skill requirements, but rather a structural reshaping of the career pipeline.

Entry-level workers face an outsized disruption to their long-term career trajectories. They have the least buffer to adapt, given their lack of job market experience and the heightened financial pressure to secure a job quickly as student-debt repayment periods loom for recent graduates.

Momentum early in one’s career matters, and the first job on a resume shapes future compensation bands and opportunities. It also serves as a signal of perceived specialization or, at minimum, interest. Losing that foothold has compounding effects on one’s career trajectory.


Also relevant/see:

New Anthropic Institute to Study Risks and Economic Effects of Advanced AI — from campustechnology.com by John K. Waters

Key Takeaways

  • Anthropic has launched the Anthropic Institute, a new research effort focused on the biggest societal challenges posed by more powerful AI systems.
  • The institute will study how advanced AI could affect the economy, the legal system, public safety, and broader social outcomes.
  • Anthropic co-founder Jack Clark will lead the institute in a new role as the company’s head of public benefit.
  • The new unit brings together Anthropic’s existing red-teaming, societal impacts, and economic research work, while adding new hires and new research areas.
 

Here is Chris Martin’s posting on LinkedIn.com:


Here is Dominik Mate Kovacs’ posting on LinkedIn.com:


The AI ‘hivemind’: Why so many student essays sound alike — from hechingerreport.org by Jill Barshay
A study of more than 70 large language models found similar answers to brainstorming and creative writing prompts

The answers were frequently indistinguishable across different models from different companies that have different architectures and use different training data. The metaphors, imagery, word choices, sentence structures — even punctuation — often converged. Jiang’s team called this phenomenon “inter-model homogeneity” and quantified the overlaps and similarities. To drive the point home, Jiang titled her paper “Artificial Hivemind.” The study won a best paper award at the annual Conference on Neural Information Processing Systems in December 2025, one of the premier gatherings for AI research.


AI Has No Moral Compass. Do You? — from michelleweise.substack.com by Michelle Weise & Dana Walsh
Why the Age of AI Demands We Take Character Formation Seriously

Here’s something to chew on:

Anthropic, the company behind Claude — a chatbot with 30 million monthly users — has exactly one person (that we know of) working on AI ethics. One. A young Scottish philosopher is doing the vital work of training a large language model to discern right from wrong.

I don’t say this to shame Anthropic. In fact, Anthropic appears to be the only company (that we know of) being explicit about the moral foundations and reasoning of its chatbot. Hundreds of millions of users worldwide are leveraging tools from other LLM providers that do not appear to have an explicit moral compass cultivated from within.

I raise this because this is yet another example of where we are: extraordinary technical power advancing without an equally strong moral infrastructure to support it.

Why do we keep producing people who are skilled but not wise?

 

Across the divide: reimagining faculty-staff collaboration in higher education — from timeshighereducation.com by Saskia van de Gevel
Academic units do best when they harness different viewpoints – from field scientists and curriculum designers to extension professionals – to drive innovation and relevance. Saskia van de Gevel offers proactive advice

Universities are not sustained by individual leaders or isolated units. They are sustained by teams of people who bring different kinds of expertise to a shared mission. When faculty and professional staff collaborate as genuine partners – aligned around outcomes, clear about roles and committed to mutual respect – institutions become more resilient, innovative and effective.

Also from timeshighereducation.com, see:

Again, we don’t send them 200 CVs. We might send 20, but they’re meticulously shortlisted. The employer saves time, the student feels they are being taken seriously and trust builds quickly on both sides.

And because we work closely with employers, we learn something universities often struggle to find out early enough: what the market is asking for now.

What academics need to know: we can’t do this without you
If I could say one thing to academic colleagues anywhere, it’s that employability can’t sit next to the curriculum. It has to live with it.

 

5 Tech Strategies to Enhance Student-Led Learning — from edutopia.org by Rachelle Dené Poth
While technology has potential to distract students, it can also boost engagement and help them actively demonstrate their learning.

Over the years, I have learned that engagement doesn’t happen simply by adding technology. It increases when we give students more ownership by designing experiences that allow them to build, collaborate, reflect, and teach one another. Depending on how we use it, technology can either amplify engagement or distract from it. Technology can help build students’ confidence in learning, but it can also lead to passivity. When technology is used to amplify students’ voice, choice, and ownership in learning, their engagement will naturally increase.

Here are five strategies and some digital tools that can be used across grade levels and content areas to boost student engagement, build confidence, foster collaboration, and support meaningful learning experiences.


Project-Based Learning (PBL)
Implementing a PBL Design Challenge in Your School — from edutopia.org by Lisa Beck & Kim Mishkin
A weeklong, schoolwide project-based learning challenge encourages students to tackle meaningful problems.

For the past five years, Hudson Lab School (HLS), a K–8 progressive school committed to project-based learning (PBL), has kicked off each school year with an exciting tradition: Design Challenge Week. In five days, students take on a real-world problem, explore each phase of the design process, and present what they created and learned to an authentic audience. Design Challenge Week introduces concepts that students will revisit all year and offers a model for how any educational setting could experiment with PBL on a smaller scale. Even short, well-designed challenges can lead to deeply engaged learning experiences.


How to Give Students Directions They Actually Understand — from edutopia.org by Mary Davenport
Making small changes in your instructions can have a significant impact on students’ understanding and engagement.

No more than a minute after you’ve provided instruction on the day’s targeted content and given students directions for their next task, some brave soul utters the line that brings tired teachers to their knees: “What are we supposed to be doing?”

None of us want this. As teachers, we all want students to fully understand what they’re supposed to be doing so that they can be successful as they do it.

Good news: A few small changes in how we give directions can be the lever that boosts student understanding and engagement.

 

Law Firm AI Adoption: So Many Choices — from abovethelaw.com by Stephen Embry
Firms need to recognize reality, define what their legal professionals need, and then determine how to adopt and govern the use of AI tools.

It’s tough to be a law firm managing partner in the age of AI. So many choices, so little time. It’s like the proverbial kid in the candy store who has so many choices that they either can’t pick out anything or reach for too much. We see evidence of the first option in 8am’s recent outstanding Legal Industry Report, authored by Niki Black.

8am’s Legal Industry Report
One thing that stood out in the report was the discrepancy between individual legal professionals’ use of AI and what firms are doing when it comes to AI adoption and guidance. Almost 75% of those who responded said they were using general purpose AI tools like ChatGPT and Claude for work purposes. That’s pretty significant.


Legalweek: It’s time to re-engineer how legal work is delivered — from legaltechnology.com by Caroline Hill

AI for good
While focusing on the risks of AI going wrong, it is only fair to mention the conversations I had around using AI for good. Two in particular stand out.

The first is the news from Everlaw that its Everlaw for Good Program has, over the past year, supported more than 675 active cases across 235 organisations, and expanded its support to a growing network of non-profit organisations.

The program extends Everlaw’s technology to organisations working to advance access to justice. In a recent survey by Everlaw, 88% of legal aid professionals said they are optimistic about AI’s potential to help narrow the justice gap.

“Mission-driven organizations are increasingly handling complex investigations and litigation with limited resources,” said Joanne Sprague, head of Everlaw for Good. “Expanding access to powerful, easy-to-use technology helps level the playing field so these teams can uncover critical evidence, take on more complex matters, and yield stronger results for the communities they serve.”


LawNext on Location: Visiting Everlaw’s Headquarters For A Conversation with AJ Shankar, Founder and CEO — from lawnext.com by Bob Ambrogi

The bulk of our conversation focuses on generative AI, and how Everlaw has approached it differently than much of the market. Rather than bolting on a chatbot, AJ says, Everlaw embedded AI deliberately throughout the platform — document summarization, coding suggestions, deposition analysis, fact extraction — always grounding responses in the actual documents at hand and citing sources so users can verify the work. The December launch of Deep Dive, which lets litigators pose a question and get a synthesized, cited answer drawn from an entire document corpus in about a minute, is the feature AJ calls a “new era” for discovery — one he genuinely believes represents a categorical shift.

 
© 2025 | Daniel Christian