So much for saving the planet. Climate careers, and many others, evaporate for class of 2025 — from hechingerreport.org by Lawrence Lanahan
The Trump administration is disrupting career paths for new graduates hoping to work in climate and sustainability, international aid, public service and the sciences

As the class of 2025 enters the workforce, the Trump administration has dismantled career pathways for graduates interested in climate and sustainability work, international aid, public service and research across the natural, behavioral and social sciences. Federal jobs are disappearing, and the administration is eliminating grants and agency divisions that sustain university research programs and nonprofits that are crucial to launching careers.

The National Science Foundation, for example, halved graduate research fellowships, canceled some undergraduate research grants, stopped awarding new grants, froze funding for existing ones, and eliminated several hundred grants for focusing on diversity, equity and inclusion. In March, Robert F. Kennedy Jr. announced 10,000 layoffs at his agency, the Department of Health and Human Services; earlier buyouts and firings had already cut another 10,000 jobs.

 

American Microschools: A Sector Analysis 2025 — from microschoolingcenter.org by Don Soifer and Ashley Soifer

Among the report’s findings:

  • 74 percent of microschools have annual tuition and fees at or below $10,000, with 65 percent offering sliding scale tuition and discounts;
  • Among microschools that track academic growth data of students over time, 81 percent reported between 1 and 2 years of academic gains during one school year;
  • Children receive letter grades in just 29 percent of microschools, while observation-based reporting, portfolios, and tracking mastery are the most prevalent methods of tracking their impact;
  • The most important student outcomes for currently-operating microschools are growth in nonacademic learning, children’s happiness in their microschool, skills perceived as needed for future, and academic growth.
 

Why high performers make assertions: The difference between insights, suggestions, and assertions — from newsletter.weskao.com by Wes Kao; w/ thanks to Roberto Ferraro for this posting
An insight is just a starting point. The rare, courageous thing to do is to develop an assertion, i.e. a hypothesis and point of view that answers “so what?”

But the next step is what actually moves the needle. The rare, courageous thing to do is to develop an assertion.

What’s the difference between insights, suggestions, and assertions?

When you point out an insight, you’re calling attention to an observation, something you noticed and wanted to remark on. In response, your colleague could say, “Hmm interesting. That’s nice to know.” They carry on with their day. You carry on with yours. Nothing changes.

When you make a suggestion, you’re putting forth a recommendation. You’re proposing a few different options to choose from. But you’re still not on the hook because your boss ultimately decides what to do. And the person who decides holds the emotional burden of that decision.

When you make an assertion, all of a sudden, things get real. You’re on the hook because there’s more of you in what you’re positing. You’re now advocating for your point of view and trying to convince others to support you.


From DSC:
Perhaps there’s something in here for academics when they write for the journals within their discipline. When I was getting my master’s degree, I hated reading the same ol’ same ol’ –> “…this needs further research, blah, blah, blah.”

I wanted to know what the researcher/author had to actually say about the topic. Too often, they seemed to hold back any kind of thesis or what they believed to be true about a topic. They were far too reserved in my opinion.


 

 

GPT, Claude, Gemini, Grok… Wait, Which One Do I Use Again? — from thebrainyacts.beehiiv.com by Josh Kubicki
Brainyacts #263

So this edition is simple: a quick, practical guide to the major generative AI models available in 2025 so far. What they’re good at, what to use them for, and where they might fit into your legal work—from document summarization to client communication to research support.

From DSC:
This comprehensive, highly informational posting lists what the model is, its strengths, the best legal use cases for it, and responsible use tips as well.


What’s Happening in LegalTech Other than AI? — from legaltalknetwork.com by Dennis Kennedy and Tom Mighell

Of course AI will continue to make waves, but what other important legal technologies do you need to be aware of in 2025? Dennis and Tom give an overview of legal tech tools—both new and old—you should be using for successful, modernized legal workflows in your practice. They recommend solutions for task management, collaboration, calendars, projects, legal research, and more.

Later, the guys answer a listener’s question about online prompt libraries. Are there reputable, useful prompts available freely on the internet? They discuss their suggestions for prompt resources and share why these libraries tend to quickly become outdated.


LawDroid Founder Tom Martin on Building, Teaching and Advising About AI for Legal — from lawnext.com by Bob Ambrogi and Tom Martin

If you follow legal tech at all, you would be justified in suspecting that Tom Martin has figured out how to use artificial intelligence to clone himself.

While running LawDroid, his legal tech company, the Vancouver-based Martin also still manages a law practice in California, oversees an annual legal tech awards program, teaches a law school course on generative AI, runs an annual AI conference, hosts a podcast, and recently launched a legal tech consultancy.

In January 2023, less than two months after ChatGPT first launched, Martin’s company was one of the first to launch a gen AI assistant specifically for lawyers, called LawDroid Copilot. He has since also launched LawDroid Builder, a no-code platform for creating custom AI agents.


Legal training in the age of AI: A leadership imperative — from thomsonreuters.com by The Hon. Maritza Dominguez Braswell, U.S. Magistrate Judge / District of Colorado

In a profession that’s actively contemplating its future in the face of AI, legal organization leaders who demonstrate a genuine desire to invest in the next generation of legal professionals will undoubtedly set themselves apart


Unlocking the power of AI: Opportunities and use cases for law firms — from todaysconveyancer.co.uk

Artificial intelligence (AI) is here. And it’s already reshaping the way law firms operate. Whether automating repetitive tasks, improving risk management, or boosting efficiency, AI presents a genuine opportunity for forward-thinking legal practices. But with new opportunities come new responsibilities. And as firms explore AI tools, it’s essential they consider how to govern them safely and ethically. That’s where an AI policy becomes indispensable.

So, what can AI actually do for your firm right now? Let’s take a closer look.

 

…which links to:

Duolingo will replace contract workers with AI — from theverge.com by Jay Peters
The company is going to be ‘AI-first,’ says its CEO.

Duolingo will “gradually stop using contractors to do work that AI can handle,” according to an all-hands email sent by cofounder and CEO Luis von Ahn announcing that the company will be “AI-first.” The email was posted on Duolingo’s LinkedIn account.

According to von Ahn, being “AI-first” means the company will “need to rethink much of how we work” and that “making minor tweaks to systems designed for humans won’t get us there.” As part of the shift, the company will roll out “a few constructive constraints,” including the changes to how it works with contractors, looking for AI use in hiring and in performance reviews, and that “headcount will only be given if a team cannot automate more of their work.”


Relevant links:

Something strange, and potentially alarming, is happening to the job market for young, educated workers.

According to the New York Federal Reserve, labor conditions for recent college graduates have “deteriorated noticeably” in the past few months, and the unemployment rate now stands at an unusually high 5.8 percent. Even newly minted M.B.A.s from elite programs are struggling to find work. Meanwhile, law-school applications are surging—an ominous echo of when young people used graduate school to bunker down during the great financial crisis.

What’s going on? I see three plausible explanations, and each might be a little bit true.


It’s Time To Get Concerned As More Companies Replace Workers With AI — from forbes.com by Jack Kelly

The new workplace trend is not employee friendly. Artificial intelligence and automation technologies are advancing at blazing speed. A growing number of companies are using AI to streamline operations, cut costs, and boost productivity. Consequently, human workers are facing layoffs, replaced by AI. Like it or not, companies need to make tough decisions, including layoffs, to remain competitive.

Corporations including Klarna, UPS, Duolingo, Intuit and Cisco are replacing laid-off workers with AI and automation. While these technologies enhance productivity, they raise serious concerns about future job security. Many workers are left wondering whether their own jobs will be affected.


The future of career navigation — from medium.com by Sami Tatar

  1. Career navigation market overview

Key takeaway:
Career navigation has remained largely unchanged for decades, relying on personal networks and static job boards. The advent of AI is changing this, offering personalised career pathways, better job matching, democratised job application support, democratised access to career advice/coaching, and tailored skill development to help you get to where you need to be. Hundreds of millions of people start new jobs every year; this transformation opens up a multi-billion-dollar opportunity for innovation in the global career navigation market.

A.4 How will AI disrupt this segment?
Personalised recommendations: AI can consume a vast amount of information (skills, education, career history, even YouTube history and X/Twitter feeds), standardise this data at scale, and then use data models to match candidate characteristics to relevant careers and jobs. In theory, solutions could then go layers deeper, helping you position yourself for those future roles. Currently based in Amsterdam, working in Strategy at Uber, and aiming for a Product role in the future? Here are X, Y, Z specific things YOU can do in your role today to align yourself perfectly. E.g., find opportunities to manage cross-functional projects in your current remit, reach out to Joe Bloggs, also at Uber in Amsterdam, who did Strategy and moved to Product, etc.
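The matching step described above can be made concrete with a toy sketch. This is a hypothetical illustration of the core idea (scoring candidate–role skill overlap and ranking roles by fit), not how any actual product works; real systems would use learned embeddings over far richer signals, and every name here is invented:

```python
def match_score(candidate_skills, role_skills):
    """Jaccard similarity between a candidate's skill set and a role's
    required skills: |intersection| / |union|, in [0, 1]."""
    c, r = set(candidate_skills), set(role_skills)
    union = c | r
    return len(c & r) / len(union) if union else 0.0


def rank_roles(candidate_skills, roles):
    """Given a mapping of role name -> required skills, return role names
    ordered best-fit first."""
    return sorted(roles,
                  key=lambda name: match_score(candidate_skills, roles[name]),
                  reverse=True)


# Illustrative use: a Strategy profile scores closer to Product than to
# Finance because more of its skills carry over.
roles = {
    "Product": ["roadmapping", "sql", "user research"],
    "Finance": ["forecasting", "accounting", "excel"],
}
ranked = rank_roles(["sql", "roadmapping", "stakeholder management"], roles)
```

The "go layers deeper" step would then amount to diffing the candidate's set against the top role's set and turning each missing skill into a concrete suggestion.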


Tales from the Front – What Teachers Are Telling Me at AI Workshops — from aliciabankhofer.substack.com by Alicia Bankhofer
Real conversations, real concerns: What teachers are saying about AI

“Do I really have to use AI?”

No matter the school, no matter the location, when I deliver an AI workshop to a group of teachers, there are always at least a few colleagues thinking (and sometimes voicing), “Do I really need to use AI?”

Nearly three years after ChatGPT 3.5 landed in our lives and disrupted workflows in ways we’re still unpacking, most schools are swiftly catching up. Training sessions, like the ones I lead, are springing up everywhere, with principals and administrators trying to answer the same questions: Which tools should we use? How do we use them responsibly? How do we design learning in this new landscape?

But here’s what surprises me most: despite all the advances in AI technology, the questions and concerns from teachers remain strikingly consistent.

In this article, I want to pull back the curtain on those conversations. These concerns aren’t signs of reluctance – they reflect sincere feelings. And they deserve thoughtful, honest answers.


Welcome To AI Agent World! (Everything you need to know about the AI Agent market.) — from joshbersin.com by Josh Bersin

This week, in advance of major announcements from us and other vendors, I give you a good overview of the AI Agent market, and discuss the new role of AI governance platforms, AI agent development tools, AI agent vendors, and how AI agents will actually manifest and redefine what we call an “application.”

I discuss ServiceNow, Microsoft, SAP, Workday, Paradox, Maki People, and other vendors. My goal today is to “demystify” this space and explain the market, the trends, and why and how your IT department is going to be building a lot of the agents you need. And prepare for our announcements next week!


DeepSeek Unveils Prover V2 — from theaivalley.com

DeepSeek has quietly launched Prover V2, an open-source model built to solve math problems using the Lean 4 proof assistant, which ensures every step of a proof is rigorously verified.

What’s impressive about it?

  • Massive scale: Based on DeepSeek-V3 with 671B parameters using a mixture-of-experts (MoE) architecture, which activates only parts of the model at a time to reduce compute costs.
  • Theorem solving: Uses long context windows (32K+ tokens) to generate detailed, step-by-step formal proofs for a wide range of math problems — from basic algebra to advanced calculus theorems.
  • Research grade: Assists mathematicians in testing new theorems automatically and helps students understand formal logic by generating both Lean 4 code and readable explanations.
  • New benchmark: Introduces ProverBench, a new 325-question benchmark set featuring problems from recent AIME exams and curated academic sources to evaluate mathematical reasoning.
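For readers unfamiliar with Lean 4, "rigorously verified" means each proof is checked by Lean's kernel rather than taken on trust. A minimal illustrative theorem (written for this post, not DeepSeek output) looks like this:

```lean
-- Commutativity of addition on naturals, proved by appeal to a core
-- library lemma. Lean's kernel checks the proof term, so the statement
-- is either machine-verified or rejected; there is no middle ground.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Prover V2's output is this kind of artifact at much larger scale, paired with natural-language explanations of each step.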

Artificial Intelligence: Lessons Learned from a Graduate-Level Final Exam — from er.educause.edu by Craig Westling and Manish K. Mishra

The need for deep student engagement became clear at Dartmouth Geisel School of Medicine when a potential academic-integrity issue revealed gaps in its initial approach to artificial intelligence use in the classroom, leading to significant revisions to ensure equitable learning and assessment.


Deep Research with AI: 9 Ways to Get Started — from wondertools.substack.com by Jeremy Caplan
Practical strategies for thorough, citation-rich AI research


From George Siemens’ “SAIL: Transmutation, Assessment, Robots” e-newsletter on 5/2/25

All indications are that AI, even if it stops advancing, has the capacity to dramatically change knowledge work. Knowing things matters less than being able to navigate and make sense of complex environments. Put another way, sensemaking, meaningmaking, and wayfinding (with their yet to be defined subelements) will be the foundation for being knowledgeable going forward.

That will require being able to personalize learning to each individual learner so that who they are (not what our content is) forms the pedagogical entry point to learning. (DSC: And I would add WHAT THEY WANT to ACHIEVE.) LLMs are particularly good at transmutation. Want to explain AI to a farmer? A sentence or two in a system prompt achieves that. Know that a learner has ADHD? A few small prompt changes and it’s reflected in the way the LLM engages with learning. Talk like a pirate. Speak in the language of Shakespeare. Language changes. All a matter of a small meta comment sent to the LLM. I’m convinced that this capability to change, to transmute, information will become a central part of how LLMs and AI are adopted in education.
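The point about transmutation is mechanically simple: the adaptation lives entirely in a short system prompt, while the model and the material stay the same. A minimal, hypothetical sketch of assembling such a prompt (the helper and its wording are invented for illustration, not any vendor's API):

```python
def build_system_prompt(base_task, audience=None, needs=None, voice=None):
    """Compose a system prompt that adapts the same material to one learner.
    Each optional argument contributes one small 'meta comment' of the kind
    described above; everything else about the request is unchanged."""
    parts = [base_task]
    if audience:
        parts.append(f"Explain everything using examples familiar to {audience}.")
    if needs:
        parts.append(f"The learner has {needs}; use short chunks and frequent recaps.")
    if voice:
        parts.append(f"Write in the voice of {voice}.")
    return " ".join(parts)


# "Explain AI to a farmer, like a pirate" is one function call away:
prompt = build_system_prompt("Teach the basics of AI.",
                             audience="a farmer", voice="a pirate")
```

The resulting string would be sent as the system message of a chat request; swapping `audience` or `voice` re-targets the whole lesson without touching the content pipeline.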

Speaking of Duolingo: it took them 12 years to develop 100 courses. In the last year, they developed an additional 148. AI is an accelerant with an impact on education that is hard to overstate. “Instead of taking years to build a single course with humans the company now builds a base course and uses AI to quickly customize it for dozens of different languages.”


FutureHouse Platform: Superintelligent AI Agents for Scientific Discovery — from futurehouse.org by Michael Skarlinski, Tyler Nadolski, James Braza, Remo Storni, Mayk Caldas, Ludovico Mitchener, Michaela Hinks, Andrew White, &  Sam Rodriques

FutureHouse is launching our platform, bringing the first publicly available superintelligent scientific agents to scientists everywhere via a web interface and API. Try it out for free at https://platform.futurehouse.org.

 

AI agents arrive in US classrooms — from zdnet.com by Radhika Rajkumar
Kira AI’s personalized learning platform is currently being implemented in Tennessee schools. How will it change education?

AI for education is a new but rapidly expanding field. Can it support student outcomes and help teachers avoid burnout?

On Wednesday, AI education company Kira launched a “fully AI-native learning platform” for K-12 education, complete with agents to assist teachers with repetitive tasks. The platform hosts assignments, analyzes progress data, offers administrative assistance, helps build lesson plans and quizzes, and more.

“Unlike traditional tools that merely layer AI onto existing platforms, Kira integrates artificial intelligence directly into every educational workflow — from lesson planning and instruction to grading, intervention, and reporting,” the release explains. “This enables schools to improve student outcomes, streamline operations, and provide personalized support at scale.”

Also relevant/see:

Coursera Founder Andrew Ng’s New Venture Brings A.I. to K–12 Classrooms — from observer.com by Victor Dey
Andrew Ng’s Kira Learning uses A.I. agents to transform K–12 education with tools for teachers, students and administrators.

“Teachers today are overloaded with repetitive tasks. A.I. agents can change that, and free up their time to give more personalized help to students,” Ng said in a statement.

Kira was co-founded by Andrea Pasinetti and Jagriti Agrawal, both longtime collaborators of Ng. The platform embeds A.I. directly into lesson planning, instruction, grading and reporting. Teachers can instantly generate standards-aligned lesson plans, monitor student progress in real time and receive automated intervention strategies when a student falls behind.

Students, in turn, receive on-demand tutoring tailored to their learning styles. A.I. agents adapt to each student’s pace and mastery level, while grading is automated with instant feedback—giving educators time to focus on teaching.


‘Using GenAI is easier than asking my supervisor for support’ — from timeshighereducation.com
Doctoral researchers are turning to generative AI to assist in their research. How are they using it, and how can supervisors and candidates have frank discussions about using it responsibly?

Generative AI is increasingly the proverbial elephant in the supervisory room. As supervisors, you may be concerned about whether your doctoral researchers are using GenAI. It can be a tricky topic to broach, especially when you may not feel confident in understanding the technology yourself.

While the potential impact of GenAI use among undergraduate and postgraduate taught students, especially, is well discussed (and it is increasingly accepted that students and staff need to become “AI literate”), doctoral researchers often slip through the cracks in institutional guidance and policymaking.


AI as a Thought Partner in Higher Education — from er.educause.edu by Brian Basgen

When used thoughtfully and transparently, generative artificial intelligence can augment creativity and challenge assumptions, making it an excellent tool for exploring and developing ideas.

The glaring contrast between the perceived ubiquity of GenAI and its actual use also reveals fundamental challenges associated with the practical application of these tools. This article explores two key questions about GenAI to address common misconceptions and encourage broader adoption and more effective use of these tools in higher education.


AI for Automation or Augmentation of L&D? — from drphilippahardman.substack.com by Dr. Philippa Hardman
An audio summary of my Learning Technologies talk

Like many of you, I spent the first part of this week at Learning Technologies in London, where I was lucky enough to present a session on the current state of AI and L&D.

In this week’s blog post, I summarise what I covered and share an audio summary of my paper for you to check out.


Bridging the AI Trust Gap — from chronicle.com by Ian Wilhelm, Derek Bruff, Gemma Garcia, and Lee Rainie

In a 2024 Chronicle survey, 86 percent of administrators agreed with the statement: “Generative artificial intelligence tools offer an opportunity for higher education to improve how it educates, operates, and conducts research.” In contrast, just 55 percent of faculty agreed, showing the stark divisions between faculty and administrative perspectives on adopting AI.

Among many faculty members, a prevalent distrust of AI persists — and for valid reasons. How will it impact in-class instruction? What does the popularity of generative AI tools portend for the development of critical thinking skills for Gen-Z students? How can institutions, at the administrative level, develop policies to safeguard against students using these technologies as tools for cheating?

Given this increasing ‘trust gap,’ how can faculty and administrators work together to preserve academic integrity as AI seeps into all areas of academia, from research to the classroom?

Join us for “Bridging the AI Trust Gap,” an extended, 75-minute Virtual Forum exploring the trust gap on campus about AI, the contours of the differences, and what should be done about it.

 

Teens, Social Media and Mental Health — from pewresearch.org by Michelle Faverio, Monica Anderson, and Eugenie Park
Most teens credit social media with feeling more connected to friends. Still, roughly 1 in 5 say social media sites hurt their mental health, and growing shares think they harm people their age

Rising rates of poor mental health among youth have been called a national crisis. While this is often linked to factors like the COVID-19 pandemic or poverty, some officials, like former Surgeon General Vivek Murthy, name social media as a major threat to teenagers.

Our latest survey of U.S. teens ages 13 to 17 and their parents finds that parents are generally more worried than their children about the mental health of teenagers today.

And while both groups call out social media’s impact on young people’s well-being, parents are more likely to make this connection.

Still, teens are growing more wary of social media for their peers. Roughly half of teens (48%) say these sites have a mostly negative effect on people their age, up from 32% in 2022. But fewer (14%) think they negatively affect them personally.

 

What are colleges’ legal options when threatened with federal funding cuts? — from highereddive.com/ by Lilah Burke
Higher education experts said colleges could work together or lean on their associations if they take up a legal fight against the Trump administration.

Understand your allies
In fact, colleges may struggle to fight the administration on their own.

“I don’t think that institutions should necessarily fight it by themselves,” said Jeffrey Sun, a higher education and law professor at the University of Louisville. “I don’t think they’ll win.”

What will have more power is several institutions, or even many, working together to fight the attacks on higher education.

“I don’t think we have an option unless we work in collective action,” Sun said.


Harvard University won’t yield to Trump administration’s demands — from highereddive.com by Natalie Schwartz
Alan Garber, the Ivy League institution’s president, said the university wouldn’t forfeit its “independence or its constitutional rights.”

Harvard University President Alan Garber said Monday that officials there would not yield to the Trump administration’s litany of demands to maintain access to federal funding, arguing the federal government had overstepped its authority by issuing the ultimatum. 

“The University will not surrender its independence or relinquish its constitutional rights,” Garber wrote in a community message.

The move tees up a battle between the Ivy League institution and the Trump administration, which threatened the university with the loss of $9 billion in federal funding over what it claimed was a failure to protect Jewish students from antisemitism.


Harvard Professors Sue the Trump Administration While Other Universities Are Targeted — from iblnews.org

Two groups representing Harvard University professors (the American Association of University Professors and the Harvard faculty chapter) filed a lawsuit against the Trump Administration on Friday, saying that the threat to cut billions in federal funding for the institution violates free speech and other First Amendment rights.

The Trump Administration announced two weeks ago that it reviewed about $9 billion in federal funding that Harvard receives and would send a list of demands to unfreeze the money.

In a statement, Andrew Manuel Crespo, a law professor at Harvard and general counsel of the AAUP-Harvard Faculty Chapter, said the “Trump administration’s policies are a pretext to chill universities and their faculties from engaging in speech, teaching, and research that don’t align with President Trump’s views.”


OPINION: For our republic to survive, education leaders must remain firm in the face of authoritarianism — from hechingerreport.org by Jason E. Glass
We face direct threats to the values around access, opportunity and truth our schools are meant to uphold

Across the country, education leaders are being forced to make some tough decisions — to choose between defending core values, such as equity and historical truth, or yielding to political coercion in hopes of avoiding conflict. There is no strategy that does not involve conflict and trade-offs. Every education leader operates in their own political context with unique legal and cultural constraints.

But make no mistake: Inaction is not neutral. Even the decision to do nothing is a choice, one that has consequences.


Northwestern to self-fund federally threatened research — from highereddive.com by Laura Spitalniak
Leaders at the well-known institution said the support would sustain “vital research” until they had a “better understanding of the funding landscape.”

Northwestern University will pull from its coffers to continue funding “vital research” that has been threatened by the Trump administration, the private institution announced Thursday.


Trump is bullying, blackmailing and threatening colleges, and they are just beginning to fight back — from hechingerreport.org by Liz Willen
After Harvard rejected the president’s demands, more university leaders have started to speak out — but many say a bigger response is needed

Many hope it is the beginning of a new resistance in higher education. “Harvard’s move gives others permission to come out on the ice a little,” McGuire said. “This is an answer to the tepid and vacillating presidents who said they don’t want to draw attention to themselves.”

Harvard paved the way for other institutions to stand up to the administration’s demands, Ted Mitchell, president of the American Council on Education, noted in an interview with NPR this week.

Stanford University President Jonathan Levin immediately backed Harvard, noting that “the way to bring about constructive change is not by destroying the nation’s capacity for scientific research, or through the government taking command of a private institution.”

“I tell them, you will never regret doing what is right, but if you allow yourself to be co-opted, you will have regret that you caved to a dictator who doesn’t care about you or your institution.”

 

How People Are Really Using Gen AI in 2025 — from hbr.org by Marc Zao-Sanders



Here’s why you shouldn’t let AI run your company — from theneurondaily.com by Grant Harvey; emphasis DSC

When “vibe-coding” goes wrong… or, a parable on why you shouldn’t “vibe” your entire company.
Cursor, an AI-powered coding tool that many developers love-to-hate, face-planted spectacularly yesterday when its own AI support bot went off-script and fabricated a company policy, leading to a complete user revolt.

Here’s the short version:

  • A bug locked Cursor users out when switching devices.
  • Instead of human help, Cursor’s AI support bot confidently told users this was a new policy (it wasn’t).
  • No human checked the replies—big mistake.
  • The fake news spread, and devs canceled subscriptions en masse.
  • A Reddit thread about it got mysteriously nuked, fueling suspicion.

The reality? Just a bug, plus a bot hallucination… doing maximum damage.

Why it matters: This is what we’d call “vibe-companying”—blindly trusting AI with critical functions without human oversight.

Think about it like this: this was JUST a startup. If more big corporations continue to lay off entire departments, replaced by AI, these already byzantine companies will become increasingly opaque, unaccountable systems where no one, human or AI, fully understands what’s happening or who’s responsible.

Our take? Kafka dude has it right. We need to pay attention to WHAT we’re actually automating. Because automating more bureaucracy at scale, with agents we increasingly don’t understand or don’t double check, can potentially make companies less intelligent—and harder to fix when things inevitably go wrong.


 

 

Thomson Reuters Survey: Over 95% of Legal Professionals Expect Gen AI to Become Central to Workflow Within Five Years — from lawnext.com by Bob Ambrogi

Thomson Reuters today released its 2025 Generative AI in Professional Services Report, and it reveals that legal professionals have become increasingly optimistic about generative AI, with adoption rates nearly doubling over the past year and a growing belief that the technology should be incorporated into legal work.

According to the report, 26% of legal organizations are now actively using gen AI, up from 14% in 2024. While only 15% of law firm respondents say gen AI is currently central to their workflow, a striking 78% believe it will become central within the next five years.


AI-Powered Legal Work Redefined: Libra Launches Major Update for Legal Professionals — from lawnext.com by Bob Ambrogi

Berlin, April 14, 2025 – Berlin-based Legal Tech startup Libra is launching its most comprehensive update to date, leveraging AI to relieve law firms and legal departments of routine tasks, accelerate research, and improve team collaboration. “Libra v2” combines highly developed AI, a modern user interface, and practical tools to set a new standard for efficient and precise work in all legal areas.

“We listened intently to feedback from law firms and in-house teams,” said Viktor von Essen, founder of Libra. “The result is Libra v2: an AI solution that intelligently supports every step of daily legal work – from initial research to final contract review. We want legal experts to be able to fully concentrate on what is essential: excellent legal advice.”


The Three Cs of Teaching Technology to Law Students — from lawnext.com by Bob Ambrogi

In law practice today, technology is no longer optional — it’s essential. As practicing attorneys increasingly rely on technology tools to serve clients, conduct research, manage documents and streamline workflows, the question is often debated: Are law schools adequately preparing students for this reality?

Unfortunately, for the majority of law schools, the answer is no. But that only begs the question: What should they be doing?

A coincidence of events last week had me thinking about law schools and legal tech, chief among them my attendance at LIT Con, Suffolk Law School’s annual conference to showcase legal innovation and technology — with a portion of it devoted to access-to-justice projects developed by Suffolk Law students themselves.


While not from Bob, I’m also going to include this one here:

Your AI Options: 7 Considerations Before You Buy — from artificiallawyer.com by Liza Pestillos-Ocat

But here’s the problem: not all AI is useful and not all of it is built for the way your legal team works.

Most firms aren’t asking whether they should use AI because they already are. The real question now is what comes next? How do you expand the value of AI across more teams, more matters, and more workflows without introducing unnecessary risk, complexity, or cost?

To get this right, legal professionals need to understand which tools will solve real problems and deliver the most value to their team. That starts with asking better questions, including the ones that follow, before making your next investment in AI for lawyers.

 

The 2025 AI Index Report — from Stanford University’s Human-Centered Artificial Intelligence Lab (hai.stanford.edu); item via The Neuron

Top Takeaways

  1. AI performance on demanding benchmarks continues to improve.
  2. AI is increasingly embedded in everyday life.
  3. Business is all in on AI, fueling record investment and usage, as research continues to show strong productivity impacts.
  4. The U.S. still leads in producing top AI models—but China is closing the performance gap.
  5. The responsible AI ecosystem evolves—unevenly.
  6. Global AI optimism is rising—but deep regional divides remain.
  7. …and several more

Also see:

The Neuron’s take on this:

So, what should you do? You really need to start trying out these AI tools. They’re getting cheaper and better, and they can genuinely help save time or make work easier—ignoring them is like ignoring smartphones ten years ago.

Just keep two big things in mind:

  1. Making the next super-smart AI costs a crazy amount of money and uses tons of power (seriously, they’re buying nuclear plants and pushing coal again!).
  2. Companies are still figuring out how to make AI perfectly safe and fair—because it still makes mistakes.

So, use the tools, find what helps you, but don’t trust them completely.

We’re building this plane mid-flight, and Stanford’s report card is just another confirmation that we desperately need better safety checks before we hit major turbulence.


Addendum on 4/16:

 

Essential AI tools for better work — from wondertools.substack.com by Jeremy Caplan
My favorite tactics for making the most of AI — a podcast conversation

AI tools I consistently rely on (areas covered below)

  • Research and analysis
  • Communication efficiency
  • Multimedia creation

AI tactics that work surprisingly well 

1. Reverse interviews
Instead of just querying the AI, have it interview you: “Give it a little context about what you’re focusing on and what you’re interested in, and then ask it to interview you to elicit your own insights.”

This approach helps extract knowledge from yourself, not just from the AI. Sometimes we need that guide to pull ideas out of ourselves.
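As a rough illustration (not from the article), the reverse-interview tactic can be sketched as a prompt template in the OpenAI-style chat-message format; the function name and wording below are my own, not Caplan’s:

```python
def reverse_interview_messages(context: str, focus: str) -> list[dict]:
    """Build a chat-message list that flips the usual roles:
    the model asks the questions and the human supplies the insights."""
    system = (
        "You are an interviewer. The user will share some context and a focus. "
        "Do not answer or lecture. Ask one thoughtful, open-ended question at "
        "a time to draw out the user's own insights, and follow up on each answer."
    )
    user = (
        f"Context: {context}\n"
        f"I'm currently focusing on: {focus}\n"
        "Please interview me about this, one question at a time."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]
```

Passing the returned list to any chat-completion endpoint starts the session with the model in the interviewer’s seat rather than the expert’s.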


OpenAI’s Deep Research Agent Is Coming for White-Collar Work — from wired.com by Will Knight
The research-focused agent shows how a new generation of more capable AI models could automate some office tasks.

Isla Fulford, a researcher at OpenAI, had a hunch that Deep Research would be a hit even before it was released.

Fulford had helped build the artificial intelligence agent, which autonomously explores the web, deciding for itself what links to click, what to read, and what to collate into an in-depth report. OpenAI first made Deep Research available internally; whenever it went down, Fulford says, she was inundated with queries from colleagues eager to have it back. “The number of people who were DMing me made us pretty excited,” says Fulford.

Since going live to the public on February 2, Deep Research has proven to be a hit with many users outside the company too.
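For intuition only, the loop the article describes (fetch a page, decide what to read, decide what to click, collate a report) can be caricatured in a few lines. Everything here is a hypothetical sketch, not OpenAI’s implementation; `fetch` and `relevant` are caller-supplied stand-ins for real retrieval and judgment:

```python
from typing import Callable

def research_agent(
    start_url: str,
    fetch: Callable[[str], tuple[str, list[str]]],   # returns (page text, outgoing links)
    relevant: Callable[[str], bool],                 # the agent's "should I keep this?" judgment
    max_pages: int = 5,
) -> str:
    """Toy breadth-first research loop: read pages, keep relevant notes,
    queue up the links worth following, and collate a report."""
    queue, seen, notes = [start_url], set(), []
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        text, links = fetch(url)      # "read" the page
        if relevant(text):            # decide what to collate
            notes.append(f"- {url}: {text}")
        queue.extend(links)           # decide what to click next
    return "Report:\n" + "\n".join(notes)
```

The real system replaces both callables with model-driven decisions and adds many safeguards, but the fetch/judge/queue skeleton is the essence of this class of agent.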


Nvidia to open quantum computing research center in Boston — from seekingalpha.com by Ravikash Bakolia

Nvidia (NASDAQ:NVDA) will open a quantum computing research lab in Boston which is expected to start operations later this year.

The Nvidia Accelerated Quantum Research Center, or NVAQC, will integrate leading quantum hardware with AI supercomputers, enabling what is known as accelerated quantum supercomputing, said the company in a March 18 press release.

Nvidia’s CEO Jensen Huang also made this announcement on Thursday at the company’s first-ever Quantum Day at its annual GTC event.


French quantum computer firm Pasqal links up with NVIDIA — from reuters.com

PARIS, March 21 (Reuters) – Pasqal, a fast-growing French quantum computing start-up, announced on Friday a partnership with chip giant Nvidia (NVDA.O) whereby Pasqal’s customers would gain access to more tools to develop quantum applications.

Pasqal said it would connect its quantum computing units and cloud platform to NVIDIA’s open-source CUDA-Q platform.


Introducing next-generation audio models in the API — from openai.com
A new suite of audio models to power voice agents, now available to developers worldwide.

Today, we’re launching new speech-to-text and text-to-speech audio models in the API—making it possible to build more powerful, customizable, and intelligent voice agents that offer real value. Our latest speech-to-text models set a new state-of-the-art benchmark, outperforming existing solutions in accuracy and reliability—especially in challenging scenarios involving accents, noisy environments, and varying speech speeds. These improvements increase transcription reliability, making the models especially well-suited for use cases like customer call centers, meeting note transcription, and more.


 

8 Weeks Left to Prepare Students for the AI-Enhanced Workplace — from insidehighered.com by Ray Schroeder
We are down to the final weeks to fully prepare students for entry into the AI-enhanced workplace. Are your students ready?

The urgent task facing those of us who teach and advise students, whether they be degree program or certificate seeking, is to ensure that they are prepared to enter (or re-enter) the workplace with skills and knowledge that are relevant to 2025 and beyond. One of the first skills to cultivate is an understanding of what kinds of services this emerging technology can provide to enhance the worker’s productivity and value to the institution or corporation.

Given that short period of time, coupled with the need to cover the scheduled material in the syllabus, I recommend that we consider merging AI use into authentic assignments and assessments, supplementary modules, and other resources that prepare students for AI.


Learning Design in the Era of Agentic AI — from drphilippahardman.substack.com by Dr Philippa Hardman
Aka, how to design online async learning experiences that learners can’t afford to delegate to AI agents

The point I put forward was that the problem is not AI’s ability to complete online async courses, but that online async courses deliver so little value to our learners that they delegate their completion to AI.

The harsh reality is that this is not an AI problem — it is a learning design problem.

However, this realisation presents us with an opportunity that, on the whole, we seem keen to embrace. Rather than seeking ways to block AI agents, we largely agree that we should use this moment to reimagine online async learning itself.



8 Schools Innovating With Google AI — Here’s What They’re Doing — from forbes.com by Dan Fitzpatrick

While fears of AI replacing educators swirl in the public consciousness, a cohort of pioneering institutions is demonstrating a far more nuanced reality. These eight universities and schools aren’t just experimenting with AI, they’re fundamentally reshaping their educational ecosystems. From personalized learning in K-12 to advanced research in higher education, these institutions are leveraging Google’s AI to empower students, enhance teaching, and streamline operations.



 

Introducing NextGenAI: A consortium to advance research and education with AI — from openai.com; via Claire Zau
OpenAI commits $50M in funding and tools to leading institutions.

Today, we’re launching NextGenAI, a first-of-its-kind consortium with 15 leading research institutions dedicated to using AI to accelerate research breakthroughs and transform education.

AI has the power to drive progress in research and education—but only when people have the right tools to harness it. That’s why OpenAI is committing $50M in research grants, compute funding, and API access to support students, educators, and researchers advancing the frontiers of knowledge.

Uniting institutions across the U.S. and abroad, NextGenAI aims to catalyze progress at a rate faster than any one institution would alone. This initiative is built not only to fuel the next generation of discoveries, but also to prepare the next generation to shape AI’s future.


 ‘I want him to be prepared’: why parents are teaching their gen Alpha kids to use AI — from theguardian.com by Aaron Mok; via Claire Zau
As AI grows increasingly prevalent, some are showing their children tools from ChatGPT to Dall-E to learn and bond

“My goal isn’t to make him a generative AI wizard,” White said. “It’s to give him a foundation for using AI to be creative, build, explore perspectives and enrich his learning.”

White is part of a growing number of parents teaching their young children how to use AI chatbots so they are prepared to deploy the tools responsibly as personal assistants for school, work and daily life when they’re older.

 

Blind Spot on AI — from the-job.beehiiv.com by Paul Fain
Office tasks are being automated now, but nobody has answers on how education and worker upskilling should change.

Students and workers will need help adjusting to a labor market that appears to be on the verge of a historic disruption as many business processes are automated. Yet job projections and policy ideas are sorely lacking.

The benefits of agentic AI are already clear for a wide range of organizations, including small nonprofits like CareerVillage. But the ability to automate a broad range of business processes means that education programs and skills training for knowledge workers will need to change. And as Chung writes in a must-read essay, we have a blind spot with predicting the impacts of agentic AI on the labor market.

“Without robust projections,” he writes, “policymakers, businesses, and educators won’t be able to come to terms with how rapidly we need to start this upskilling.”

 
© 2025 | Daniel Christian