Reflections on “Are You Ready for the AI University? Everything is about to change.” [Latham]

Are You Ready for the AI University? Everything is about to change. — from chronicle.com by Scott Latham

Over the course of the next 10 years, AI-powered institutions will rise in the rankings. US News & World Report will factor a college’s AI capabilities into its calculations. Accrediting agencies will assess the degree of AI integration into pedagogy, research, and student life. Corporations will want to partner with universities that have demonstrated AI prowess. In short, we will see the emergence of the AI haves and have-nots.

What’s happening in higher education today has a name: creative destruction. The economist Joseph Schumpeter coined the term in 1942 to describe how innovation can transform industries. That typically happens when an industry has both a dysfunctional cost structure and a declining value proposition. Both are true of higher education.

Out of the gate, professors will work with technologists to get AI up to speed on specific disciplines and pedagogy. For example, AI could be “fed” course material on Greek history or finance; human professors would then guide it as it sorts through the material, helping it understand the structure of the discipline and then develop lectures, videos, supporting documentation, and assessments.

In the near future, if a student misses class, they will be able to watch a recording that an AI bot captured. Or the AI bot will find a similar lecture from another professor at another accredited university. If a student needs tutoring, an AI bot will be ready to help any time, day or night. Similarly, if a student is going on a trip and wishes to take an exam on the plane, they will be able to log on and complete the AI-designed and administered exam. Students will no longer be bound by a rigid class schedule. Instead, they will set the schedule that works for them.

Early and mid-career professors who hope to survive will need to adapt and learn how to work with AI. They will need to immerse themselves in research on AI and pedagogy and understand its effect on the classroom. 

From DSC:
I had a very difficult time deciding which excerpts to include; this solid article offers many more worth thinking about. While I don’t agree with several things in it, EVERY professor, president, dean, and administrator working within higher education today needs to read this article and seriously consider what Scott Latham is saying.

Change is already here, but according to Scott, we haven’t seen anything yet. I agree with him; as a futurist, I have to consider the potential scenarios that Scott lays out for AI’s creative destruction of higher education. Scott asserts that some significant impacts are coming for faculty members, doctoral students, and graduate/teaching assistants (and, I would add, Teaching & Learning Centers and IT Departments). But he doesn’t stop there. He also brings in presidents, deans, and other members of leadership teams.

There are a few places where Scott and I differ.

  • The foremost one is the importance of the human element — i.e., the human faculty member and students’ learning preferences. I think many (most?) students and lifelong learners will want to learn from a human being. IBM abandoned their 5-year, $100M ed push last year and one of the key conclusions was that people want to learn from — and with — other people:

To be sure, AI can do sophisticated things such as generating quizzes from a class reading and editing student writing. But the idea that a machine or a chatbot can actually teach as a human can, he said, represents “a profound misunderstanding of what AI is actually capable of.” 

Nitta, who still holds deep respect for the Watson lab, admits, “We missed something important. At the heart of education, at the heart of any learning, is engagement. And that’s kind of the Holy Grail.”

— Satya Nitta, a longtime computer researcher at IBM’s Watson Research Center in Yorktown Heights, NY

By the way, it isn’t easy for me to write this, as I wanted AI and other related technologies to be able to do just what IBM was hoping they would do.

  • Also, I would use the term learning preferences where Scott uses the term learning styles.

Scott also mentions:

“In addition, faculty members will need to become technologists as much as scholars. They will need to train AI in how to help them build lectures, assessments, and fine-tune their classroom materials. Further training will be needed when AI first delivers a course.”

It has been my experience from working with faculty members for over 20 years that not all faculty members want to become technologists. They may not have the time, interest, and/or aptitude to become one (and vice versa for technologists).

That all said, Scott relays many things that I have reflected upon and relayed for years now via this Learning Ecosystems blog and also via The Learning from the Living [AI-Based Class] Room vision — the use of AI to offer personalized learning and to bring down the costs of obtaining credentials, the rising costs of higher education, the need to provide new business models and emerging technologies that are devoted more to lifelong learning, plus several other things.

So this article is definitely worth your time to read — especially if you are working within higher education or are considering a career therein!


Addendum later on 4/10/25:

U-M’s Ross School of Business, Google Public Sector launch virtual teaching assistant pilot program — from news.umich.edu by Jeff Karoub; via Paul Fain

Google Public Sector and the University of Michigan’s Ross School of Business have launched an advanced Virtual Teaching Assistant pilot program aimed at improving personalized learning and enlightening educators on artificial intelligence in the classroom.

The AI technology, aided by Google’s Gemini chatbot, provides students with all-hours access to support and self-directed learning. The Virtual TA represents the next generation of educational chatbots, serving as a sophisticated AI learning assistant that instructors can use to modify their specific lessons and teaching styles.

The Virtual TA facilitates self-paced learning for students, provides on-demand explanations of complex course concepts, guides them through problem-solving, and acts as a practice partner. It’s designed to foster critical thinking by never giving away answers, ensuring students actively work toward solutions.

 

The 2025 AI Index Report — from Stanford University’s Human-Centered Artificial Intelligence Lab (hai.stanford.edu); item via The Neuron

Top Takeaways

  1. AI performance on demanding benchmarks continues to improve.
  2. AI is increasingly embedded in everyday life.
  3. Business is all in on AI, fueling record investment and usage, as research continues to show strong productivity impacts.
  4. The U.S. still leads in producing top AI models—but China is closing the performance gap.
  5. The responsible AI ecosystem evolves—unevenly.
  6. Global AI optimism is rising—but deep regional divides remain.
  7. …and several more

Also see:

The Neuron’s take on this:

So, what should you do? You really need to start trying out these AI tools. They’re getting cheaper and better, and they can genuinely help save time or make work easier—ignoring them is like ignoring smartphones ten years ago.

Just keep two big things in mind:

  1. Making the next super-smart AI costs a crazy amount of money and uses tons of power (seriously, they’re buying nuclear plants and pushing coal again!).
  2. Companies are still figuring out how to make AI perfectly safe and fair—cause it still makes mistakes.

So, use the tools, find what helps you, but don’t trust them completely.

We’re building this plane mid-flight, and Stanford’s report card is just another confirmation that we desperately need better safety checks before we hit major turbulence.

 

The 2025 ABA Techshow Startup Alley Pitch Competition Ended In A Tie – Here Are The Winners — from lawnext.com by Bob Ambrogi

This year, two startups ended up with an equal number of votes for the top spot:

  • Case Crafter, a company from Norway that helps legal professionals build compelling visual timelines based on case files and evidence.
  • Querious, a product that provides attorneys with real-time insights during client conversations into legal issues, relevant content, and suggested questions and follow-ups.


AI academy gives law students a head start on legal tech, says OBA innovator — from canadianlawyermag.com by Branislav Urosevic

The Ontario Bar Association has recently launched a hands-on AI learning platform tailored for lawyers. Called the AI Academy, the initiative is designed to help legal professionals explore, experiment with, and adopt AI tools relevant to their practice.

Colin Lachance, OBA’s innovator-in-residence and the lead designer of the platform, says that although the AI Academy was built for practising lawyers, it is also well-suited for law students.


 

Uplimit raises stakes in corporate learning with suite of AI agents that can train thousands of employees simultaneously — from venturebeat.com by Michael Nuñez

Uplimit unveiled a suite of AI-powered learning agents today designed to help companies rapidly upskill employees while dramatically reducing administrative burdens traditionally associated with corporate training.

The San Francisco-based company announced three sets of purpose-built AI agents that promise to change how enterprises approach learning and development: skill-building agents, program management agents, and teaching assistant agents. The technology aims to address the growing skills gap as AI advances faster than most workforces can adapt.

“There is an unprecedented need for continuous learning—at a scale and speed traditional systems were never built to handle,” said Julia Stiglitz, CEO and co-founder of Uplimit, in an interview with VentureBeat. “The companies best positioned to thrive aren’t choosing between AI and their people—they’re investing in both.”


Introducing Claude for Education — from anthropic.com

Today we’re launching Claude for Education, a specialized version of Claude tailored for higher education institutions. This initiative equips universities to develop and implement AI-enabled approaches across teaching, learning, and administration—ensuring educators and students play a key role in actively shaping AI’s role in society.

As part of announcing Claude for Education, we’re introducing:

  1. Learning mode: A new Claude experience that guides students’ reasoning process rather than providing answers, helping develop critical thinking skills
  2. University-wide Claude availability: Full campus access agreements with Northeastern University, London School of Economics and Political Science (LSE), and Champlain College, making Claude available to all students
  3. Academic partnerships: Joining Internet2 and working with Instructure to embed AI into teaching & learning with Canvas LMS
  4. Student programs: A new Claude Campus Ambassadors program along with an initiative offering API credits for student projects

A comment on this from The Rundown AI:

Why it matters: Education continues to grapple with AI, but Anthropic is flipping the script by making the tech a partner in developing critical thinking rather than an answer engine. While the controversy over its use likely isn’t going away, this generation of students will have access to the most personalized, high-quality learning tools ever.


Should College Graduates Be AI Literate? — from chronicle.com by Beth McMurtrie (behind a paywall)
More institutions are saying yes. Persuading professors is only the first barrier they face.

Last fall one of Jacqueline Fajardo’s students came to her office, eager to tell her about an AI tool that was helping him learn general chemistry. Had she heard of Google NotebookLM? He had been using it for half a semester in her honors course. He confidently showed her how he could type in the learning outcomes she posted for each class and the tool would produce explanations and study guides. It even created a podcast based on an academic paper he had uploaded. He did not feel it was important to take detailed notes in class because the AI tool was able to summarize the key points of her lectures.


Showing Up for the Future: Why Educators Can’t Sit Out the AI Conversation — from marcwatkins.substack.com with a guest post from Lew Ludwig

The Risk of Disengagement
Let’s be honest: most of us aren’t jumping headfirst into AI. At many of our institutions, it’s not a gold rush—it’s a quiet standoff. But the group I worry most about isn’t the early adopters. It’s the faculty who’ve decided to opt out altogether.

That choice often comes from a place of care. Concerns about data privacy, climate impact, exploitative labor, and the ethics of using large language models are real—and important. But choosing not to engage at all, even on ethical grounds, doesn’t remove us from the system. It just removes our voices from the conversation.

And without those voices, we risk letting others—those with very different priorities—make the decisions that shape what AI looks like in our classrooms, on our campuses, and in our broader culture of learning.



Turbocharge Your Professional Development with AI — from learningguild.com by Dr. RK Prasad

You’ve just mastered a few new eLearning authoring tools, and now AI is knocking on the door, offering to do your job faster, smarter, and without needing coffee breaks. Should you be worried? Or excited?

If you’re a Learning and Development (L&D) professional today, AI is more than just a buzzword—it’s transforming the way we design, deliver, and measure corporate training. But here’s the good news: AI isn’t here to replace you. It’s here to make you better at what you do.

The challenge is to harness its potential to build digital-ready talent, not just within your organization but within yourself.

Let’s explore how AI is reshaping L&D strategies and how you can leverage it for professional development.


5 Recent AI Notables — from automatedteach.com by Graham Clay

1. OpenAI’s New Image Generator
What Happened: OpenAI integrated a much more powerful image generator directly into GPT-4o, making it the default image creator in ChatGPT. Unlike previous image models, this one excels at accurately rendering text in images, precise visualization of diagrams/charts, and multi-turn image refinement through conversation.

Why It’s Big: For educators, this represents a significant advancement in creating educational visuals, infographics, diagrams, and other instructional materials with unprecedented accuracy and control. It’s not perfect, but you can now quickly generate custom illustrations that accurately display mathematical equations, chemical formulas, or process workflows — previously a significant hurdle in digital content creation — without requiring graphic design expertise or expensive software. This capability dramatically reduces the time between conceptualizing a visual aid and implementing it in course materials.


The 4 AI modes that will supercharge your workflow — from aiwithallie.beehiiv.com by Allie K. Miller
The framework most people and companies won’t discover until 2026


 

Five Legal Tech Insights From New York — from artificiallawyer.com by Richard Tromans
A week spent in Manhattan gave Artificial Lawyer plenty to think about. Here are five insights inspired by a series of ‘New York moments’, often about legal AI.

On the way back from Paddington the cab driver was also questioned on the topic. He replied with wisdom: ‘When they made Heathrow Express a lot of us feared it would take away work. The funny thing is, it pushes more people to Paddington and generates a steady flow of fares. Before, you might spend ages getting to Heathrow with one passenger and sometimes have to drive all the way back with no fare.’

The end result: the effort to increase speed and efficiency ended up making the taxi drivers of London much happier and their lives more flexible. Whereas the taxi drivers of New York remain stuck doing huge, one-off journeys, while the general public suffers high costs and slow – and unpredictable – travel times.

Now, one wonders whether there could be a connection to how the legal world works…?


ABA TECHSHOW 2025 to spotlight future of legal technology — from americanbar.org
Artificial intelligence, cloud-based practice management, data privacy and e-discovery will be among the hot topics featured at the American Bar Association TECHSHOW 2025, which spotlights the most useful and practical technologies available in the legal industry, April 2-5 in Chicago.


Legaltech leaders roundtable: The challenges and emerging best practices of GenAI adoption — from legaltechnology.com

One of the key themes to emerge was the need to encourage creativity and open-mindedness around use cases. Conan Hines, Fried Frank’s director of practice innovation, said: “I felt a lot of ‘imaginative play’ vibes. This is where we give lawyers secure AI tools and support to explore the possibilities. The support is even more interesting as innovative teams are complementing their current staff with behavioural science and anthropological approaches to unlock this potential.”


Inhouse World Is Embracing Legal AI – Survey — from artificiallawyer.com

 

7 ways to use ChatGPT’s new image AI — from wondertools.substack.com by Jeremy Caplan
Transform your ideas into strong visuals


  • Cartoons
  • Infographics
  • Posters
  • …plus several more

 

MIT Reveals 2025 Breakthrough Tech At SXSW: What It Means For Legal — from abovethelaw.com by Stephen Embry
The future isn’t just about adopting new technology — it’s about strategically applying it to solve the right problems.

Why This Matters for Law and Legal Tech
Firth emphasized that one of the key criteria for selecting technologies is their broader relevance — what problem do they solve? Here’s how some of these breakthroughs could impact the legal industry:

Small Language Models and Legal AI – Unlike large AI models trained on vast public datasets, small language models can be built on private, secure datasets, making them ideal for legal applications. Law firms and in-house legal teams could develop AI tools trained on their own cases and internal documents, improving efficiency while maintaining confidentiality. These models also require far less computational power, making them more practical and cost-effective.

These models have many applications for law. They could be used on large e-discovery data sets. They could be used to access a law firm’s past efforts. They could mine client data to provide answers to legal questions efficiently. For that matter, they could allow in-house legal teams to answer questions from company data without engaging outside counsel on certain issues.

 

AI in Education Survey: What UK and US Educators Think in 2025 — from twinkl.com
As artificial intelligence (AI) continues to shape the world around us, Twinkl conducted a large-scale survey between January 15th and January 22nd to explore its impact on the education sector, as well as the work lives of teachers across the UK and the USA.

Teachers’ use of AI for work continues to rise
Twinkl’s survey asked teachers whether they were currently using AI for work purposes. Comparing these findings to similar surveys over recent years shows the use of AI tools by teachers has seen a significant increase across both the UK and USA.

  • According to two UK surveys by the National Literacy Trust – 30% of teachers used generative AI in 2023 and nearly half (47.7%) in 2024. Twinkl’s survey indicates that AI adoption continues to rise rapidly, with 60% of UK educators currently integrating it into their work lives in 2025.
  • Similarly, with 62% of US teachers currently using AI for work, uptake appears to have risen greatly in the past 12 months, with just 25% saying they were leveraging the new technology in the 2023-24 school year according to a RAND report.
  • Teachers are using AI more for work than in their personal lives: In the UK, personal usage drops to 43% (from 60% at school).  In the US, 52% are using AI for non-work purposes (versus 62% in education settings).

    60% of UK teachers and 62% of US teachers use AI in their work life in 2025.

 

Stat(s) Of The Week: A Big Gap In Legal Tech Satisfaction — from abovethelaw.com by Jeremy Barke
Comparing sentiment across the pond. 

Legal tech users in the U.S. and the U.K. report widely different levels of satisfaction with their systems, according to a new survey, raising questions about how companies are meeting lawyers’ needs.

According to “The State of Legal Tech Adoption” report by London-based Definely, 51% of U.S. respondents say they’re satisfied with the ROI of their legal technology, while only 22% of U.K. respondents say the same.


Legal tech company Clio acquires AI-focused platform specializing in large firms — from abajournal.com by Danielle Braff

Legal technology company Clio announced [on 3/13/25] that it acquired ShareDo, an artificial intelligence-focused platform specializing in large law firms.

The move represents a major departure for Clio, which was founded in 2008 and is based in Vancouver, British Columbia. The practice management software platform originally focused on solo, small and midsize firms.

“ShareDo has built a powerhouse, proving that large firms are hungry for smarter, faster and more flexible technology,” said Jack Newton, the CEO and founder of Clio, in a statement. “The large law firm market is on the brink of a major shift, and this acquisition cements our role in leading that change.”


How Wexler AI is transforming legal fact analysis and case strategy — from tech.eu by Cate Lawrence
Wexler AI has developed an AI-embedded platform that enables lawyers to uncover key facts, identify inconsistencies, and streamline case preparation. 

Its core functionalities include:

  • Advanced fact extraction and analysis: The system can process up to 500,000 documents simultaneously, surfacing critical facts and connections that might otherwise go unnoticed.
  • Chronology creation: Lawyers collaborate with Wexler AI to construct detailed timelines from extensive document sets, ensuring transparency in how key facts are selected and connected.
  • Inconsistency mapping: The AI detects contradictions between testimony and evidence, enhancing cross-examination and case strategy development.
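
Wexler’s internal methods aren’t public, but the chronology-creation idea above can be illustrated with a minimal sketch: scan each document for dates, pair every date with the sentence it came from (so each timeline entry stays traceable to its source passage), and sort. The function name, the ISO-only date pattern, and the sentence splitter are all simplifying assumptions of mine, not Wexler’s implementation.

```python
import re
from datetime import date

# Matches ISO-style dates such as 2023-05-02; real systems handle many more formats.
DATE_RE = re.compile(r"\b(\d{4})-(\d{2})-(\d{2})\b")

def build_chronology(documents):
    """Scan document texts for ISO-format dates and return a sorted timeline.

    Each entry pairs the extracted date with its document id and the sentence
    it came from, so a reviewer can trace every fact back to its source.
    """
    timeline = []
    for doc_id, text in documents.items():
        # Naive sentence split on terminal punctuation followed by whitespace.
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            for y, m, d in DATE_RE.findall(sentence):
                timeline.append((date(int(y), int(m), int(d)), doc_id, sentence.strip()))
    return sorted(timeline)  # chronological order
```

With a couple of toy “case files,” `build_chronology({"email_1": "The contract was signed on 2023-05-02.", "memo": "Payment arrived on 2023-04-30. A dispute began on 2023-06-15."})` returns the three events in date order, earliest first. Inconsistency mapping would then layer on top of such a timeline, flagging entries whose claims conflict.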

 

Essential AI tools for better work — from wondertools.substack.com by Jeremy Caplan
My favorite tactics for making the most of AI — a podcast conversation

AI tools I consistently rely on (areas covered mentioned below)

  • Research and analysis
  • Communication efficiency
  • Multimedia creation

AI tactics that work surprisingly well 

1. Reverse interviews
Instead of just querying AI, have it interview you. “Give it a little context about what you’re focusing on and what you’re interested in, and then ask it to interview you to elicit your own insights.”

This approach helps extract knowledge from yourself, not just from the AI. Sometimes we need that guide to pull ideas out of ourselves.
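
In practice, the reverse-interview tactic is just a prompt pattern. A minimal sketch of one way to assemble it (the function name and template wording are my own, not from the podcast):

```python
def reverse_interview_prompt(context: str, focus: str) -> str:
    """Assemble a 'reverse interview' prompt in which the model asks the questions.

    `context` and `focus` are supplied by the user; the template wording is
    illustrative, not a prescribed formula.
    """
    return (
        f"Context: {context}\n"
        f"I'm currently focusing on: {focus}\n\n"
        "Rather than answering me, interview me. Ask one question at a time "
        "to draw out my own insights, and wait for my reply before asking the next."
    )
```

For example, `reverse_interview_prompt("I run a weekly newsletter.", "growing my audience")` produces a prompt you can paste into any chatbot to start the interview.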


OpenAI’s Deep Research Agent Is Coming for White-Collar Work — from wired.com by Will Knight
The research-focused agent shows how a new generation of more capable AI models could automate some office tasks.

Isla Fulford, a researcher at OpenAI, had a hunch that Deep Research would be a hit even before it was released.

Fulford had helped build the artificial intelligence agent, which autonomously explores the web, deciding for itself what links to click, what to read, and what to collate into an in-depth report. OpenAI first made Deep Research available internally; whenever it went down, Fulford says, she was inundated with queries from colleagues eager to have it back. “The number of people who were DMing me made us pretty excited,” says Fulford.

Since going live to the public on February 2, Deep Research has proven to be a hit with many users outside the company too.


Nvidia to open quantum computing research center in Boston — from seekingalpha.com by Ravikash Bakolia

Nvidia (NASDAQ:NVDA) will open a quantum computing research lab in Boston which is expected to start operations later this year.

The Nvidia Accelerated Quantum Research Center, or NVAQC, will integrate leading quantum hardware with AI supercomputers, enabling what is known as accelerated quantum supercomputing, said the company in a March 18 press release.

Nvidia’s CEO Jensen Huang also made this announcement on Thursday at the company’s first-ever Quantum Day at its annual GTC event.


French quantum computer firm Pasqal links up with NVIDIA — from reuters.com

PARIS, March 21 (Reuters) – Pasqal, a fast-growing French quantum computer start-up company, announced on Friday a partnership with chip giant Nvidia (NVDA.O) whereby Pasqal’s customers would gain access to more tools to develop quantum applications.

Pasqal said it would connect its quantum computing units and cloud platform onto NVIDIA’s open-source platform called CUDA-Q.


Introducing next-generation audio models in the API — from openai.com
A new suite of audio models to power voice agents, now available to developers worldwide.

Today, we’re launching new speech-to-text and text-to-speech audio models in the API—making it possible to build more powerful, customizable, and intelligent voice agents that offer real value. Our latest speech-to-text models set a new state-of-the-art benchmark, outperforming existing solutions in accuracy and reliability—especially in challenging scenarios involving accents, noisy environments, and varying speech speeds. These improvements increase transcription reliability, making the models especially well-suited for use cases like customer call centers, meeting note transcription, and more.


 

From DSC:
Look out Google, Amazon, and others! Nvidia is putting the pedal to the metal in terms of being innovative and visionary! They are leaving the likes of Apple in the dust.

The top talent out there is likely to go to Nvidia for a while. Engineers, programmers/software architects, network architects, product designers, data specialists, AI researchers, developers of robotics and autonomous vehicles, R&D specialists, computer vision specialists, natural language processing experts, and many more types of positions will be flocking to Nvidia to work for a company that has already changed the world and will likely continue to do so for years to come. 



NVIDIA’s AI Superbowl — from theneurondaily.com by Noah and Grant
PLUS: Prompt tips to make AI writing more natural

That’s despite a flood of new announcements (here’s a 16 min video recap), which included:

  1. A new architecture for massive AI data centers (now called “AI factories”).
  2. A physics engine for robot training built with Disney and DeepMind.
  3. A partnership with GM to develop next-gen vehicles, factories and robots.
  4. A new Blackwell chip with “Dynamo” software that makes AI reasoning 40x faster than previous generations.
  5. A new “Rubin” chip slated for 2026 and a “Feynman” chip set for 2028.

For enterprises, NVIDIA unveiled DGX Spark and DGX Station—Jensen’s vision of AI-era computing, bringing NVIDIA’s powerful Blackwell chip directly to your desk.


Nvidia Bets Big on Synthetic Data — from wired.com by Lauren Goode
Nvidia has acquired synthetic data startup Gretel to bolster the AI training data used by the chip maker’s customers and developers.


Nvidia, xAI to Join BlackRock and Microsoft’s $30 Billion AI Infrastructure Fund — from investopedia.com by Aaron McDade
Nvidia and xAI are joining BlackRock and Microsoft in an AI infrastructure group seeking $30 billion in funding. The group was first announced in September as BlackRock and Microsoft sought to fund new data centers to power AI products.



Nvidia CEO Jensen Huang says we’ll soon see 1 million GPU data centers visible from space — from finance.yahoo.com by Daniel Howley
Nvidia CEO Jensen Huang says the company is preparing for 1 million GPU data centers.


Nvidia stock stems losses as GTC leaves Wall Street analysts ‘comfortable with long term AI demand’ — from finance.yahoo.com by Laura Bratton
Nvidia stock reversed direction after a two-day slide that saw shares lose 5% as the AI chipmaker’s annual GTC event failed to excite investors amid a broader market downturn.


Microsoft, Google, and Oracle Deepen Nvidia Partnerships. This Stock Got the Biggest GTC Boost. — from barrons.com by Adam Clark and Elsa Ohlen


The 4 Big Surprises from Nvidia’s ‘Super Bowl of AI’ GTC Keynote — from barrons.com by Tae Kim; behind a paywall

AI Super Bowl. Hi everyone. This week, 20,000 engineers, scientists, industry executives, and yours truly descended upon San Jose, Calif. for Nvidia’s annual GTC developers’ conference, which has been dubbed the “Super Bowl of AI.”


 

20 AI Agent Examples in 2025 — from autogpt.net

AI Agents are now deeply embedded in everyday life and quickly transforming industry after industry. The global AI market is expected to explode up to $1.59 trillion by 2030! That is a ton of intelligent agents operating behind the curtains.

That’s why in this article, we explore 20 real-life AI Agents that are causing a stir today.


Top 100 Gen AI apps, new AI video & 3D — from heatherbcooper.substack.com by Heather Cooper
Plus Runway Restyle, Luma Ray2 img2vid keyframes & extend

In the latest edition of Andreessen Horowitz’s “Top 100 Gen AI Consumer Apps,” the generative AI landscape has undergone significant shifts.

Notably, DeepSeek has emerged as a leading competitor to ChatGPT, while AI video models have advanced from experimental stages to more reliable tools for short clips. Additionally, the rise of “vibecoding” is broadening the scope of AI creators.

The report also introduces the “Brink List,” highlighting ten companies poised to enter the top 100 rankings.


AI is Evolving Fast – The Latest LLMs, Video Models & Breakthrough Tools — from heatherbcooper.substack.com by Heather Cooper
Breakthroughs in multimodal search, next-gen coding assistants, and stunning text-to-video tech. Here’s what’s new:

I do these comparisons frequently to measure the improvements in different models for text or image to video prompts. I hope it is helpful for you, as well!

I included 6 models for an image to video comparison:

  • Pika 2.1 (I will do one with Pika’s new 2.2 model soon)
  • Adobe Firefly Video
  • Runway Gen-3
  • Kling 1.6
  • Luma Ray2
  • Hailuo I2V-01


Why Smart Companies Are Granting AI Immunity to Their Employees — from builtin.com by Matt Almassian
Employees are using AI tools whether they’re authorized or not. Instead of cracking down on AI usage, consider developing an AI amnesty program. Learn more.

But the smartest companies aren’t cracking down. They’re flipping the script. Instead of playing AI police, they’re launching AI amnesty programs, offering employees a safe way to disclose their AI usage without fear of punishment. In doing so, they’re turning a security risk into an innovation powerhouse.

Before I dive into solutions, let’s talk about what keeps your CISO or CTO up at night. Shadow AI isn’t just about unauthorized tool usage — it’s a potential dirty bomb of security, compliance and operational risks that could explode at any moment.

6 Steps to an AI Amnesty Program

  1. Build your AI governance foundation.
  2. Transform your IT department from gatekeeper to innovation partner.
  3. Make AI education easily accessible.
  4. Deploy your technical safety net.
  5. Create an AI-positive culture.
  6. Monitor, adapt and evolve.

A first-ever study on prompts… — from theneurondaily.com
PLUS: OpenAI wants to charge $20K a month to replace you?!

What they discovered might change how you interact with AI:

  • Consistency is a major problem. The researchers asked the same questions 100 times and found that models often give different answers to the same question.
  • Formatting matters a ton. Telling the AI exactly how to structure its response consistently improved performance.
  • Politeness is… complicated. Saying “please” helped the AI answer some questions but made it worse at others. The same was true of being commanding (“I order you to…”).
  • Standards matter. If you need an AI to be right 100% of the time, you’re in trouble.

That’s also why we think you, an actual human, should always place yourself as a final check between whatever your AI creates and whatever goes out into the world.


Leave it to Manus
“Manus is a general AI agent that bridges minds and actions: it doesn’t just think, it delivers results. Manus excels at various tasks in work and life, getting everything done while you rest.”

From DSC:
What could possibly go wrong?!



AI Search Has A Citation Problem — from cjr.org (Columbia Journalism Review) by Klaudia Jaźwińska and Aisvarya Chandrasekar
We Compared Eight AI Search Engines. They’re All Bad at Citing News.

We found that…

  • Chatbots were generally bad at declining to answer questions they couldn’t answer accurately, offering incorrect or speculative answers instead.
  • Premium chatbots provided more confidently incorrect answers than their free counterparts.
  • Multiple chatbots seemed to bypass Robot Exclusion Protocol preferences.
  • Generative search tools fabricated links and cited syndicated and copied versions of articles.
  • Content licensing deals with news sources provided no guarantee of accurate citation in chatbot responses.
Our findings were consistent with our previous study, proving that our observations are not just a ChatGPT problem, but rather recur across all the prominent generative search tools that we tested.


5 new AI tools you’ll actually want to try — from wondertools.substack.com by Jeremy Kaplan
Chat with lifelike AI, clean up audio instantly, and reimagine your career

Hundreds of AI tools emerge every week. I’ve picked five new ones worth exploring. They’re free to try, easy to use, and signal new directions for useful AI.

Example:

Career Dreamer
A playful way to explore career possibilities with AI


 

Who does need college anymore? About that book title … — from Education Design Lab

As you may know, Lab founder Kathleen deLaski just published a book with a provocative title: Who Needs College Anymore? Imagining a Future Where Degrees Won’t Matter.

Kathleen is asked about the title in every media interview, before and since the Feb. 25 book release. “It has generated a lot of questions,” she said in our recent book chat. “I tell people to focus on the word, ‘who.’ Who needs college anymore? That’s in keeping with the design thinking frame, where you look at the needs of individuals and what needs are not being met.”

In the same conversation, Kathleen reminded us that only 38% of American adults have a four-year degree. “We never talk about the path to the American dream for the rest of folks,” she said. “We currently are not supporting the other really interesting pathways to financial sustainability — apprenticeships, short-term credentials. And that’s really why I wrote the book, to push the conversation around the 62% of who we call New Majority Learners at the Lab, the people for whom college was not designed.”

She distills the point into one sentence in this SmartBrief essay:  “The new paradigm is a ‘yes and’ paradigm that embraces college and/or other pathways instead of college or bust.”

What can colleges do moving forward?
In this excellent Q&A with Inside Higher Ed, Kathleen shares her No. 1 suggestion: “College needs to be designed as a stepladder approach, where people can come in and out of it as they need, and at the very least, they can build earnings power along the way to help afford a degree program.”

In her Hechinger Report essay, Kathleen lists four more steps colleges can take to meet the demand for more choices, including “affordability must rule.”

From white-collar apprenticeships and micro-credential programs at local community colleges to online bootcamps, self-instruction using YouTube, and more — students are forging alternative paths to great high-paying jobs. (source)

 

The $100 billion disruption: How AI is reshaping legal tech — from americanbazaaronline.com by Rohan Hundia and Rajesh Mehta

The Size of the Problem: Judicial Backlog and Inefficiencies
India has a massive backlog of more than 47 million pending cases, with civil litigation itself averaging 1,445 days in resolution. In the United States, federal courts dispose of nearly 400,000 cases a year, and complex litigations take years to complete. Artificial intelligence-driven case law research, contract automation, and predictive analytics will cut legal research times by 90%, contract drafting fees by 60%, and hasten case settlements, potentially saving billions of dollars in legal costs.

This is not just an evolution—it is a permanent change toward data-driven jurisprudence, with AI supplementing human capabilities, speeding up delivery of justice, and extending access to legal services. The AI revolution for legal tech is not on its way; it is already under way, dismantling inefficiencies and transforming the legal world in real time.


Scaling and Improving Legal Tech Projects — from legaltalknetwork.com by Taylor Sartor, Luigi Bai, David Gray, and Cat Moon

Legal tech innovators discuss how they are working to scale and improve their successful projects on Talk Justice. FosterPower and Legal Aid Content Intelligence (LACI) leverage technology to make high-quality legal information available to people for free online. Both also received Technology Initiative Grants (TIG) from the Legal Services Corporation to launch their projects. Then, in 2024 they were both selected for a different TIG, called the Sustainability, Enhancement and Adoption (SEA) grant. This funding supports TIG projects that have demonstrated excellent results as they improve their tools and work to increase uptake.

 

Introducing NextGenAI: A consortium to advance research and education with AI — from openai.com; via Claire Zau
OpenAI commits $50M in funding and tools to leading institutions.

Today, we’re launching NextGenAI, a first-of-its-kind consortium with 15 leading research institutions dedicated to using AI to accelerate research breakthroughs and transform education.

AI has the power to drive progress in research and education—but only when people have the right tools to harness it. That’s why OpenAI is committing $50M in research grants, compute funding, and API access to support students, educators, and researchers advancing the frontiers of knowledge.

Uniting institutions across the U.S. and abroad, NextGenAI aims to catalyze progress at a rate faster than any one institution would alone. This initiative is built not only to fuel the next generation of discoveries, but also to prepare the next generation to shape AI’s future.


‘I want him to be prepared’: why parents are teaching their gen Alpha kids to use AI — from theguardian.com by Aaron Mok; via Claire Zau
As AI grows increasingly prevalent, some are showing their children tools from ChatGPT to Dall-E to learn and bond

“My goal isn’t to make him a generative AI wizard,” White said. “It’s to give him a foundation for using AI to be creative, build, explore perspectives and enrich his learning.”

White is part of a growing number of parents teaching their young children how to use AI chatbots so they are prepared to deploy the tools responsibly as personal assistants for school, work and daily life when they’re older.

 
© 2025 | Daniel Christian