Mary Meeker AI Trends Report: Mind-Boggling Numbers Paint AI’s Massive Growth Picture — from ndtvprofit.com
Numbers that prove AI as a tech is unlike any other the world has ever seen.

Here are some incredibly powerful numbers from Mary Meeker’s AI Trends report, which showcase how artificial intelligence as a tech is unlike any other the world has ever seen.

  • AI took only three years to reach 50% user adoption in the US; mobile internet took six years, desktop internet took 12 years, while PCs took 20 years.
  • ChatGPT reached 800 million users in 17 months and 100 million in only two months, vis-à-vis the time to 100 million users for Netflix (10 years), Instagram (2.5 years) and TikTok (nine months).
  • ChatGPT hit 365 billion annual searches within two years (by 2024), a milestone that took Google 11 years (by 2009), making ChatGPT 5.5x faster to reach that scale.
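The 5.5x figure in the last bullet is simply the ratio of years-to-milestone; a quick sketch of the arithmetic (the function name is illustrative, not from the report):

```python
# Compare how long two products took to reach the same milestone.
# The ratio of incumbent years to challenger years gives the
# "times faster" comparison used in the report's bullets.
def times_faster(incumbent_years: float, challenger_years: float) -> float:
    return incumbent_years / challenger_years

# ChatGPT reached ~365B annual searches in 2 years vs. Google's 11 years.
print(times_faster(11, 2))  # 5.5
```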

Above via Mary Meeker’s AI Trend-Analysis — from getsuperintel.com by Kim “Chubby” Isenberg
How AI’s rapid rise, efficiency race, and talent shifts are reshaping the future.

The TLDR
Mary Meeker’s new AI trends report highlights an explosive rise in global AI usage, surging model efficiency, and mounting pressure on infrastructure and talent. The shift is clear: AI is no longer experimental—it’s becoming foundational, and those who optimize for speed, scale, and specialization will lead the next wave of innovation.

 

Also see Meeker’s actual report at:

Trends – Artificial Intelligence — from bondcap.com by Mary Meeker / Jay Simons / Daegwon Chae / Alexander Krey



The Rundown: Meta aims to release tools that eliminate humans from the advertising process by 2026, according to a report from the WSJ — developing an AI that can create ads for Facebook and Instagram using just a product image and budget.

The details:

  • Companies would submit product images and budgets, letting AI craft the text and visuals, select target audiences, and manage campaign placement.
  • The system will be able to create personalized ads that can adapt in real-time, like a car spot featuring mountains vs. an urban street based on user location.
  • The push would target smaller companies lacking dedicated marketing staff, promising professional-grade advertising without agency fees or in-house marketing expertise.
  • Advertising is a core part of Mark Zuckerberg’s AI strategy and already accounts for 97% of Meta’s annual revenue.

Why it matters: We’re already seeing AI transform advertising through image, video, and text, but Zuck’s vision takes the process entirely out of human hands. With so much marketing flowing through FB and IG, a successful system would be a major disruptor — particularly for small brands that just want results without the hassle.

 

Astronaut one day, artist the next: How to help children explore the world of careers — from apnews.com by Cathy Bussewitz

Sometimes career paths follow a straight line, with early life ambitions setting us on a clear path to training or a degree and a specific profession. Just as often, circumstance, luck, exposure and a willingness to adapt to change influence what we do for a living.

Developmental psychologists and career counselors recommend exposing children to a wide variety of career paths at a young age.

“It’s not so that they’ll pick a career, but that they will realize that there’s lots of opportunities and not limit themselves out of careers,” said Jennifer Curry, a Louisiana State University professor who researches career and college readiness.

Preparing for a world of AI
In addition to exposing children to career routes through early conversations and school courses, experts recommend teaching children about artificial intelligence and how it is reshaping the world and work.

 

The 2025 Global Skills Report — from coursera.org
Discover in-demand skills and credentials trends across 100+ countries and six regions to deliver impactful industry-aligned learning programs.

GenAI adoption fuels global skill demands
In 2023, early adopters flocked to GenAI, with approximately one person per minute enrolling in a GenAI course on Coursera — a rate that rose to eight per minute in 2024. Since then, GenAI has continued to see exceptional growth, with global enrollment in GenAI courses surging 195% year-over-year, maintaining its position as one of the most rapidly growing skill domains on our platform. To date, Coursera has recorded over 8 million GenAI enrollments, with 12 learners per minute signing up for GenAI content in 2025 across our catalog of nearly 700 GenAI courses.

Driving this surge, 94% of employers say they’re likely to hire candidates with GenAI credentials, while 75% prefer hiring less-experienced candidates with GenAI skills over more experienced ones without these capabilities. Demand for roles such as AI and Machine Learning Specialists is projected to grow by up to 40% in the next four years. Mastering AI fundamentals—from prompt engineering to large language model (LLM) applications—is essential to remaining competitive in today’s rapidly evolving economy.

Countries leading our new AI Maturity Index — which highlights regions best equipped to harness AI innovation and translate skills into real-world applications — include global frontrunners such as Singapore, Switzerland, and the United States.

Insights in action

Businesses
Integrate role-specific GenAI modules into employee development programs, enabling teams to leverage AI for efficiency and innovation.

Governments
Scale GenAI literacy initiatives—especially in emerging economies—to address talent shortages and foster the human-machine collaboration skills needed to future-proof digital jobs.

Higher education
Embed credit-eligible GenAI learning into curricula, ensuring graduates enter the workforce job-ready.

Learners
Focus on GenAI courses offering real-world projects (e.g., prompt engineering) that help build skills for in-demand roles.

 


Also relevant/see:


Report: 93% of Students Believe Gen AI Training Belongs in Degree Programs — from campustechnology.com by Rhea Kelly

The vast majority of today’s college students — 93% — believe generative AI training should be included in degree programs, according to a recent Coursera report. What’s more, 86% of students consider gen AI the most crucial technical skill for career preparation, prioritizing it above in-demand skills such as data strategy and software development. And 94% agree that microcredentials help build the essential skills they need to achieve career success.

For its Microcredentials Impact Report 2025, Coursera surveyed more than 1,200 learners and 1,000 employers around the globe to better understand the demand for microcredentials and their impact on workforce readiness and hiring trends.


1 in 4 employers say they’ll eliminate degree requirements by year’s end — from hrdive.com by Carolyn Crist
Companies that recently removed degree requirements reported a surge in applications, a more diverse applicant pool and the ability to offer lower salaries.

A quarter of employers surveyed said they will remove bachelor’s degree requirements for some roles by the end of 2025, according to a May 20 report from Resume Templates.

In addition, 7 in 10 hiring managers said their company looks at relevant experience over a bachelor’s degree while making hiring decisions.

In the survey of 1,000 hiring managers, 84% of companies that recently removed degree requirements said it has been a successful move. Companies without degree requirements also reported a surge in applications, a more diverse applicant pool and the ability to offer lower salaries.


Why AI literacy is now a core competency in education — from weforum.org by Tanya Milberg

  • Education systems must go beyond digital literacy and embrace AI literacy as a core educational priority.
  • A new AI Literacy Framework (AILit) aims to empower learners to navigate an AI-integrated world with confidence and purpose.
  • Here’s what you need to know about the AILit Framework – and how to get involved in making it a success.

Also from Allison Salisbury, see:

 

How To Get Hired During the AI Apocalypse — from kathleendelaski.substack.com by Kathleen deLaski
And other discussions to have with your kids on the way to college graduation

A less temporary, more existential threat to the four year degree: AI could hollow out the entry level job market for knowledge workers (i.e. new college grads). And if 56% of families were saying college “wasn’t worth it” in 2023 (WSJ), what will that number look like in 2026 or beyond? The one of my kids who went to college ended up working in a bike shop for a year-ish after graduation. No regrets, but it came as a shock to them that they weren’t more employable with their neuroscience degree.

A colleague provided a great example: Her son, newly graduated, went for a job interview as an entry level writer last month and he was asked, as a test, to produce a story with AI and then use that story to write a better one by himself. He would presumably be judged on his ability to prompt AI and then improve upon its product. Is that learning how to DO? I think so. It’s using AI tools to accomplish a workplace task.


Also relevant in terms of the job search, see the following gifted article:

‘We Are the Most Rejected Generation’ — from nytimes.com by David Brooks; gifted article
David talks admissions rates for selective colleges, ultra-hard to get summer internships, a tough entry into student clubs, and the job market.

Things get even worse when students leave school and enter the job market. They enter what I’ve come to think of as the seventh circle of Indeed hell. Applying for jobs online is easy, so you have millions of people sending hundreds of applications each into the great miasma of the internet, and God knows which impersonal algorithm is reading them. I keep hearing and reading stories about young people who applied to 400 jobs and got rejected by all of them.

It seems we’ve created a vast multilayered system that evaluates the worth of millions of young adults and, most of the time, tells them they are not up to snuff.

Many administrators and faculty members I’ve spoken to are mystified that students would create such an unforgiving set of status competitions. But the world of competitive exclusion is the world they know, so of course they are going to replicate it. 

And in this column I’m not even trying to cover the rejections experienced by the 94 percent of American students who don’t go to elite schools and don’t apply for internships at Goldman Sachs. By middle school, the system has told them that because they don’t do well on academic tests, they are not smart, not winners. That’s among the most brutal rejections our society has to offer.


Fiverr CEO explains alarming message to workers about AI — from iblnews.org
Fiverr CEO Micha Kaufman recently warned his employees about the impact of artificial intelligence on their jobs.

The Great Career Reinvention, and How Workers Can Keep Up — from workshift.org by Michael Rosenbaum

A wide range of roles can or will quickly be replaced with AI, including inside sales representatives, customer service representatives, junior lawyers, junior accountants, and physicians whose focus is diagnosis.


Behind the Curtain: A white-collar bloodbath — from axios.com by Jim VandeHei and Mike Allen

Dario Amodei — CEO of Anthropic, one of the world’s most powerful creators of artificial intelligence — has a blunt, scary warning for the U.S. government and all of us:

  • AI could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10-20% in the next one to five years, Amodei told us in an interview from his San Francisco office.
  • Amodei said AI companies and government need to stop “sugar-coating” what’s coming: the possible mass elimination of jobs across technology, finance, law, consulting and other white-collar professions, especially entry-level gigs.

Why it matters: Amodei, 42, who’s building the very technology he predicts could reorder society overnight, said he’s speaking out in hopes of jarring government and fellow AI companies into preparing — and protecting — the nation.

 

AI & Schools: 4 Ways Artificial Intelligence Can Help Students — from the74million.org by W. Ian O’Byrne
AI creates potential for more personalized learning

I am a literacy educator and researcher, and here are four ways I believe these kinds of systems can be used to help students learn.

  1. Differentiated instruction
  2. Intelligent textbooks
  3. Improved assessment
  4. Personalized learning


5 Skills Kids (and Adults) Need in an AI World — from oreilly.com by Raffi Krikorian
Hint: Coding Isn’t One of Them

Five Essential Skills Kids Need (More than Coding)
I’m not saying we shouldn’t teach kids to code. It’s a useful skill. But these are the five true foundations that will serve them regardless of how technology evolves.

  1. Loving the journey, not just the destination
  2. Being a question-asker, not just an answer-getter
  3. Trying, failing, and trying differently
  4. Seeing the whole picture
  5. Walking in others’ shoes

The AI moment is now: Are teachers and students ready? — from iblnews.org

Day of AI Australia hosted a panel discussion on 20 May, 2025. Hosted by Dr Sebastian Sequoiah-Grayson (Senior Lecturer in the School of Computer Science and Engineering, UNSW Sydney) with panel members Katie Ford (Industry Executive – Higher Education at Microsoft), Tamara Templeton (Primary School Teacher, Townsville), Sarina Wilson (Teaching and Learning Coordinator – Emerging Technology at NSW Department of Education) and Professor Didar Zowghi (Senior Principal Research Scientist at CSIRO’s Data61).


Teachers using AI tools more regularly, survey finds — from iblnews.org

As many students face criticism and punishment for using artificial intelligence tools like ChatGPT for assignments, new reporting shows that many instructors are increasingly using those same programs.


Addendum on 5/28/25:

A Museum of Real Use: The Field Guide to Effective AI Use — from mikekentz.substack.com by Mike Kentz
Six Educators Annotate Their Real AI Use—and a Method Emerges for Benchmarking the Chats

Our next challenge is to self-analyze and develop meaningful benchmarks for AI use across contexts. This research exhibit aims to take the first major step in that direction.

With the right approach, a transcript becomes something else:

  • A window into student decision-making
  • A record of how understanding evolves
  • A conversation that can be interpreted and assessed
  • An opportunity to evaluate content understanding

This week, I’m excited to share something that brings that idea into practice.

Over time, I imagine a future where annotated transcripts are collected and curated. Schools and universities could draw from a shared library of real examples—not polished templates, but genuine conversations that show process, reflection, and revision. These transcripts would live not as static samples but as evolving benchmarks.

This Field Guide is the first move in that direction.


 

Skilling Up for AI Transformation — from learningguild.com by Lauren Milstid and Megan Torrance

Lately, I’ve been in a lot of conversations—some casual, some strategy-deep—about what it takes to skill up teams for AI. One pattern keeps emerging: The organizations getting the most out of generative AI are the ones doing the most to support their people. They’re not just training on a single tool. They’re building the capacity to work with AI as a class of technology.

So let’s talk about that. Not the hype, but the real work of helping humans thrive in an AI-enabled workplace.


If Leadership Training Isn’t Applied, It Hasn’t Happened — from learningguild.com by Tim Samuels

L&D leadership training sessions often “feel” successful. A program is designed, a workshop is delivered, and employees leave feeling informed and engaged. But if that training isn’t applied in the workplace, did it actually happen? If we focus entirely on the “learning” but not the “development,” we’re wasting huge amounts of time and money. So let’s take a look at the current situation first.

The reality is stark. According to Harvard Business Review:

  • Only 12% of employees apply new skills learned in L&D programs
  • Just 25% believe their training measurably improved performance
  • We forget 75% of what we learn within six days unless we use it
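The third bullet's statistic maps onto the classic Ebbinghaus-style forgetting curve. As an illustrative sketch (the exponential model and the decay constant are assumptions layered onto the HBR figure, not part of it), retention r(t) = exp(-t/tau) with r(6 days) = 0.25 gives tau of roughly 4.3 days:

```python
# Illustrative only: model "we forget 75% of what we learn within six
# days unless we use it" with exponential decay, retention(t) = exp(-t/tau).
# Solving retention(6) = 0.25 for tau gives the implied decay constant.
import math

def tau_from_retention(days: float, retained: float) -> float:
    """Decay constant tau such that exp(-days/tau) == retained."""
    return -days / math.log(retained)

def retention(t: float, tau: float) -> float:
    return math.exp(-t / tau)

tau = tau_from_retention(6, 0.25)    # ~4.33 days
print(round(retention(6, tau), 2))   # 0.25, i.e. 75% forgotten by day six
```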
 

Making AI Work: Leadership, Lab, and Crowd — from oneusefulthing.org by Ethan Mollick
A formula for AI in companies

How do we reconcile the first three points with the final one? The answer is that AI use that boosts individual performance does not naturally translate to improving organizational performance. To get organizational gains requires organizational innovation, rethinking incentives, processes, and even the nature of work. But the muscles for organizational innovation inside companies have atrophied. For decades, companies have outsourced this to consultants or enterprise software vendors who develop generalized approaches that address the issues of many companies at once. That won’t work here, at least for a while. Nobody has special information about how to best use AI at your company, or a playbook for how to integrate it into your organization.


Galileo Learn™ – A Revolutionary Approach To Corporate Learning — from joshbersin.com

Today we are excited to launch Galileo Learn™, a revolutionary new platform for corporate learning and professional development.

How do we leverage AI to revolutionize this model, doing away with the dated “publishing” model of training?

The answer is Galileo Learn, a radically new and different approach to corporate training and professional development.

What Exactly is Galileo Learn™?
Galileo Learn is an AI-native learning platform that is tightly integrated into the Galileo agent. It takes content in any form (PDF, Word, audio, video, SCORM courses, and more) and automatically (with your guidance) builds courses, assessments, learning programs, polls, exercises, simulations, and a variety of other instructional formats.


Designing an Ecosystem of Resources to Foster AI Literacy With Duri Long — from aialoe.org

Centering Public Understanding in AI Education
In a recent talk titled “Designing an Ecosystem of Resources to Foster AI Literacy,” Duri Long, Assistant Professor at Northwestern University, highlighted the growing need for accessible, engaging learning experiences that empower the public to make informed decisions about artificial intelligence. Long emphasized that as AI technologies increasingly influence everyday life, fostering public understanding is not just beneficial—it’s essential. Her work seeks to develop a framework for AI literacy across varying audiences, from middle school students to adult learners and journalists.

A Design-Driven, Multi-Context Approach
Drawing from design research, cognitive science, and the learning sciences, Long presented a range of educational tools aimed at demystifying AI. Her team has created hands-on museum exhibits, such as Data Bites, where learners build physical datasets to explore how computers learn. These interactive experiences, along with web-based tools and support resources, are part of a broader initiative to bridge AI knowledge gaps using the 4As framework: Ask, Adapt, Author, and Analyze. Central to her approach is the belief that familiar, tangible interactions and interfaces reduce intimidation and promote deeper engagement with complex AI concepts.

 

AI-Powered Lawyering: AI Reasoning Models, Retrieval Augmented Generation, and the Future of Legal Practice
Minnesota Legal Studies Research Paper No. 25-16; March 02, 2025; from papers.ssrn.com by:

Daniel Schwarcz
University of Minnesota Law School

Sam Manning
Centre for the Governance of AI

Patrick Barry
University of Michigan Law School

David R. Cleveland
University of Minnesota Law School

J.J. Prescott
University of Michigan Law School

Beverly Rich
Ogletree Deakins

Abstract

Generative AI is set to transform the legal profession, but its full impact remains uncertain. While AI models like GPT-4 improve the efficiency with which legal work can be completed, they can at times make up cases and “hallucinate” facts, thereby undermining legal judgment, particularly in complex tasks handled by skilled lawyers. This article examines two emerging AI innovations that may mitigate these lingering issues: Retrieval Augmented Generation (RAG), which grounds AI-powered analysis in legal sources, and AI reasoning models, which structure complex reasoning before generating output. We conducted the first randomized controlled trial assessing these technologies, assigning upper-level law students to complete six legal tasks using a RAG-powered legal AI tool (Vincent AI), an AI reasoning model (OpenAI’s o1-preview), or no AI. We find that both AI tools significantly enhanced legal work quality, a marked contrast with previous research examining older large language models like GPT-4. Moreover, we find that these models maintain the efficiency benefits associated with use of older AI technologies. Our findings show that AI assistance significantly boosts productivity in five out of six tested legal tasks, with Vincent yielding statistically significant gains of approximately 38% to 115% and o1-preview increasing productivity by 34% to 140%, with particularly strong effects in complex tasks like drafting persuasive letters and analyzing complaints. Notably, o1-preview improved the analytical depth of participants’ work product but resulted in some hallucinations, whereas Vincent AI-aided participants produced roughly the same amount of hallucinations as participants who did not use AI at all. These findings suggest that integrating domain-specific RAG capabilities with reasoning models could yield synergistic improvements, shaping the next generation of AI-powered legal tools and the future of lawyering more generally.
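The RAG approach the abstract describes can be sketched at toy scale: retrieve the most relevant source passages for a query, then assemble a prompt grounded in them. Everything below (the mini corpus, the keyword-overlap scorer, the prompt wording) is an illustrative assumption, not the Vincent AI implementation; production legal tools use embedding-based retrieval over curated case law.

```python
# Minimal sketch of Retrieval Augmented Generation (RAG): ground the
# model's prompt in retrieved sources so answers cite real material
# instead of hallucinated cases.

def tokenize(text: str) -> set:
    return set(text.lower().split())

def retrieve(query: str, corpus: list, k: int = 2) -> list:
    """Rank documents by simple keyword overlap with the query (toy scorer)."""
    q = tokenize(query)
    return sorted(corpus, key=lambda doc: len(q & tokenize(doc)), reverse=True)[:k]

def build_grounded_prompt(query: str, corpus: list) -> str:
    """Assemble a prompt instructing the model to answer only from sources."""
    sources = retrieve(query, corpus)
    context = "\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(sources))
    return (
        "Answer using ONLY the sources below; cite them by number.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

# Hypothetical mini-corpus of legal snippets for illustration.
corpus = [
    "Smith v. Jones held that oral contracts for land sales are unenforceable.",
    "The statute of frauds requires certain contracts to be in writing.",
    "Parking regulations in the city code set fines for overnight parking.",
]
prompt = build_grounded_prompt("Are oral contracts for land enforceable?", corpus)
```

The grounding step is what distinguishes RAG from a bare LLM call: the model is constrained to the retrieved text, which is why the study found RAG-aided participants hallucinated no more than the no-AI control group.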


Guest post: How technological innovation can boost growth — from legaltechnology.com by Caroline Hill

One key change is the growing adoption of technology within legal service providers, and this is transforming the way firms operate and deliver value to clients.

The legal services sector’s digital transformation is gaining momentum, driven both by client expectations as well as the potential for operational efficiency. With the right support, legal firms can innovate through tech adoption and remain competitive to deliver strong client outcomes and long-term growth.


AI Can Do Many Tasks for Lawyers – But Be Careful — from nysba.org by Rebecca Melnitsky

Artificial intelligence can perform several tasks to aid lawyers and save time. But lawyers must be cautious when using this new technology, lest they break confidentiality or violate ethical standards.

The New York State Bar Association hosted a hybrid program discussing AI’s potential and its pitfalls for the legal profession. More than 300 people watched the livestream.

For that reason, Unger suggests using legal AI tools, like LexisNexis AI, Westlaw Edge, and vLex Fastcase, for legal research instead of general generative AI tools. While legal-specific tools still hallucinate, they hallucinate much less. A legal tool will hallucinate 10% to 20% of the time, while a tool like ChatGPT will hallucinate 50% to 80%.


Fresh Voices on Legal Tech with Nikki Shaver — from legaltalknetwork.com by Dennis Kennedy, Tom Mighell, and Nikki Shaver

Determining which legal technology is best for your law firm can seem like a daunting task, so Legaltech Hub does the hard work for you! In another edition of Fresh Voices, Dennis and Tom talk with Nikki Shaver, CEO at Legaltech Hub, about her in-depth knowledge of technology and AI trends. Nikki shares what effective tech strategies should look like for attorneys and recommends innovative tools for maintaining best practices in modern law firms. Learn more at legaltechnologyhub.com.


AI for in-house legal: 2025 predictions — from deloitte.com
Our expectations for AI engagement and adoption in the legal market over the coming year.

AI will continue to transform in-house legal departments in 2025
As we enter 2025, over two-thirds of organisations plan to increase their Generative AI (GenAI) investments, providing legal teams with significant executive support and resources to further develop these capabilities. This presents a substantial opportunity for legal departments, particularly as GenAI technology continues to advance at an impressive pace. We make five predictions for AI engagement and adoption in the legal market over the coming year and beyond.


Navigating The Fine Line: Redefining Legal Advice In The Age Of Tech With Erin Levine And Quinten Steenhuis — from abovethelaw.com by Olga V. Mack
The definition of ‘practicing law’ is outdated and increasingly irrelevant in a tech-driven world. Should the line between legal advice and legal information even exist?

Practical Takeaways for Legal Leaders

  • Use Aggregated Data: Providing consumers with benchmarks (e.g., “90% of users in your position accepted similar settlements”) empowers them without giving direct legal advice.
  • Train and Supervise AI Tools: AI works best when it’s trained on reliable, localized data and supervised by legal professionals.
  • Partner with Courts: As Quinten pointed out, tools built in collaboration with courts often avoid UPL pitfalls. They’re also more likely to gain the trust of both regulators and consumers.
  • Embrace Transparency: Clear disclaimers like “This is not legal advice” go a long way in building consumer trust and meeting ethical standards.

 

 

Google I/O 2025: From research to reality — from blog.google
Here’s how we’re making AI more helpful with Gemini.


Google I/O 2025 LIVE — all the details about Android XR smart glasses, AI Mode, Veo 3, Gemini, Google Beam and more — from tomsguide.com by Philip Michaels
Google’s annual conference goes all in on AI

With a running time of 2 hours, Google I/O 2025 leaned heavily into Gemini and new models that make the assistant work in more places than ever before. Despite focusing the majority of the keynote around Gemini, Google saved its most ambitious and anticipated announcement towards the end with its big Android XR smart glasses reveal.

Shockingly, very little time was spent on Android 16. Most of the Android 16-related news, like the redesigned Material 3 Expressive interface, was announced during the Android Show live stream last week — which explains why Google I/O 2025 was such an AI heavy showcase.

That’s because Google carved out most of the keynote to dive deeper into Gemini, its new models, and integrations with other Google services. There’s clearly a lot to unpack, so here’s all the biggest Google I/O 2025 announcements.


Our vision for building a universal AI assistant— from blog.google
We’re extending Gemini to become a world model that can make plans and imagine new experiences by simulating aspects of the world.

Making Gemini a world model is a critical step in developing a new, more general and more useful kind of AI — a universal AI assistant. This is an AI that’s intelligent, understands the context you are in, and that can plan and take action on your behalf, across any device.

By applying LearnLM capabilities, and directly incorporating feedback from experts across the industry, Gemini adheres to the principles of learning science to go beyond just giving you the answer. Instead, Gemini can explain how you get there, helping you untangle even the most complex questions and topics so you can learn more effectively. Our new prompting guide provides sample instructions to see this in action.


Learn in newer, deeper ways with Gemini — from blog.google.com by Ben Gomes
We’re infusing LearnLM directly into Gemini 2.5 — plus more learning news from I/O.

At I/O 2025, we announced that we’re infusing LearnLM directly into Gemini 2.5, which is now the world’s leading model for learning. As detailed in our latest report, Gemini 2.5 Pro outperformed competitors on every category of learning science principles. Educators and pedagogy experts preferred Gemini 2.5 Pro over other offerings across a range of learning scenarios, both for supporting a user’s learning goals and on key principles of good pedagogy.


Gemini gets more personal, proactive and powerful — from blog.google.com by Josh Woodward
It’s your turn to create, learn and explore with an AI assistant that’s starting to understand your world and anticipate your needs.

Here’s what we announced at Google IO:

  • Gemini Live, with camera and screen sharing, is now free on Android and iOS for everyone, so you can point your phone at anything and talk it through.
  • Imagen 4, our new image generation model, comes built in and is known for its image quality, better text rendering and speed.
  • Veo 3, our new, state-of-the-art video generation model, comes built in and is the first in the world to have native support for sound effects, background noises and dialogue between characters.
  • Deep Research and Canvas are getting their biggest updates yet, unlocking new ways to analyze information, create podcasts and vibe code websites and apps.
  • Gemini is coming to Chrome, so you can ask questions while browsing the web.
  • Students around the world can easily make interactive quizzes, and college students in the U.S., Brazil, Indonesia, Japan and the UK are eligible for a free school year of the Google AI Pro plan.
  • Google AI Ultra, a new premium plan, is for the pioneers who want the highest rate limits and early access to new features in the Gemini app.
  • 2.5 Flash has become our new default model, and it blends incredible quality with lightning fast response times.

Fuel your creativity with new generative media models and tools — by Eli Collins
Introducing Veo 3 and Imagen 4, and a new tool for filmmaking called Flow.


AI in Search: Going beyond information to intelligence
We’re introducing new AI features to make it easier to ask any question in Search.

AI in Search is making it easier to ask Google anything and get a helpful response, with links to the web. That’s why AI Overviews is one of the most successful launches in Search in the past decade. As people use AI Overviews, we see they’re happier with their results, and they search more often. In our biggest markets like the U.S. and India, AI Overviews is driving a more than 10% increase in usage of Google for the types of queries that show AI Overviews.

This means that once people use AI Overviews, they’re coming to do more of these types of queries, and what’s particularly exciting is how this growth increases over time. And we’re delivering this at the speed people expect of Google Search — AI Overviews delivers the fastest AI responses in the industry.

In this story:

  • AI Mode in Search
  • Deep Search
  • Live capabilities
  • Agentic capabilities
  • Shopping
  • Personal context
  • Custom charts

 

 

I’m a LinkedIn Executive. I See the Bottom Rung of the Career Ladder Breaking. — from nytimes.com by Aneesh Raman; this is a gifted article

There are growing signs that artificial intelligence poses a real threat to a substantial number of the jobs that normally serve as the first step for each new generation of young workers. Uncertainty around tariffs and global trade is likely to only accelerate that pressure, just as millions of 2025 graduates enter the work force.

Breaking first is the bottom rung of the career ladder. In tech, advanced coding tools are creeping into the tasks of writing simple code and debugging — the ways junior developers gain experience. In law firms, junior paralegals and first-year associates who once cut their teeth on document review are handing weeks of work over to A.I. tools to complete in a matter of hours. And across retailers, A.I. chatbots and automated customer service tools are taking on duties once assigned to young associates.


Talk to Me: NVIDIA and Partners Boost People Skills and Business Smarts for AI Agents  — from blogs.nvidia.com by Adel El Hallak
NVIDIA Enterprise AI Factory validated design and latest NVIDIA AI Blueprints help businesses add intelligent AI teammates that can speak, research and learn to their daily operations.

Call it the ultimate proving ground. Collaborating with teammates in the modern workplace requires fast, fluid thinking. Providing insights quickly, while juggling webcams and office messaging channels, is a startlingly good test, and enterprise AI is about to pass it — just in time to provide assistance to busy knowledge workers.

To support enterprises in boosting productivity with AI teammates, NVIDIA today introduced a new NVIDIA Enterprise AI Factory validated design at COMPUTEX. IT teams deploying and scaling AI agents can use the design to build accelerated infrastructure and easily integrate with platforms and tools from NVIDIA software partners.

NVIDIA also unveiled new NVIDIA AI Blueprints to aid developers building smart AI teammates. Using the new blueprints, developers can enhance employee productivity through adaptive avatars that understand natural communication and have direct access to enterprise data.


NVIDIA CEO Envisions AI Infrastructure Industry Worth ‘Trillions of Dollars’ — from blogs.nvidia.com by Brian Caulfield
In his COMPUTEX keynote, Huang unveiled a sweeping vision for an AI-powered future, showcasing new platforms and partnerships.

“AI is now infrastructure, and this infrastructure, just like the internet, just like electricity, needs factories,” Huang said. “These factories are essentially what we build today.”

“They’re not data centers of the past,” Huang added. “These AI data centers, if you will, are improperly described. They are, in fact, AI factories. You apply energy to it, and it produces something incredibly valuable, and these things are called tokens.”

More’s coming, Huang said, describing the growing power of AI to reason and perceive. That leads us to agentic AI — AI able to understand, think and act. Beyond that is physical AI — AI that understands the world. The phase after that, he said, is general robotics.


Everything Revealed at Nvidia’s 2025 Computex Press Conference in 19 Minutes — from mashable.com
Nvidia is creating Omniverse Digital Twins of factories including humanoid robots

Watch all the biggest announcements from Nvidia’s keynote address at Computex 2025 in Taipei, Taiwan.


Dell unveils new AI servers powered by Nvidia chips to boost enterprise adoption — from reuters.com

May 19 (Reuters) – Dell Technologies (DELL.N) on Monday unveiled new servers powered by Nvidia’s (NVDA.O) Blackwell Ultra chips, aiming to capitalize on the booming demand for artificial intelligence systems.

The servers, available in both air-cooled and liquid-cooled variations, support up to 192 Nvidia Blackwell Ultra chips but can be customized to include as many as 256 chips.


Nvidia announces humanoid robotics, custom AI infrastructure tech at Computex 2025 — from finance.yahoo.com by Daniel Howley

Nvidia (NVDA) rolled into this year’s Computex Taipei tech expo on Monday with several announcements, ranging from the development of humanoid robots to the opening up of its high-powered NVLink technology, which allows companies to build semi-custom AI servers with Nvidia’s infrastructure.

During the event on Monday, Nvidia revealed its Nvidia Isaac GR00T-Dreams, which the company says helps developers create enormous amounts of training data they can use to teach robots how to perform different behaviors and adapt to new environments.


Addendums on 5/22/25:

‘What I learned when students walked out of my AI class’ — from timeshighereducation.com by Chris Hogg
Chris Hogg found the question of using AI to create art troubled his students deeply. Here’s how the moment led to deeper understanding for both student and educator

Teaching AI can be as thrilling as it is challenging. This became clear one day when three students walked out of my class, visibly upset. They later explained their frustration: after spending years learning their creative skills, they were disheartened to see AI effortlessly outperform them in the blink of an eye.

This moment stuck with me – not because it was unexpected, but because it encapsulates the paradoxical relationship we all seem to have with AI. As both an educator and a creative, I find myself asking: how do we engage with this powerful tool without losing ourselves in the process? This is the story of how I turned moments of resistance into opportunities for deeper understanding.


In the AI era, how do we battle cognitive laziness in students? — from timeshighereducation.com by Sean McMinn
With the latest AI technology now able to handle complex problem-solving processes, will students risk losing their own cognitive engagement? Metacognitive scaffolding could be the answer, writes Sean McMinn

The concern about cognitive laziness seems to be backed by Anthropic’s report that students use AI tools like Claude primarily for creating (39.8 per cent) and analysing (30.2 per cent) tasks, both considered higher-order cognitive functions according to Bloom’s Taxonomy. While these tasks align well with advanced educational objectives, they also pose a risk: students may increasingly delegate critical thinking and complex cognitive processes directly to AI, risking a reduction in their own cognitive engagement and skill development.


Make Instructional Design Fun Again with AI Agents — from drphilippahardman.substack.com by Dr. Philippa Hardman
A special edition practical guide to selecting & building AI agents for instructional design and L&D

Exactly how we do this has been less clear, but — fuelled by the rise of so-called “Agentic AI” — more and more instructional designers ask me: “What exactly can I delegate to AI agents, and how do I start?”

In this week’s post, I share my thoughts on exactly what instructional design tasks can be delegated to AI agents, and provide a step-by-step approach to building and testing your first AI agent.

Here’s a sneak peek…


AI Personality Matters: Why Claude Doesn’t Give Unsolicited Advice (And Why You Should Care) — from mikekentz.substack.com by Mike Kentz
First in a four-part series exploring the subtle yet profound differences between AI systems and their impact on human cognition

After providing Claude with several prompts of context about my creative writing project, I requested feedback on one of my novel chapters. The AI provided thoughtful analysis with pros and cons, as expected. But then I noticed what wasn’t there: the customary offer to rewrite my chapter.

Without Claude’s prompting, I found myself in an unexpected moment of metacognition. When faced with improvement suggestions but no offer to implement them, I had to consciously ask myself: “Do I actually want AI to rewrite this section?” The answer surprised me – no, I wanted to revise it myself, incorporating the insights while maintaining my voice and process.

The contrast was striking. With ChatGPT, accepting its offer to rewrite felt like a passive, almost innocent act – as if I were just saying “yes” to a helpful assistant. But with Claude, requesting a rewrite required deliberate action. Typing out the request felt like a more conscious surrender of creative agency.


Also re: metacognition and AI, see:

In the AI era, how do we battle cognitive laziness in students? — from timeshighereducation.com by Sean McMinn
With the latest AI technology now able to handle complex problem-solving processes, will students risk losing their own cognitive engagement? Metacognitive scaffolding could be the answer, writes Sean McMinn

By prompting students to articulate their cognitive processes, such tools reinforce the internalisation of self-regulated learning strategies essential for navigating AI-augmented environments.


EDUCAUSE Panel Highlights Practical Uses for AI in Higher Ed — from govtech.com by Abby Sourwine
A webinar this week featuring panelists from the education, private and nonprofit sectors attested to how institutions are applying generative artificial intelligence to advising, admissions, research and IT.

Many higher education leaders have expressed hope about the potential of artificial intelligence but uncertainty about where to implement it safely and effectively. According to a webinar Tuesday hosted by EDUCAUSE, “Unlocking AI’s Potential in Higher Education,” their answer may be “almost everywhere.”

Panelists at the event, including Kaskaskia College CIO George Kriss, Canyon GBS founder and CEO Joe Licata and Austin Laird, a senior program officer at the Gates Foundation, said generative AI can help colleges and universities meet increasing demands for personalization, timely communication and human-to-human connections throughout an institution, from advising to research to IT support.


Partly Cloudy with a Chance of Chatbots — from derekbruff.org by Derek Bruff

Here are the predictions, our votes, and some commentary:

  • “By 2028, at least half of large universities will embed an AI ‘copilot’ inside their LMS that can draft content, quizzes, and rubrics on demand.” The group leaned toward yes on this one, in part because it was easy to see LMS vendors building this feature in as a default.
  • “Discipline-specific ‘digital tutors’ (LLM chatbots trained on course materials) will handle at least 30% of routine student questions in gateway courses.” We leaned toward yes on this one, too, which is why some of us are exploring these tools today. We would like to know how to use them well (or when to avoid them) by the time they are commonly available.
  • “Adaptive e-texts whose examples, difficulty, and media personalize in real time via AI will outsell static digital textbooks in the U.S. market.” We leaned toward no on this one, in part because the textbook market and what students want from textbooks have historically been slow to change. I remember offering my students a digital version of my statistics textbook maybe 6-7 years ago, and most students opted to print the whole thing out on paper like it was 1983.
  • “AI text detectors will be largely abandoned as unreliable, shifting assessment design toward oral, studio, or project-based ‘AI-resilient’ tasks.” We leaned toward yes on this. I have some concerns about oral assessments (they certainly privilege some students over others), but more authentic assignments seem like what higher ed needs in the face of AI. Ted Underwood recently suggested a version of this: “projects that attempt genuinely new things, which remain hard even with AI assistance.” See his post and the replies for some good discussion on this idea.
  • “AI will produce multimodal accessibility layers (live translation, alt-text, sign-language avatars) for most lecture videos without human editing.” We leaned toward yes on this one, too. This seems like another case where something will be provided by default, although my podcast transcripts are AI-generated and still need editing from me, so we’re not quite there yet.

‘We Have to Really Rethink the Purpose of Education’
The Ezra Klein Show

Description: I honestly don’t know how I should be educating my kids. A.I. has raised a lot of questions for schools. Teachers have had to adapt to the most ingenious cheating technology ever devised. But for me, the deeper question is: What should schools be teaching at all? A.I. is going to make the future look very different. How do you prepare kids for a world you can’t predict?

And if we can offload more and more tasks to generative A.I., what’s left for the human mind to do?

Rebecca Winthrop is the director of the Center for Universal Education at the Brookings Institution. She is also an author, with Jenny Anderson, of “The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better.” We discuss how A.I. is transforming what it means to work and be educated, and how our use of A.I. could revive — or undermine — American schools.



AI prompting secrets EXPOSED — from theneurondaily.com by Grant Harvey

Here are the three best prompting guides:

  • Anthropic’s “Prompt Engineering Overview” is a free masterclass that’s worth its weight in gold. Their “constitutional AI prompting” section helped us create a content filter that actually works—unlike the one that kept flagging our coffee bean reviews as “inappropriate.” Apparently “rich body” triggered something…
  • OpenAI’s “Cookbook” is like having a Michelin-star chef explain cooking—simple for beginners, but packed with pro techniques. Their JSON formatting examples saved us 3 hours of debugging last week…
  • Google’s “Prompt Design Strategies” breaks down complex concepts with clear examples. Their before/after gallery showing how slight prompt tweaks improve results made us rethink everything we knew about getting quality outputs.

Pro tip: Save these guides as PDFs before they disappear behind paywalls. The best AI users keep libraries of these resources for quick reference.
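The JSON-formatting tip above can be sketched in code. This is a minimal, hypothetical example of the general pattern those guides describe — ask for JSON only, show the expected shape, and parse defensively; the prompt wording, schema, and helper names here are illustrative, not taken from any of the guides:

```python
import json
import re

def build_json_prompt(task: str, schema_hint: dict) -> str:
    """Ask the model to reply with JSON only, showing the expected shape."""
    return (
        f"{task}\n\n"
        "Respond with valid JSON only, no prose, matching this shape:\n"
        f"{json.dumps(schema_hint, indent=2)}"
    )

def parse_json_reply(reply: str) -> dict:
    """Parse a model reply, tolerating a ```json fenced block around it."""
    match = re.search(r"```(?:json)?\s*(\{.*\})\s*```", reply, re.DOTALL)
    text = match.group(1) if match else reply
    return json.loads(text)

# Simulated model reply (no API call here), wrapped in a Markdown fence
# the way models often wrap structured output.
reply = '```json\n{"sentiment": "positive", "confidence": 0.92}\n```'
result = parse_json_reply(reply)
print(result["sentiment"])  # positive
```

Tolerating an optional fenced block on the way back avoids one of the most common parse failures, since models frequently wrap structured output in Markdown fences even when asked for raw JSON.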



My personal review of 10+ AI agents and what actually works — from aiwithallie.beehiiv.com by Allie K. Miller
The AI Agents Report Card you wish your boss gave you.

What you’ll learn in this newsletter:

  • Which AI agents actually deliver value right now
  • Where even the best agents still fall embarrassingly short
  • The surprising truth about those sleek, impressive interfaces
  • The economics of delegating to AI (and when it’s worth the premium)
  • Five practical takeaways to guide your AI strategy

Employees Keep Their AI-Driven Productivity a Secret — from hrotoday.com; via The Neuron

“To address this, organizations should consider building a sustainable AI governance model, prioritizing transparency, and tackling the complex challenge of AI-fueled imposter syndrome through reinvention. Employers who fail to approach innovation with empathy and provide employees with autonomy run the risk of losing valuable staff and negatively impacting employee productivity.”  

Key findings from the report include the following:

  • Employees are keeping their productivity gains a secret from their employers. …
  • In-office employees may still log in remotely after hours. …
  • Younger workers are more likely to switch jobs to gain more flexibility.

AI discovers new math algorithms — by Zach Mink & Rowan Cheung
PLUS: Anthropic reportedly set to launch new Sonnet, Opus models

The Rundown: Google just debuted AlphaEvolve, a coding agent that harnesses Gemini and evolutionary strategies to craft algorithms for scientific and computational challenges — driving efficiency inside Google and solving historic math problems.

Why it matters: Yesterday, we had OpenAI’s Jakub Pachocki saying AI has shown “significant evidence” of being capable of novel insights, and today Google has taken that a step further. Math plays a role in nearly every aspect of life, and AI’s pattern and algorithmic strengths look ready to uncover a whole new world of scientific discovery.
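As a toy illustration of the “evolutionary strategies” idea behind systems like AlphaEvolve — emphatically not Google’s system, just a hedged sketch of the generic loop, with all names and settings invented for the example — candidates are repeatedly mutated and scored, and the fittest survives each generation:

```python
import random

def evolve(fitness, mutate, seed, generations=200, population=20):
    """Toy evolutionary loop: mutate the current best each round, keep the fittest."""
    best = seed
    for _ in range(generations):
        candidates = [mutate(best) for _ in range(population)] + [best]
        best = max(candidates, key=fitness)
    return best

# Example: evolve a bit string toward all ones.
random.seed(0)
target_len = 16

def fitness(bits):
    return sum(bits)  # more ones = fitter

def mutate(bits):
    i = random.randrange(len(bits))  # flip one random bit
    return bits[:i] + [1 - bits[i]] + bits[i + 1:]

best = evolve(fitness, mutate, seed=[0] * target_len)
print(sum(best))  # with these settings the search reaches the optimum, 16
```

AlphaEvolve replaces the random bit-flips with LLM-proposed program edits and the toy fitness function with automated evaluation of candidate algorithms, but the select-mutate-score skeleton is the same.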


AI agents are set to explode: Reports forecast 45% annual growth rate — from hrexecutive.com by Jill Barth

At the recent HR Executive and Future Talent Council event at Bentley University near Boston, I talked with Top 100 HR Tech Influencer Joey Price about what he’s hearing from HR leaders. Price is president and CEO of Jumpstart HR and executive analyst at Aspect43, Jumpstart HR’s HR-tech research division, and author of a valuable new book, The Power of HR: How to Make an Organizational Impact as a People Professional.

This puts him solidly at the center of HR’s most relevant conversations. Price described the curiosity he’s hearing from many HR leaders about AI agents, which have become increasingly prominent in recent months.


© 2025 | Daniel Christian