Here are some incredibly powerful numbers from Mary Meeker’s AI Trends report, which showcase how artificial intelligence as a tech is unlike any other the world has ever seen.
AI took only three years to reach 50% user adoption in the US; mobile internet took six years, desktop internet took 12 years, while PCs took 20 years.
ChatGPT reached 100 million users in only two months and 800 million in 17 months, vis-à-vis Netflix’s 100 million (10 years), Instagram’s (2.5 years) and TikTok’s (nine months).
ChatGPT hit 365 billion annual searches within two years of launch (2024), a milestone that took Google 11 years (2009), making ChatGPT roughly 5.5x faster.
Above via Mary Meeker’s AI Trend-Analysis — from getsuperintel.com by Kim “Chubby” Isenberg
How AI’s rapid rise, efficiency race, and talent shifts are reshaping the future.
The TLDR
Mary Meeker’s new AI trends report highlights an explosive rise in global AI usage, surging model efficiency, and mounting pressure on infrastructure and talent. The shift is clear: AI is no longer experimental—it’s becoming foundational, and those who optimize for speed, scale, and specialization will lead the next wave of innovation.
The Rundown: Meta aims to release tools that eliminate humans from the advertising process by 2026, according to a report from the WSJ — developing an AI that can create ads for Facebook and Instagram using just a product image and budget.
The details:
Companies would submit product images and budgets, letting AI craft the text and visuals, select target audiences, and manage campaign placement.
The system will be able to create personalized ads that can adapt in real-time, like a car spot featuring mountains vs. an urban street based on user location.
The push would target smaller companies lacking dedicated marketing staff, promising professional-grade advertising without agency fees or in-house expertise.
Advertising is a core part of Mark Zuckerberg’s AI strategy and already accounts for 97% of Meta’s annual revenue.
Why it matters: We’re already seeing AI transform advertising through image, video, and text, but Zuck’s vision takes the process entirely out of human hands. With so much marketing flowing through FB and IG, a successful system would be a major disruptor — particularly for small brands that just want results without the hassle.
Learning and development professionals face unprecedented challenges in today’s rapidly evolving business landscape. According to LinkedIn’s 2025 Workplace Learning Report, 67 percent of L&D professionals report being “maxed out” on capacity, while 66 percent have experienced budget reductions in the past year.
Despite these constraints, 87 percent agree their organizations need to develop employees faster to keep pace with business demands. These statistics paint a clear picture of the pressure L&D teams face: do more, with less, faster.
This article explores how one L&D leader’s strategic partnership with artificial intelligence transformed these persistent challenges into opportunities, creating a responsive learning ecosystem that addresses the modern demands of rapid product evolution and diverse audience needs. With 71 percent of L&D professionals now identifying AI as a high or very high priority for their learning strategy, this case study demonstrates how AI can serve not merely as a tool but as a collaborative partner in reimagining content development and management.
How we use GenAI and AR to improve students’ design skills — from timeshighereducation.com by Antonio Juarez, Lesly Pliego and Jordi Rábago, who are professors of architecture at Monterrey Institute of Technology in Mexico; Tomas Pachajoa, a professor of architecture at the El Bosque University in Colombia; & Carlos Hinrichsen and Marietta Castro, educators at San Sebastián University in Chile.
Guidance on using generative AI and augmented reality to enhance student creativity, spatial awareness and interdisciplinary collaboration
Blend traditional skills development with AI use
For subjects that require students to develop drawing and modelling skills, have students create initial design sketches or models manually to ensure they practise these skills. Then, introduce GenAI tools such as Midjourney, Leonardo AI and ChatGPT to help students explore new ideas based on their original concepts. Using AI at this stage broadens their creative horizons and introduces innovative perspectives, which are crucial in a rapidly evolving creative industry.
Provide step-by-step tutorials, including both written guides and video demonstrations, to illustrate how initial sketches can be effectively translated into AI-generated concepts. Offer example prompts to demonstrate diverse design possibilities and help students build confidence using GenAI.
Integrating generative AI and AR consistently enhanced student engagement, creativity and spatial understanding on our course.
How Texas is Preparing Higher Education for AI — from the74million.org by Kate McGee
TX colleges are thinking about how to prepare students for a changing workforce and an already overburdened faculty for new challenges in classrooms.
“It doesn’t matter if you enter the health industry, banking, oil and gas, or national security enterprises like we have here in San Antonio,” Eighmy told The Texas Tribune. “Everybody’s asking for competency around AI.”
It’s one of the reasons the public university, which serves 34,000 students, announced earlier this year that it is creating a new college dedicated to AI, cyber security, computing and data science. The new college, which is still in the planning phase, would be one of the first of its kind in the country. UTSA wants to launch the new college by fall 2025.
But many state higher education leaders are thinking beyond that. As AI becomes a part of everyday life in new, unpredictable ways, universities across Texas and the country are also starting to consider how to ensure faculty are keeping up with the new technology and students are ready to use it when they enter the workforce.
To develop a robust policy for generative artificial intelligence use in higher education, institutional leaders must first create “a room” where diverse perspectives are welcome and included in the process.
Q: How do you expect to see AI embraced more in the future in college and the workplace?
I do believe it’s going to become a permanent fixture for multiple reasons. I think the national security imperative associated with AI as a result of competing against other nations is going to drive a lot of energy and support for AI education. We also see shifts across every field and discipline regarding the usage of AI beyond college. We see this in a broad array of fields, including health care and the field of law. I think it’s here to stay and I think that means we’re going to see AI literacy being taught at most colleges and universities, and more faculty leveraging AI to help improve the quality of their instruction. I feel like we’re just at the beginning of a transition. In fact, I often describe our current moment as the ‘Ask Jeeves’ phase of the growth of AI. There’s a lot of change still ahead of us. AI, for better or worse, is here to stay.
A new study from Drexel University and Google has demonstrated that AI-generated educational podcasts can significantly enhance both student engagement and learning outcomes compared to traditional textbooks. The research, involving 180 college students across the United States, represents one of the first systematic investigations into how artificial intelligence can transform educational content delivery in real-time.
Interrogate the Process: We can ask ourselves whether we built in enough checkpoints: steps that can’t be faked, like quick writes, question floods, in-person feedback, and revision logs.
Reframe AI: We can let students use AI as a partner. We can show them how to prompt better, revise harder, and build from it rather than submit it. Show them the difference between using a tool and being used by one.
Design Assignments for Curiosity, Not Compliance: Even the best of our assignments need to adapt. Mine needs more checkpoints, more reflective questions along the way, more explanation of why my students made the choices they did.
The response from teachers and university professors was overwhelming. In my entire career, I’ve rarely gotten so many email responses to a single article, and I have never gotten so many thoughtful and comprehensive responses.
One thing is clear: teachers are not OK.
…
In addition, universities are contracting with companies like Microsoft, Adobe, and Google for digital services, and those companies are constantly pushing their AI tools. So a student might hear “don’t use generative AI” from a prof but then log on to the university’s Microsoft suite, which then suggests using Copilot to sum up readings or help draft writing. It’s inconsistent and confusing.
I am sick to my stomach as I write this because I’ve spent 20 years developing a pedagogy that’s about wrestling with big ideas through writing and discussion, and that whole project has been evaporated by for-profit corporations who built their systems on stolen work. It’s demoralizing.
How do we reconcile the first three points with the final one? The answer is that AI use that boosts individual performance does not naturally translate to improving organizational performance. To get organizational gains requires organizational innovation, rethinking incentives, processes, and even the nature of work. But the muscles for organizational innovation inside companies have atrophied. For decades, companies have outsourced this to consultants or enterprise software vendors who develop generalized approaches that address the issues of many companies at once. That won’t work here, at least for a while. Nobody has special information about how to best use AI at your company, or a playbook for how to integrate it into your organization.
Today we are excited to launch Galileo Learn™, a revolutionary new platform for corporate learning and professional development.
…
How do we leverage AI to revolutionize this model, doing away with the dated “publishing” model of training?
The answer is Galileo Learn, a radically new and different approach to corporate training and professional development.
…
What Exactly is Galileo Learn™?
Galileo Learn is an AI-native learning platform which is tightly integrated into the Galileo agent. It takes content in any form (PDF, Word, audio, video, SCORM courses, and more) and automatically (with your guidance) builds courses, assessments, learning programs, polls, exercises, simulations, and a variety of other instructional formats.
Centering Public Understanding in AI Education
In a recent talk titled “Designing an Ecosystem of Resources to Foster AI Literacy,” Duri Long, Assistant Professor at Northwestern University, highlighted the growing need for accessible, engaging learning experiences that empower the public to make informed decisions about artificial intelligence. Long emphasized that as AI technologies increasingly influence everyday life, fostering public understanding is not just beneficial—it’s essential. Her work seeks to develop a framework for AI literacy across varying audiences, from middle school students to adult learners and journalists.
A Design-Driven, Multi-Context Approach
Drawing from design research, cognitive science, and the learning sciences, Long presented a range of educational tools aimed at demystifying AI. Her team has created hands-on museum exhibits, such as Data Bites, where learners build physical datasets to explore how computers learn. These interactive experiences, along with web-based tools and support resources, are part of a broader initiative to bridge AI knowledge gaps using the 4As framework: Ask, Adapt, Author, and Analyze. Central to her approach is the belief that familiar, tangible interactions and interfaces reduce intimidation and promote deeper engagement with complex AI concepts.
Daniel Schwarcz
University of Minnesota Law School
Sam Manning
Centre for the Governance of AI
Patrick Barry
University of Michigan Law School
David R. Cleveland
University of Minnesota Law School
J.J. Prescott
University of Michigan Law School
Beverly Rich
Ogletree Deakins
Abstract
Generative AI is set to transform the legal profession, but its full impact remains uncertain. While AI models like GPT-4 improve the efficiency with which legal work can be completed, they can at times make up cases and “hallucinate” facts, thereby undermining legal judgment, particularly in complex tasks handled by skilled lawyers. This article examines two emerging AI innovations that may mitigate these lingering issues: Retrieval Augmented Generation (RAG), which grounds AI-powered analysis in legal sources, and AI reasoning models, which structure complex reasoning before generating output. We conducted the first randomized controlled trial assessing these technologies, assigning upper-level law students to complete six legal tasks using a RAG-powered legal AI tool (Vincent AI), an AI reasoning model (OpenAI’s o1-preview), or no AI. We find that both AI tools significantly enhanced legal work quality, a marked contrast with previous research examining older large language models like GPT-4. Moreover, we find that these models maintain the efficiency benefits associated with use of older AI technologies. Our findings show that AI assistance significantly boosts productivity in five out of six tested legal tasks, with Vincent yielding statistically significant gains of approximately 38% to 115% and o1-preview increasing productivity by 34% to 140%, with particularly strong effects in complex tasks like drafting persuasive letters and analyzing complaints. Notably, o1-preview improved the analytical depth of participants’ work product but resulted in some hallucinations, whereas Vincent AI-aided participants produced roughly the same amount of hallucinations as participants who did not use AI at all. These findings suggest that integrating domain-specific RAG capabilities with reasoning models could yield synergistic improvements, shaping the next generation of AI-powered legal tools and the future of lawyering more generally.
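The Retrieval Augmented Generation approach the abstract describes can be sketched in a few lines. This is purely an illustrative outline of the pattern, assuming a toy corpus and naive keyword-overlap retrieval; it does not reflect how Vincent AI or any other product is actually implemented.

```python
# Minimal sketch of Retrieval Augmented Generation (RAG) for legal Q&A.
# Illustrative only: the toy corpus, overlap scoring, and prompt wording
# are our own placeholders, not the workings of any real legal AI tool.
import re

def tokenize(text):
    # Lowercase word set, stripped of punctuation.
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question, corpus, k=2):
    """Rank passages by naive keyword overlap with the question."""
    q = tokenize(question)
    return sorted(corpus, key=lambda p: len(q & tokenize(p)), reverse=True)[:k]

def build_grounded_prompt(question, passages):
    """Assemble a prompt that tells the model to answer only from the sources."""
    sources = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using ONLY the numbered sources below and cite them.\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )

corpus = [
    "A contract requires offer, acceptance, and consideration.",
    "Negligence requires duty, breach, causation, and damages.",
    "Adverse possession requires continuous occupation for a statutory period.",
]
question = "What elements are required for negligence?"
passages = retrieve(question, corpus)
prompt = build_grounded_prompt(question, passages)
```

The key idea is that the model is constrained to passages retrieved from real legal sources, which is why the study finds RAG-aided participants hallucinate no more than participants working without AI.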
One key change is the growing adoption of technology within legal service providers, and this is transforming the way firms operate and deliver value to clients.
The legal services sector’s digital transformation is gaining momentum, driven both by client expectations as well as the potential for operational efficiency. With the right support, legal firms can innovate through tech adoption and remain competitive to deliver strong client outcomes and long-term growth.
Artificial intelligence can perform several tasks to aid lawyers and save time. But lawyers must be cautious when using this new technology, lest they break confidentiality or violate ethical standards.
The New York State Bar Association hosted a hybrid program discussing AI’s potential and its pitfalls for the legal profession. More than 300 people watched the livestream.
For that reason, Unger suggests using legal AI tools, like LexisNexis AI, Westlaw Edge, and vLex Fastcase, for legal research instead of general generative AI tools. While legal-specific tools still hallucinate, they hallucinate much less. A legal tool will hallucinate 10% to 20% of the time, while a tool like ChatGPT will hallucinate 50% to 80%.
Determining which legal technology is best for your law firm can seem like a daunting task, so Legaltech Hub does the hard work for you! In another edition of Fresh Voices, Dennis and Tom talk with Nikki Shaver, CEO at Legaltech Hub, about her in-depth knowledge of technology and AI trends. Nikki shares what effective tech strategies should look like for attorneys and recommends innovative tools for maintaining best practices in modern law firms. Learn more at legaltechnologyhub.com.
AI will continue to transform in-house legal departments in 2025
As we enter 2025, over two-thirds of organisations plan to increase their Generative AI (GenAI) investments, providing legal teams with significant executive support and resources to further develop these capabilities. This presents a substantial opportunity for legal departments, particularly as GenAI technology continues to advance at an impressive pace. We make five predictions for AI engagement and adoption in the legal market over the coming year and beyond.
Use Aggregated Data: Providing consumers with benchmarks (e.g., “90% of users in your position accepted similar settlements”) empowers them without giving direct legal advice.
Train and Supervise AI Tools: AI works best when it’s trained on reliable, localized data and supervised by legal professionals.
Partner with Courts: As Quinten pointed out, tools built in collaboration with courts often avoid UPL pitfalls. They’re also more likely to gain the trust of both regulators and consumers.
Embrace Transparency: Clear disclaimers like “This is not legal advice” go a long way in building consumer trust and meeting ethical standards.
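The first tip above, surfacing an aggregate benchmark rather than individualized advice, can be sketched as a small computation. The records and claim types below are invented sample data for illustration, not drawn from any real settlement database.

```python
# Sketch of the "aggregated data" tip: report a benchmark statistic
# instead of giving direct legal advice. All records here are invented
# sample data, not real settlement outcomes.

def settlement_benchmark(records, claim_type):
    """Percentage of users with a matching claim type who accepted a settlement."""
    matching = [r for r in records if r["claim_type"] == claim_type]
    if not matching:
        return None  # No comparable cases: show no benchmark at all.
    accepted = sum(1 for r in matching if r["accepted"])
    return round(100 * accepted / len(matching))

records = [
    {"claim_type": "security_deposit", "accepted": True},
    {"claim_type": "security_deposit", "accepted": True},
    {"claim_type": "security_deposit", "accepted": False},
    {"claim_type": "wage_claim", "accepted": True},
]
pct = settlement_benchmark(records, "security_deposit")
message = f"{pct}% of users in your position accepted similar settlements. This is not legal advice."
```

Pairing the statistic with an explicit disclaimer, as in the last tip, keeps the tool on the information side of the UPL line.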
With a running time of 2 hours, Google I/O 2025 leaned heavily into Gemini and new models that make the assistant work in more places than ever before. Despite focusing the majority of the keynote around Gemini, Google saved its most ambitious and anticipated announcement towards the end with its big Android XR smart glasses reveal.
Shockingly, very little time was spent on Android 16. Most of the Android 16-related news, like the redesigned Material 3 Expressive interface, was announced during the Android Show live stream last week — which explains why Google I/O 2025 was such an AI-heavy showcase.
That’s because Google carved out most of the keynote to dive deeper into Gemini, its new models, and integrations with other Google services. There’s clearly a lot to unpack, so here’s all the biggest Google I/O 2025 announcements.
Our vision for building a universal AI assistant — from blog.google
We’re extending Gemini to become a world model that can make plans and imagine new experiences by simulating aspects of the world.
Making Gemini a world model is a critical step in developing a new, more general and more useful kind of AI — a universal AI assistant. This is an AI that’s intelligent, understands the context you are in, and that can plan and take action on your behalf, across any device.
By applying LearnLM capabilities, and directly incorporating feedback from experts across the industry, Gemini adheres to the principles of learning science to go beyond just giving you the answer. Instead, Gemini can explain how you get there, helping you untangle even the most complex questions and topics so you can learn more effectively. Our new prompting guide provides sample instructions to see this in action.
Learn in newer, deeper ways with Gemini — from blog.google.com by Ben Gomes
We’re infusing LearnLM directly into Gemini 2.5 — plus more learning news from I/O.
At I/O 2025, we announced that we’re infusing LearnLM directly into Gemini 2.5, which is now the world’s leading model for learning. As detailed in our latest report, Gemini 2.5 Pro outperformed competitors on every category of learning science principles. Educators and pedagogy experts preferred Gemini 2.5 Pro over other offerings across a range of learning scenarios, both for supporting a user’s learning goals and on key principles of good pedagogy.
Gemini gets more personal, proactive and powerful — from blog.google.com by Josh Woodward
It’s your turn to create, learn and explore with an AI assistant that’s starting to understand your world and anticipate your needs.
Here’s what we announced at Google I/O:
Gemini Live, with camera and screen sharing, is now free on Android and iOS for everyone, so you can point your phone at anything and talk it through.
Imagen 4, our new image generation model, comes built in and is known for its image quality, better text rendering and speed.
Veo 3, our new, state-of-the-art video generation model, comes built in and is the first in the world to have native support for sound effects, background noises and dialogue between characters.
Deep Research and Canvas are getting their biggest updates yet, unlocking new ways to analyze information, create podcasts and vibe code websites and apps.
Gemini is coming to Chrome, so you can ask questions while browsing the web.
Students around the world can easily make interactive quizzes, and college students in the U.S., Brazil, Indonesia, Japan and the UK are eligible for a free school year of the Google AI Pro plan.
Google AI Ultra, a new premium plan, is for the pioneers who want the highest rate limits and early access to new features in the Gemini app.
2.5 Flash has become our new default model, and it blends incredible quality with lightning fast response times.
AI in Search is making it easier to ask Google anything and get a helpful response, with links to the web. That’s why AI Overviews is one of the most successful launches in Search in the past decade. As people use AI Overviews, we see they’re happier with their results, and they search more often. In our biggest markets like the U.S. and India, AI Overviews is driving a more than 10% increase in usage of Google for the types of queries that show AI Overviews.
This means that once people use AI Overviews, they’re coming to do more of these types of queries, and what’s particularly exciting is how this growth increases over time. And we’re delivering this at the speed people expect of Google Search — AI Overviews delivers the fastest AI responses in the industry.
Call it the ultimate proving ground. Collaborating with teammates in the modern workplace requires fast, fluid thinking. Providing insights quickly, while juggling webcams and office messaging channels, is a startlingly good test, and enterprise AI is about to pass it — just in time to provide assistance to busy knowledge workers.
To support enterprises in boosting productivity with AI teammates, NVIDIA today introduced a new NVIDIA Enterprise AI Factory validated design at COMPUTEX. IT teams deploying and scaling AI agents can use the design to build accelerated infrastructure and easily integrate with platforms and tools from NVIDIA software partners.
NVIDIA also unveiled new NVIDIA AI Blueprints to aid developers building smart AI teammates. Using the new blueprints, developers can enhance employee productivity through adaptive avatars that understand natural communication and have direct access to enterprise data.
“AI is now infrastructure, and this infrastructure, just like the internet, just like electricity, needs factories,” Huang said. “These factories are essentially what we build today.”
“They’re not data centers of the past,” Huang added. “These AI data centers, if you will, are improperly described. They are, in fact, AI factories. You apply energy to it, and it produces something incredibly valuable, and these things are called tokens.”
More’s coming, Huang said, describing the growing power of AI to reason and perceive. That leads us to agentic AI — AI able to understand, think and act. Beyond that is physical AI — AI that understands the world. The phase after that, he said, is general robotics.
May 19 (Reuters) – Dell Technologies (DELL.N) on Monday unveiled new servers powered by Nvidia’s (NVDA.O) Blackwell Ultra chips, aiming to capitalize on the booming demand for artificial intelligence systems.
The servers, available in both air-cooled and liquid-cooled variations, support up to 192 Nvidia Blackwell Ultra chips but can be customized to include as many as 256 chips.
Nvidia (NVDA) rolled into this year’s Computex Taipei tech expo on Monday with several announcements, ranging from the development of humanoid robots to the opening up of its high-powered NVLink technology, which allows companies to build semi-custom AI servers with Nvidia’s infrastructure.
…
During the event on Monday, Nvidia revealed its Nvidia Isaac GR00T-Dreams, which the company says helps developers create enormous amounts of training data they can use to teach robots how to perform different behaviors and adapt to new environments.
‘What I learned when students walked out of my AI class’ — from timeshighereducation.com by Chris Hogg
Chris Hogg found the question of using AI to create art troubled his students deeply. Here’s how the moment led to deeper understanding for both student and educator
Teaching AI can be as thrilling as it is challenging. This became clear one day when three students walked out of my class, visibly upset. They later explained their frustration: after spending years learning their creative skills, they were disheartened to see AI effortlessly outperform them in the blink of an eye.
This moment stuck with me – not because it was unexpected, but because it encapsulates the paradoxical relationship we all seem to have with AI. As both an educator and a creative, I find myself asking: how do we engage with this powerful tool without losing ourselves in the process? This is the story of how I turned moments of resistance into opportunities for deeper understanding.
In the AI era, how do we battle cognitive laziness in students? — from timeshighereducation.com by Sean McMinn
With the latest AI technology now able to handle complex problem-solving processes, will students risk losing their own cognitive engagement? Metacognitive scaffolding could be the answer, writes Sean McMinn
The concern about cognitive laziness seems to be backed by Anthropic’s report that students use AI tools like Claude primarily for creating (39.8 per cent) and analysing (30.2 per cent) tasks, both considered higher-order cognitive functions according to Bloom’s Taxonomy. While these tasks align well with advanced educational objectives, they also pose a risk: students may increasingly delegate critical thinking and complex cognitive processes directly to AI, risking a reduction in their own cognitive engagement and skill development.
Make Instructional Design Fun Again with AI Agents — from drphilippahardman.substack.com by Dr. Philippa Hardman
A special edition practical guide to selecting & building AI agents for instructional design and L&D
Exactly how we do this has been less clear, but — fuelled by the rise of so-called “Agentic AI” — more and more instructional designers ask me: “What exactly can I delegate to AI agents, and how do I start?”
In this week’s post, I share my thoughts on exactly what instructional design tasks can be delegated to AI agents, and provide a step-by-step approach to building and testing your first AI agent.
After providing Claude with several prompts of context about my creative writing project, I requested feedback on one of my novel chapters. The AI provided thoughtful analysis with pros and cons, as expected. But then I noticed what wasn’t there: the customary offer to rewrite my chapter.
… Without Claude’s prompting, I found myself in an unexpected moment of metacognition. When faced with improvement suggestions but no offer to implement them, I had to consciously ask myself:“Do I actually want AI to rewrite this section?” The answer surprised me – no, I wanted to revise it myself, incorporating the insights while maintaining my voice and process.
The contrast was striking. With ChatGPT, accepting its offer to rewrite felt like a passive, almost innocent act – as if I were just saying “yes” to a helpful assistant. But with Claude, requesting a rewrite required deliberate action. Typing out the request felt like a more conscious surrender of creative agency.
Also re: metacognition and AI, see:
In the AI era, how do we battle cognitive laziness in students? — from timeshighereducation.com by Sean McMinn
With the latest AI technology now able to handle complex problem-solving processes, will students risk losing their own cognitive engagement? Metacognitive scaffolding could be the answer, writes Sean McMinn
By prompting students to articulate their cognitive processes, such tools reinforce the internalisation of self-regulated learning strategies essential for navigating AI-augmented environments.
EDUCAUSE Panel Highlights Practical Uses for AI in Higher Ed — from govtech.com by Abby Sourwine
A webinar this week featuring panelists from the education, private and nonprofit sectors attested to how institutions are applying generative artificial intelligence to advising, admissions, research and IT.
Many higher education leaders have expressed hope about the potential of artificial intelligence but uncertainty about where to implement it safely and effectively. According to a webinar Tuesday hosted by EDUCAUSE, “Unlocking AI’s Potential in Higher Education,” their answer may be “almost everywhere.”
Panelists at the event, including Kaskaskia College CIO George Kriss, Canyon GBS founder and CEO Joe Licata and Austin Laird, a senior program officer at the Gates Foundation, said generative AI can help colleges and universities meet increasing demands for personalization, timely communication and human-to-human connections throughout an institution, from advising to research to IT support.
Here are the predictions, our votes, and some commentary:
“By 2028, at least half of large universities will embed an AI ‘copilot’ inside their LMS that can draft content, quizzes, and rubrics on demand.” The group leaned toward yes on this one, in part because it was easy to see LMS vendors building this feature in as a default.
“Discipline-specific ‘digital tutors’ (LLM chatbots trained on course materials) will handle at least 30% of routine student questions in gateway courses.” We leaned toward yes on this one, too, which is why some of us are exploring these tools today. We would like to be ready to use them well (or to avoid their use) when they are commonly available.
“Adaptive e-texts whose examples, difficulty, and media personalize in real time via AI will outsell static digital textbooks in the U.S. market.” We leaned toward no on this one, in part because the textbook market and what students want from textbooks has historically been slow to change. I remember offering my students a digital version of my statistics textbook maybe 6-7 years ago, and most students opted to print the whole thing out on paper like it was 1983.
“AI text detectors will be largely abandoned as unreliable, shifting assessment design toward oral, studio, or project-based ‘AI-resilient’ tasks.” We leaned toward yes on this. I have some concerns about oral assessments (they certainly privilege some students over others), but more authentic assignments seem like what higher ed needs in the face of AI. Ted Underwood recently suggested a version of this: “projects that attempt genuinely new things, which remain hard even with AI assistance.” See his post and the replies for some good discussion on this idea.
“AI will produce multimodal accessibility layers (live translation, alt-text, sign-language avatars) for most lecture videos without human editing.” We leaned toward yes on this one, too. This seems like another case where something will be provided by default, although my podcast transcripts are AI-generated and still need editing from me, so we’re not there quite yet.
Description: I honestly don’t know how I should be educating my kids. A.I. has raised a lot of questions for schools. Teachers have had to adapt to the most ingenious cheating technology ever devised. But for me, the deeper question is: What should schools be teaching at all? A.I. is going to make the future look very different. How do you prepare kids for a world you can’t predict?
And if we can offload more and more tasks to generative A.I., what’s left for the human mind to do?
Rebecca Winthrop is the director of the Center for Universal Education at the Brookings Institution. She is also an author, with Jenny Anderson, of “The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better.” We discuss how A.I. is transforming what it means to work and be educated, and how our use of A.I. could revive — or undermine — American schools.
Anthropic’s “Prompt Engineering Overview” is a free masterclass that’s worth its weight in gold. Their “constitutional AI prompting” section helped us create a content filter that actually works—unlike the one that kept flagging our coffee bean reviews as “inappropriate.” Apparently “rich body” triggered something…
OpenAI’s “Cookbook” is like having a Michelin-star chef explain cooking—simple for beginners, but packed with pro techniques. Their JSON formatting examples saved us 3 hours of debugging last week…
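The kind of JSON-formatting bug the Cookbook helps with is easy to illustrate. Below is a minimal, hypothetical sketch (not taken from the Cookbook itself): a parser that tolerates a model reply wrapped in a Markdown code fence, one of the most common reasons a plain `json.loads()` call fails.

```python
import json

def parse_model_json(raw: str) -> dict:
    """Parse JSON from a model reply, tolerating the common failure mode
    of the answer being wrapped in a Markdown code fence."""
    text = raw.strip()
    if text.startswith("```"):
        # Drop the opening fence (with its optional language tag)
        # and the closing fence, keeping only the JSON payload.
        text = text.split("\n", 1)[1]
        text = text.rsplit("```", 1)[0]
    return json.loads(text)

# A fenced reply that plain json.loads() would reject:
reply = '```json\n{"sentiment": "positive", "score": 0.9}\n```'
result = parse_model_json(reply)
```

Asking the model explicitly for “raw JSON only, no code fences” in the prompt reduces, but in our experience does not eliminate, the need for this kind of defensive parsing.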
Google’s “Prompt Design Strategies” breaks down complex concepts with clear examples. Their before/after gallery showing how slight prompt tweaks improve results made us rethink everything we knew about getting quality outputs.
Pro tip: Save these guides as PDFs before they disappear behind paywalls. The best AI users keep libraries of these resources for quick reference.
“To address this, organizations should consider building a sustainable AI governance model, prioritizing transparency, and tackling the complex challenge of AI-fueled imposter syndrome through reinvention. Employers who fail to approach innovation with empathy and provide employees with autonomy run the risk of losing valuable staff and negatively impacting employee productivity.”
Key findings from the report include the following:
Employees are keeping their productivity gains a secret from their employers. …
In-office employees may still log in remotely after hours. …
Younger workers are more likely to switch jobs to gain more flexibility.
AI discovers new math algorithms — by Zach Mink & Rowan Cheung PLUS: Anthropic reportedly set to launch new Sonnet, Opus models
The Rundown: Google just debuted AlphaEvolve, a coding agent that harnesses Gemini and evolutionary strategies to craft algorithms for scientific and computational challenges — driving efficiency inside Google and solving historic math problems.
… Why it matters: Yesterday, we had OpenAI’s Jakub Pachocki saying AI has shown “significant evidence” of being capable of novel insights, and today Google has taken that a step further. Math plays a role in nearly every aspect of life, and AI’s pattern and algorithmic strengths look ready to uncover a whole new world of scientific discovery.
At the recent HR Executive and Future Talent Council event at Bentley University near Boston, I talked with Top 100 HR Tech Influencer Joey Price about what he’s hearing from HR leaders. Price is president and CEO of Jumpstart HR and executive analyst at Aspect43, Jumpstart HR’s HR-tech research division, and author of a valuable new book, The Power of HR: How to Make an Organizational Impact as a People Professional.
This puts him solidly at the center of HR’s most relevant conversations. Price described the curiosity he’s hearing from many HR leaders about AI agents, which have become increasingly prominent in recent months.
Searching for a job can feel overwhelming, but AI-powered tools are making the process faster, smarter, and more personalized than ever. Whether you’re optimizing your resume, crafting a tailored cover letter, or keeping track of applications, artificial intelligence is transforming how job seekers market themselves and navigate the job search with precision.
Note: These tools are rapidly evolving. Many are backed by major investments, which means new features and improvements are rolling out regularly. If you’re just starting your job search or looking to sharpen your strategy, these tools are worth exploring.
Get the 2025 Student Guide to Artificial Intelligence — from studentguidetoai.org This guide is made available under a Creative Commons license by Elon University and the American Association of Colleges and Universities (AAC&U).
Agentic AI is taking these already huge strides even further. Rather than simply asking a question and receiving an answer, an AI agent can assess your current level of understanding and tailor a reply to help you learn. It can also help you come up with a timetable and personalized lesson plan to make you feel as though you have a one-on-one instructor walking you through the process. If your goal is to learn to speak a new language, for example, an agent might map out a plan starting with basic vocabulary and pronunciation exercises, then progress to simple conversations, grammar rules and finally, real-world listening and speaking practice.
…
For instance, if you’re an entrepreneur looking to sharpen your leadership skills, an AI agent might suggest a mix of foundational books, insightful TED Talks and case studies on high-performing executives. If you’re aiming to master data analysis, it might point you toward hands-on coding exercises, interactive tutorials and real-world datasets to practice with.
The beauty of AI-driven learning is that it’s adaptive. As you gain proficiency, your AI coach can shift its recommendations, challenge you with new concepts and even simulate real-world scenarios to deepen your understanding.
Ironically, the very technology feared by workers can also be leveraged to help them. Rather than requiring expensive external training programs or lengthy in-person workshops, AI agents can deliver personalized, on-demand learning paths tailored to each employee’s role, skill level, and career aspirations. Given that 68% of employees find today’s workplace training to be overly “one-size-fits-all,” an AI-driven approach will not only cut costs and save time but will be more effective.
This is one reason why I don’t see AI-embedded classrooms and AI-free classrooms as opposite poles. The bone of contention, here, is not whether we can cultivate AI-free moments in the classroom, but for how long those moments are actually sustainable.
Can we sustain those AI-free moments for an hour? A class session? Longer?
…
Here’s what I think will happen. As AI becomes embedded in society at large, the sustainability of imposed AI-free learning spaces will get tested. Hard. I think it’ll become more and more difficult (though maybe not impossible) to impose AI-free learning spaces on students.
However, consensual and hybrid AI-free learning spaces will continue to have a lot of value. I can imagine classes where students opt into an AI-free space. Or they’ll even create and maintain those spaces.
Duolingo’s AI Revolution — from drphilippahardman.substack.com by Dr. Philippa Hardman What 148 AI-Generated Courses Tell Us About the Future of Instructional Design & Human Learning
Last week, Duolingo announced an unprecedented expansion: 148 new language courses created using generative AI, effectively doubling their content library in just one year. This represents a seismic shift in how learning content is created — a process that previously took the company 12 years for their first 100 courses.
As CEO Luis von Ahn stated in the announcement, “This is a great example of how generative AI can directly benefit our learners… allowing us to scale at unprecedented speed and quality.”
In this week’s blog, I’ll dissect exactly how Duolingo has reimagined instructional design through AI, what this means for the learner experience, and most importantly, what it tells us about the future of our profession.
Medical education is experiencing a quiet revolution—one that’s not taking place in lecture theatres or textbooks, but with headsets and holograms. At the heart of this revolution are Mixed Reality (MR) AI Agents, a new generation of devices that combine the immersive depth of mixed reality with the flexibility of artificial intelligence. These technologies are not mere flashy gadgets; they’re revolutionising the way medical students interact with complicated content, rehearse clinical skills, and prepare for real-world situations. By combining digital simulations with the physical world, MR AI Agents are redefining what it means to learn medicine in the 21st century.
4 Reasons To Use Claude AI to Teach — from techlearning.com by Erik Ofgang Features that make Claude AI appealing to educators include a focus on privacy and conversational style.
After experimenting with Claude AI on various teaching exercises, from generating quizzes to tutoring and offering writing suggestions, I found that it’s not perfect, but I think it compares favorably to other AI tools in general, with an easy-to-use interface and some unique features that make it particularly suited for use in education.
So this edition is simple: a quick, practical guide to the major generative AI models available in 2025 so far. What they’re good at, what to use them for, and where they might fit into your legal work—from document summarization to client communication to research support.
From DSC: This comprehensive, highly informational posting lists what the model is, its strengths, the best legal use cases for it, and responsible use tips as well.
Of course AI will continue to make waves, but what other important legal technologies do you need to be aware of in 2025? Dennis and Tom give an overview of legal tech tools—both new and old—you should be using for successful, modernized legal workflows in your practice. They recommend solutions for task management, collaboration, calendars, projects, legal research, and more.
Later, the guys answer a listener’s question about online prompt libraries. Are there reputable, useful prompts available freely on the internet? They discuss their suggestions for prompt resources and share why these libraries tend to quickly become outdated.
If you follow legal tech at all, you would be justified in suspecting that Tom Martin has figured out how to use artificial intelligence to clone himself.
While running LawDroid, his legal tech company, the Vancouver-based Martin also still manages a law practice in California, oversees an annual legal tech awards program, teaches a law school course on generative AI, runs an annual AI conference, hosts a podcast, and recently launched a legal tech consultancy.
In January 2023, less than two months after ChatGPT first launched, Martin’s company was one of the first to launch a gen AI assistant specifically for lawyers, called LawDroid Copilot. He has since also launched LawDroid Builder, a no-code platform for creating custom AI agents.
In a profession that’s actively contemplating its future in the face of AI, legal organization leaders who demonstrate a genuine desire to invest in the next generation of legal professionals will undoubtedly set themselves apart.
Artificial intelligence (AI) is here. And it’s already reshaping the way law firms operate. Whether automating repetitive tasks, improving risk management, or boosting efficiency, AI presents a genuine opportunity for forward-thinking legal practices. But with new opportunities come new responsibilities. And as firms explore AI tools, it’s essential they consider how to govern them safely and ethically. That’s where an AI policy becomes indispensable.
So, what can AI actually do for your firm right now? Let’s take a closer look.
DC: THIS could unfortunately be the ROI companies will get from large investments in #AI — reduced headcount/employees/contract workers. https://t.co/zEWlqCSWzI
Duolingo will “gradually stop using contractors to do work that AI can handle,” according to an all-hands email sent by cofounder and CEO Luis von Ahn announcing that the company will be “AI-first.” The email was posted on Duolingo’s LinkedIn account.
According to von Ahn, being “AI-first” means the company will “need to rethink much of how we work” and that “making minor tweaks to systems designed for humans won’t get us there.” As part of the shift, the company will roll out “a few constructive constraints,” including the changes to how it works with contractors, looking for AI use in hiring and in performance reviews, and that “headcount will only be given if a team cannot automate more of their work.”
Something strange, and potentially alarming, is happening to the job market for young, educated workers.
According to the New York Federal Reserve, labor conditions for recent college graduates have “deteriorated noticeably” in the past few months, and the unemployment rate now stands at an unusually high 5.8 percent. Even newly minted M.B.A.s from elite programs are struggling to find work. Meanwhile, law-school applications are surging—an ominous echo of when young people used graduate school to bunker down during the great financial crisis.
What’s going on? I see three plausible explanations, and each might be a little bit true.
The new workplace trend is not employee friendly. Artificial intelligence and automation technologies are advancing at blazing speed. A growing number of companies are using AI to streamline operations, cut costs, and boost productivity. Consequently, human workers are facing layoffs, replaced by AI. Like it or not, companies need to make tough decisions, including layoffs, to remain competitive.
Corporations including Klarna, UPS, Duolingo, Intuit and Cisco are replacing laid-off workers with AI and automation. While these technologies enhance productivity, they raise serious concerns about future job security. For many workers, there is real concern over whether their jobs will be affected.
Key takeaway: Career navigation has remained largely unchanged for decades, relying on personal networks and static job boards. The advent of AI is changing this, offering personalised career pathways, better job matching, democratised job application support, democratised access to career advice/coaching, and tailored skill development to help you get to where you need to be. Hundreds of millions of people start new jobs every year; this transformation opens up a multi-billion dollar opportunity for innovation in the global career navigation market.
…
A.4 How will AI disrupt this segment? Personalised recommendations: AI can consume a vast amount of information (skills, education, career history, even YouTube history and X/Twitter feeds), standardise this data at scale, and then use data models to match candidate characteristics to relevant careers and jobs. In theory, solutions could then go layers deeper, helping you position yourself for those future roles. Currently based in Amsterdam, working in Strategy at Uber, and want to work in a Product role in the future? Here are X,Y,Z specific things YOU can do in your role today to align yourself perfectly. E.g. find opportunities to manage cross-functional projects in your current remit, reach out to Joe Bloggs, also at Uber in Amsterdam, who did Strategy and moved to Product, etc.
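The matching step described above can be sketched in a few lines. This is a toy illustration under stated assumptions (skills reduced to bag-of-words vectors; real systems use learned embeddings and far richer signals), but the core idea of scoring candidate-role similarity is the same.

```python
import math
from collections import Counter

def skill_vector(skills: list[str]) -> Counter:
    """Represent a skill list as a simple term-count vector."""
    return Counter(s.lower() for s in skills)

def cosine_match(candidate: list[str], role: list[str]) -> float:
    """Toy candidate-to-role match score: cosine similarity over skill terms.
    Returns a value in [0, 1]; higher means a closer match."""
    a, b = skill_vector(candidate), skill_vector(role)
    dot = sum(a[k] * b[k] for k in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

score = cosine_match(
    ["strategy", "sql", "stakeholder management"],
    ["product sense", "sql", "stakeholder management"],
)
```

A production recommender would layer on career-history data, role transitions (the “Joe Bloggs moved from Strategy to Product” signal), and ranking models, but the candidate-vector comparison above is the seed of the approach.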
No matter the school, no matter the location, when I deliver an AI workshop to a group of teachers, there are always at least a few colleagues thinking (and sometimes voicing), “Do I really need to use AI?”
Nearly three years after ChatGPT 3.5 landed in our lives and disrupted workflows in ways we’re still unpacking, most schools are swiftly catching up. Training sessions, like the ones I lead, are springing up everywhere, with principals and administrators trying to answer the same questions: Which tools should we use? How do we use them responsibly? How do we design learning in this new landscape?
But here’s what surprises me most: despite all the advances in AI technology, the questions and concerns from teachers remain strikingly consistent.
…
In this article, I want to pull back the curtain on those conversations. These concerns aren’t signs of reluctance – they reflect sincere feelings. And they deserve thoughtful, honest answers.
This week, in advance of major announcements from us and other vendors, I give you a good overview of the AI Agent market, and discuss the new role of AI governance platforms, AI agent development tools, AI agent vendors, and how AI agents will actually manifest and redefine what we call an “application.”
I discuss ServiceNow, Microsoft, SAP, Workday, Paradox, Maki People, and other vendors. My goal today is to “demystify” this space and explain the market, the trends, and why and how your IT department is going to be building a lot of the agents you need. And prepare for our announcements next week!
DeepSeek has quietly launched Prover V2, an open-source model built to solve math problems using the Lean 4 proof assistant, which ensures every step of a proof is rigorously verified.
What’s impressive about it?
Massive scale: Based on DeepSeek-V3 with 671B parameters using a mixture-of-experts (MoE) architecture, which activates only parts of the model at a time to reduce compute costs.
Theorem solving: Uses long context windows (32K+ tokens) to generate detailed, step-by-step formal proofs for a wide range of math problems — from basic algebra to advanced calculus theorems.
Research grade: Assists mathematicians in testing new theorems automatically and helps students understand formal logic by generating both Lean 4 code and readable explanations.
New benchmark: Introduces ProverBench, a new 325-question benchmark set featuring problems from recent AIME exams and curated academic sources to evaluate mathematical reasoning.
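For a sense of what “rigorously verified” means here, consider a minimal Lean 4 example (illustrative only, not Prover V2 output). Lean accepts a file only if every proof actually closes its goal, so a model-generated proof that type-checks is machine-verified end to end; Prover V2 produces proofs of this form at much greater scale and difficulty.

```lean
-- The Lean 4 kernel checks that this term really proves the stated
-- theorem; a wrong or incomplete proof is rejected at compile time.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```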
The need for deep student engagement became clear at Dartmouth Geisel School of Medicine when a potential academic-integrity issue revealed gaps in its initial approach to artificial intelligence use in the classroom, leading to significant revisions to ensure equitable learning and assessment.
From George Siemens “SAIL: Transmutation, Assessment, Robots e-newsletter on 5/2/25
All indications are that AI, even if it stops advancing, has the capacity to dramatically change knowledge work. Knowing things matters less than being able to navigate and make sense of complex environments. Put another way, sensemaking, meaningmaking, and wayfinding (with their yet to be defined subelements) will be the foundation for being knowledgeable going forward.
That will require being able to personalize learning to each individual learner so that who they are (not what our content is) forms the pedagogical entry point to learning. (DSC: And I would add WHAT THEY WANT TO ACHIEVE.) LLMs are particularly good at transmutation. Want to explain AI to a farmer? A sentence or two in a system prompt achieves that. Know that a learner has ADHD? A few small prompt changes and it’s reflected in the way the LLM engages with learning. Talk like a pirate. Speak in the language of Shakespeare. Language changes. All a matter of a small meta comment sent to the LLM. I’m convinced that this capability to change, to transmute, information will become a central part of how LLMs and AI are adopted in education.
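Siemens’s “transmutation” point can be made concrete. The snippet below is a hypothetical sketch (the message format mirrors common chat-completion APIs, but no specific provider is assumed): the same question, retargeted for two different learners by changing nothing but a one-line system prompt.

```python
def build_messages(audience_note: str, question: str) -> list[dict]:
    """Same question, different audience: the 'transmutation' lives
    entirely in a one-sentence system prompt."""
    return [
        {"role": "system",
         "content": f"You are a patient tutor. {audience_note}"},
        {"role": "user", "content": question},
    ]

question = "What is a neural network?"
for_farmer = build_messages(
    "Explain everything with farming analogies.", question)
for_adhd_learner = build_messages(
    "Use short sentences and frequent summaries for a learner with ADHD.",
    question)
```

The content never changes; only the meta-instruction does, which is exactly why this kind of per-learner adaptation is cheap enough to become a default.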
… Speaking of Duolingo: it took them 12 years to develop 100 courses. In the last year, they developed an additional 148. AI is an accelerant with an impact in education that is hard to overstate. “Instead of taking years to build a single course with humans the company now builds a base course and uses AI to quickly customize it for dozens of different languages.”
FutureHouse is launching our platform, bringing the first publicly available superintelligent scientific agents to scientists everywhere via a web interface and API. Try it out for free at https://platform.futurehouse.org.
MOOC-Style Skills Training — from the-job.beehiiv.com by Paul Fain WGU and tech companies use Open edX for flexible online learning. Could community colleges be next?
Open Source for Affordable Online Reach
The online titan Western Governors University is experimenting with an open-source learning platform. So are Verizon and the Indian government. And the platform’s leaders want to help community colleges take the plunge on competency-based education.
…
The Open edX platform inherently supports self-paced learning and offers several features that make it a good fit for competency-based education and skills-forward learning, says Stephanie Khurana, Axim’s CEO.
“Flexible modalities and a focus on competence instead of time spent learning improves access and affordability for learners who balance work and life responsibilities alongside their education,” she says.
“Plus, being open source means institutions and organizations can collaborate to build and share CBE-specific tools and features,” she says, “which could lower costs and speed up innovation across the field.”
Axim thinks Open edX’s ability to scale affordably can support community colleges in reaching working learners across an underserved market.
We are entering a new reality—one in which AI can reason and solve problems in remarkable ways. This intelligence on tap will rewrite the rules of business and transform knowledge work as we know it. Organizations today must navigate the challenge of preparing for an AI-enhanced future, where AI agents will gain increasing levels of capability over time that humans will need to harness as they redesign their business. Human ambition, creativity, and ingenuity will continue to create new economic value and opportunity as we redefine work and workflows.
As a result, a new organizational blueprint is emerging, one that blends machine intelligence with human judgment, building systems that are AI-operated but human-led. Like the Industrial Revolution and the internet era, this transformation will take decades to reach its full promise and involve broad technological, societal, and economic change.
To help leaders understand how knowledge work will evolve, Microsoft analyzed survey data from 31,000 workers across 31 countries, LinkedIn labor market trends, and trillions of Microsoft 365 productivity signals. We also spoke with AI-native startups, academics, economists, scientists, and thought leaders to explore what work could become. The data and insights point to the emergence of an entirely new organization, a Frontier Firm that looks markedly different from those we know today. Structured around on-demand intelligence and powered by “hybrid” teams of humans + agents, these companies scale rapidly, operate with agility, and generate value faster.
Frontier Firms are already taking shape, and within the next 2–5 years we expect that every organization will be on their journey to becoming one. 82% of leaders say this is a pivotal year to rethink key aspects of strategy and operations, and 81% say they expect agents to be moderately or extensively integrated into their company’s AI strategy in the next 12–18 months. Adoption is accelerating: 24% of leaders say their companies have already deployed AI organization-wide, while just 12% remain in pilot mode.
The time to act is now. The question for every leader and employee is: how will you adapt?
Anthropic expects AI-powered virtual employees to begin roaming corporate networks in the next year, the company’s top security leader told Axios in an interview this week.
Why it matters: Managing those AI identities will require companies to reassess their cybersecurity strategies or risk exposing their networks to major security breaches.
The big picture: Virtual employees could be the next AI innovation hotbed, Jason Clinton, the company’s chief information security officer, told Axios.