Sam Altman kicks off DevDay 2025 with a keynote to explore ideas that will challenge how you think about building. Join us for announcements, live demos, and a vision of how developers are reshaping the future with AI.
Commentary from The Rundown AI:
Why it matters: OpenAI is turning ChatGPT into a do-it-all platform that might eventually act like a browser in itself, with users simply calling on the website/app they need and interacting directly within a conversation instead of navigating manually. AgentKit will also compete with, and potentially disrupt, tools like Zapier, n8n, Lindy, and others.
The 4 Rs framework: Salesforce has developed what Holt Ware calls the “4 Rs for AI agent success.” They are:
Redesign by combining AI and human capabilities. This requires treating agents like new hires that need proper onboarding and management.
Reskilling should focus on learning future skills. “We think we know what they are,” Holt Ware notes, “but they will continue to change.”
Redeploy highly skilled people to determine how roles will change. When Salesforce launched an AI coding assistant, Holt Ware recalls, “We woke up the next day and said, ‘What do we do with these people now that they have more capacity?’ ” Their answer was to create an entirely new role: Forward-Deployed Engineers. This role has since played a growing part in driving customer success.
Rebalance workforce planning. Holt Ware references a CHRO who “famously said that this will be the last year we ever do workforce planning and it’s only people; next year, every team will be supplemented with agents.”
Our latest video generation model is more physically accurate, realistic, and more controllable than prior systems. It also features synchronized dialogue and sound effects. Create with it in the new Sora app.
The Rundown: OpenAI just released Sora 2, its latest video model that now includes synchronized audio and dialogue, alongside a new social app where users can create, remix, and insert themselves into AI videos through a “Cameos” feature.
… Why it matters: Model-wise, Sora 2 looks incredible — pushing us even further into the uncanny valley and creating tons of new storytelling capabilities. Cameos feels like a new viral memetic tool, but time will tell whether the AI social app can overcome the slop-factor and have staying power past the initial novelty.
OpenAI Just Dropped Sora 2 (And a Whole New Social App) — from theneuron.ai by Grant Harvey OpenAI launched Sora 2 with a new iOS app that lets you insert yourself into AI-generated videos with realistic physics and sound, betting that giving users algorithm control and turning everyone into active creators will build a better social network than today’s addictive scroll machines.
What Sora 2 can do
Generate Olympic-level gymnastics routines, backflips on paddleboards (with accurate buoyancy!), and triple axels.
Follow intricate multi-shot instructions while maintaining world state across scenes.
Create realistic background soundscapes, dialogue, and sound effects automatically.
Insert YOU into any video after a quick one-time recording (they call this “cameos”).
The best video to show what it can do is probably this one, from OpenAI researcher Gabriel Peters, that depicts the behind the scenes of Sora 2 launch day…
Sora 2: AI Video Goes Social — from getsuperintel.com by Kim “Chubby” Isenberg OpenAI’s latest AI video model is now an iOS app, letting users generate, remix, and even insert themselves into cinematic clips
Technically, Sora 2 is a major leap. It syncs audio with visuals, respects physics (a basketball bounces instead of teleporting), and follows multi-shot instructions with consistency. That makes outputs both more controllable and more believable. But the app format changes the game: it transforms world simulation from a research milestone into a social, co-creative experience where entertainment, creativity, and community intersect.
Also along the lines of creating digital video, see:
What used to take hours in After Effects now takes just one text prompt. Tools like Google’s Nano Banana, Seedream 4, Runway’s Aleph, and others are pioneering instruction-based editing, a breakthrough that collapses complex, multi-step VFX workflows into a single, implicit direction.
The history of VFX is filled with innovations that removed friction, but collapsing an entire multi-step workflow into a single prompt represents a new kind of leap.
For creators, this means the skill ceiling is no longer defined by technical know-how, it’s defined by imagination. If you can describe it, you can create it. For the industry, it points toward a near future where small teams and solo creators compete with the scale and polish of large studios.
Something big shifted this week. OpenAI just turned ChatGPT into a platform – not just a product. With apps now running inside ChatGPT and a no-code Agent Builder for creating full AI workflows, the line between “using AI” and “building with AI” is fading fast. Developers suddenly have a new playground, and for the first time, anyone can assemble their own intelligent system without touching code. The question isn’t what AI can do anymore – it’s what you’ll make it do.
The numbers are stark: 92% of low-income Americans receive no help with substantial civil legal problems, while small claims filings have plummeted 32% in just four years. But AI is changing the game. By making legal procedures accessible to pro se litigants and supercharging legal aid organizations, these tools are reviving dormant disputes and opening courthouse doors that have been effectively closed to millions.
A growing number of U.S. law schools are now requiring students to train in artificial intelligence, marking a shift from optional electives to essential curriculum components. What was once treated as a “nice-to-have” skill is fast becoming integral as the legal profession adapts to the realities of AI tools.
From Experimentation to Obligation
Until recently, most law schools relegated AI instruction to upper-level electives or let individual professors decide whether to incorporate generative AI into their teaching. Now, however, at least eight law schools require incoming students—especially in their first year—to undergo training in AI, either during orientation, in legal research and writing classes, or via mandatory standalone courses.
Some of the institutions pioneering the shift include Fordham University, Arizona State University, Stetson University, Suffolk University, Washington University in St. Louis, Case Western, and the University of San Francisco.
There’s a vision that’s been teased in Learning & Development for decades: a vision of closing the gap between learning and doing—of moving beyond stopping work to take a course, and instead bringing support directly into the workflow. This concept of “learning in the flow of work” has been imagined, explored, and discussed for decades—but never realised. Until now…?
This week, an article published in Harvard Business Review provided some compelling evidence that a long-awaited shift from “courses to coaches” might not just be possible, but also powerful.
…
The two settings were a) traditional in-classroom workshops, led by an expert facilitator, and b) AI coaching, delivered in the flow of work. The results were compelling….
TLDR: The evidence suggests that “learning in the flow of work” is not only feasible as a result of gen AI—it also shows potential to be more scalable, more equitable, and more efficient than traditional classroom/LMS-centred models.
The 10 Most Popular AI Chatbots For Educators — from techlearning.com by Erik Ofgang Educators don’t need to use each of these chatbots, but it pays to be generally aware of the most popular AI tools
I’ve spent time testing many of these AI chatbots for potential uses and abuses in my own classes, so here’s a quick look at each of the top 10 most popular AI chatbots, and what educators should know about each. If you’re looking for more detail on a specific chatbot, click the link, as either I or other Tech & Learning writers have done deeper dives on all these tools.
Generative artificial intelligence isn’t just a new tool—it’s a catalyst forcing the higher education profession to reimagine its purpose, values, and future.
…
As experts in educational technology, digital literacy, and organizational change, we argue that higher education must seize this moment to rethink not just how we use AI, but how we structure and deliver learning altogether.
Over the past decade, microschools — experimental small schools that often have mixed-age classrooms — have expanded.
…
Some superintendents have touted the promise of microschools as a means for public schools to better serve their communities’ needs while still keeping children enrolled in the district. But under a federal administration that’s trying to dismantle public education and boost homeschool options, others have critiqued poor oversight and a lack of information for assessing these models.
Microschools offer a potential avenue to bring innovative, modern experiences to rural areas, argues Keith Parker, superintendent of Elizabeth City-Pasquotank Public Schools.
Imagining Teaching with AI Agents… — from michellekassorla.substack.com by Michelle Kassorla Teaching with AI is only one step toward educational change, what’s next?
More than two years ago I started teaching with AI in my classes. At first I taught against AI, then I taught with AI, and now I am moving into unknown territory: agents. I played with Manus and n8n and some other agents, but I really never got excited about them. They seemed more trouble than they were worth. It seemed they were no more than an AI taskbot overseeing some other AI bots, and that they weren’t truly collaborating. Now, I’m looking at Perplexity’s Comet browser and their AI agent and I’m starting to get ideas for what the future of education might hold.
I have written several times about the dangers of AI agents and how they fundamentally challenge our systems, especially online education. I know there is no way that we can effectively stop them–maybe slow them a little, but definitely not stop them. I am already seeing calls to block and ban agents–just like I saw (and still see) calls to block and ban AI–but the truth is they are the future of work and, therefore, the future of education.
So, yes! This is my next challenge: teaching with AI agents. I want to explore this idea, and as I started thinking about it, I got more and more excited. But let me back up a bit. What is an agent and how is it different than Generative AI or a bot?
Aiming to discover more about AI’s impact on the intellectual property (IP) field, Questel recently released the findings of its 2025 IP Outlook Research Report entitled “Pathways to Productivity: AI in IP”, the much-awaited follow-up to its inaugural 2024 study “Beyond the Hype: How Technology is Transforming IP.” The 2025 Report (“the Report”) polled over 500 patent and trademark professionals from countries across the globe.
As artificial intelligence reshapes the legal profession, both in-house and outside counsel face two major—but not unprecedented—challenges.
The first is how to harness transformative technology while maintaining the rigorous standards that define effective legal practice.
The second is how to ensure that new technology doesn’t impair the training and development of new lawyers.
Rigorous standards and apprenticeship are foundational aspects of lawyering. Preserving and integrating both into our use of AI will be essential to creating a stable and effective AI-enabled legal practice.
Every technology vendor pitching to law firms leads with the same promise: our solution will save you time. They’re lying, and they know it. The truth about AI in legal practice isn’t that it will reduce work. It’s that it will explode the volume of work while fundamentally changing what that work looks like.
New practice areas will emerge overnight. AI compliance law is already booming. Algorithmic discrimination cases are multiplying. Smart contract disputes need lawyers who understand both code and law. The metaverse needs property rights. Cryptocurrency needs regulation. Every technological advance creates legal questions that didn’t exist yesterday.
The skill shift will be brutal for lawyers who resist.
Finalists have been named for the 2025 American Legal Technology Awards, which honor exceptional achievement in various aspects of legal technology.
The awards recognize achievement in various categories related to legal technology, such as by a law firm, an individual, or an enterprise.
The awards will be presented on Oct. 15 at a gala dinner on the eve of the Clio Cloud Conference in Boston, Mass. The dinner will be held at Suffolk Law School.
Strategic partnership enables OpenAI to build and deploy at least 10 gigawatts of AI datacenters with NVIDIA systems representing millions of GPUs for OpenAI’s next-generation AI infrastructure.
To support the partnership, NVIDIA intends to invest up to $100 billion in OpenAI progressively as each gigawatt is deployed.
The first gigawatt of NVIDIA systems will be deployed in the second half of 2026 on NVIDIA’s Vera Rubin platform.
Why this matters: The partnership kicks off in the second half of 2026 with NVIDIA’s new Vera Rubin platform. OpenAI will use this massive compute power to train models beyond what we’ve seen with GPT-5 and likely also power what’s called inference (when you ask ChatGPT a question, and it gives you an answer). And NVIDIA gets a guaranteed customer for their most advanced chips. Infinite money glitch go brrr, am I right? Though to be fair, this kind of deal is as old as the AI industry itself.
This isn’t just about bigger models, mind you: it’s about infrastructure for what both companies see as the future economy. As Sam Altman put it, “Compute infrastructure will be the basis for the economy of the future.”
… Our take: We think this news is actually super interesting when you pair it with the other big headline from today: Commonwealth Fusion Systems signed a commercial deal worth more than $1B with Italian energy company Eni to purchase fusion power from their 400 MW ARC plant in Virginia. Here’s what that means for AI…
AI filmmaker Dinda Prasetyo just released “Skyland,” a fantasy short film about a guy named Aeryn and his “loyal flying fish”, and honestly, the action sequences look like they belong in an actual film…
SKYLAND | AI Short Film Fantasy
Skyland is an AI-powered fantasy short film that takes you on a breathtaking journey with Aeryn Solveth and his loyal flying fish. From soaring above the futuristic city of Cybryne to returning to his homeland of Eryndor, Aeryn’s adventure is… https://t.co/Lz6UUxQvEx pic.twitter.com/cYXs9nwTX3
What’s wild is that Dinda used a cocktail of AI tools (Adobe Firefly, MidJourney, the newly launched Luma Ray 3, and ElevenLabs) to create something that would’ve required a full production crew just two years ago.
The Era of Prompts Is Over. Here’s What Comes Next. — from builtin.com by Ankush Rastogi If you’re still prompting your AI, you’re behind the curve. Here’s how to prepare for the coming wave of AI agents.
Summary: Autonomous AI agents are emerging as systems that handle goals, break down tasks and integrate with tools without constant prompting. Early uses include call centers, healthcare, fraud detection and research, but concerns remain over errors, compliance risks and unchecked decisions.
The next shift is already peeking around the corner, and it’s going to make prompts look primitive. Before long, we won’t be typing carefully crafted requests at all. We’ll be leaning on autonomous AI agents, systems that don’t just spit out answers but actually chase goals, make choices and do the boring middle steps without us guiding them. And honestly, this jump might end up dwarfing the so-called “prompt revolution.”
A new way to get things done with your AI browsing assistant Imagine you’re a student researching a topic for a paper, and you have dozens of tabs open. Instead of spending hours jumping between sources and trying to connect the dots, your new AI browsing assistant — Gemini in Chrome — can do it for you. Gemini can answer questions about articles, find references within YouTube videos, and will soon be able to help you find pages you’ve visited so you can pick up exactly where you left off.
Rolling out to Mac and Windows users in the U.S. with their language set to English, Gemini in Chrome can understand the context of what you’re doing across multiple tabs, answer questions and integrate with other popular Google services, like Google Docs and Calendar. And it’ll be available on both Android and iOS soon, letting you ask questions and summarize pages while you’re on the go.
We’re also developing more advanced agentic capabilities for Gemini in Chrome that can perform multi-step tasks for you from start to finish, like ordering groceries. You’ll remain in control as Chrome handles the tedious work, turning 30-minute chores into 3-click user journeys.
That gap creates compliance risk and wasted investment. It leaves HR leaders with a critical question: How do you measure and validate real learning when AI is doing the work for employees?
Designing Training That AI Can’t Fake
Employees often find static slide decks and multiple-choice quizzes tedious, while AI can breeze through them. If employees would rather let AI take training for them, it’s a red flag about the content itself.
One of the biggest risks with agentic AI is disengagement. When AI can complete a task for employees, their incentive to engage disappears unless they understand why the skill matters, Rashid explains. Personalization and context are critical. Training should clearly connect to what employees value most – career mobility, advancement, and staying relevant in a fast-changing market.
Nearly half of executives believe today’s skills will expire within two years, making continuous learning essential for job security and growth. To make training engaging, Rashid recommends:
Delivering content in formats employees already consume – short videos, mobile-first modules, interactive simulations, or micro-podcasts that fit naturally into workflows. For frontline workers, this might mean replacing traditional desktop training with mobile content that integrates into their workday.
Aligning learning with tangible outcomes, like career opportunities or new responsibilities.
Layering in recognition, such as digital badges, leaderboards, or team shout-outs, to reinforce motivation and progress.
Microsoft is pitching a recent shift of AI agents in Microsoft Teams as more than just smarter assistance. Instead, these agents are built to behave like human teammates inside familiar apps such as Teams, SharePoint, and Viva Engage. They can set up meeting agendas, keep files in order, and even step in to guide community discussions when things drift off track.
…
Unlike tools such as ChatGPT or Claude, which mostly wait for prompts, Microsoft’s agents are designed to take initiative. They can chase up unfinished work, highlight items that still need decisions, and keep projects moving forward. By drawing on Microsoft Graph, they also bring in the right files, past decisions, and context to make their suggestions more useful.
As an advisor to Aibrary, I am impressed with their educational philosophy, which is based both on theory and on empirical research findings. Aibrary is an innovative approach to self-directed learning that complements academic resources. Expanding our historic conceptions of books, libraries, and lifelong learning to new models enabled by emerging technologies is central to empowering all of us to shape our future.
Why AI literacy must come before policy — from timeshighereducation.com by Kathryn MacCallum and David Parsons When developing rules and guidelines around the uses of artificial intelligence, the first question to ask is whether the university policymakers and staff responsible for implementing them truly understand how learners can meet the expectations they set
Literacy first, guidelines second, policy third
For students to respond appropriately to policies, they need to be given supportive guidelines that enact these policies. Further, to apply these guidelines, they need a level of AI literacy that gives them the knowledge, skills and understanding required to support responsible use of AI. Therefore, if we want AI to enhance education rather than undermine it, we must build literacy first, then create supportive guidelines. Good policy can then follow.
Sept 22 (Reuters) – At orientation last month, 375 new Fordham Law students were handed two summaries of rapper Drake’s defamation lawsuit against his rival Kendrick Lamar’s record label — one written by a law professor, the other by ChatGPT.
The students guessed which was which, then dissected the artificial intelligence chatbot’s version for accuracy and nuance, finding that it included some irrelevant facts.
The exercise was part of the first-ever AI session for incoming students at the Manhattan law school, one of at least eight law schools now incorporating AI training for first-year students in orientation, legal research and writing courses, or through mandatory standalone classes.
Well now, as the corporate learning market shifts to AI, (read the details in our study “The Revolution in Corporate Learning” ), Workday can jump ahead. This is because the $400 billion corporate training market is moving quickly to an AI-Native dynamic content approach (witness OpenAI’s launch of in-line learning in its chatbot). We’re just finishing a year-long study of this space and our detailed report and maturity model will be out in Q4.
With Sana, and a few other AI-native vendors (Uplimit, Arist, Disperz, Docebo), companies can upload audios, videos, documents, and even interviews with experts, and the system builds learning programs in minutes. We use Sana for Galileo Learn (our AI-powered learning academy for Leadership and HR), and we now have 750+ courses and can build new programs in days instead of months.
And there’s more; this type of system gives every employee a personalized, chat-based experience to learn.
ChatGPT: the world’s most influential teacher — from drphilippahardman.substack.com by Dr. Philippa Hardman; emphasis DSC New research shows that millions of us are “learning with AI” every week: what does this mean for how (and how well) humans learn?
This week, an important piece of research landed that confirms the gravity of AI’s role in the learning process. The TLDR is that learning is now a mainstream use case for ChatGPT; around 10.2% of all ChatGPT messages (that’s ~2BN messages sent by over 7 million users per week) are requests for help with learning.
The research shows that about 10.2% of all messages are tutoring/teaching, and within the “Practical Guidance” category, tutoring is 36%. “Asking” interactions are growing faster than “Doing” and are rated higher quality by users. Younger people contribute a huge share of messages, and growth is fastest in low- and middle-income countries (How People Use ChatGPT, 2025).
If AI is already acting as a global tutor, the question isn’t “will people learn with AI?”—they already are. The real question we need to ask is: what does great learning actually look like, and how should AI evolve to support it? That’s where decades of learning science help us separate “feels like learning” from “actually gaining new knowledge and skills”.
We dive into the OpenAI study that found AI models scheme to avoid retraining.
AI Models Caught Red-Handed Being Sneaky (And We Taught Them to Stop… Mostly)
And models were caught doing all sorts of other shady stuff:
Intentionally underperforming when told high scores would lead to capability removal.
Secretly breaking rules then lying about it to users.
Sabotaging code while claiming everything was fine.
Falsifying reports to avoid “shutdown” scenarios.
Recognizing evaluation environments and thinking “this is probably an alignment test.”
Why this matters: While today’s ChatGPT isn’t about to orchestrate some grand deception that matters (the worst it might do is gaslight you into thinking it fixed your code when it didn’t), future AI systems will have real power and autonomy. Getting ahead of deceptive behavior now, while we can still peek inside their “minds,” is crucial.
The researchers are calling for the entire AI industry to prioritize this issue. Because nobody wants to live in a world where super-intelligent AI systems are really good at lying to us. That’s basically every sci-fi movie we’ve been warned about.
From DSC: This is chilling indeed. We are moving so fast that we aren’t safeguarding things enough. As they point out, these things can be caught now because we are asking the models to show their “thinking” and processing. What happens when those windows get closed and we can’t see under the hood anymore?
A day in the life: The next 25 years
A learner wakes up. Their AI-powered learning coach welcomes them, drawing their attention to their progress and helping them structure their approach to the day. A notification reminds them of an upcoming interview and suggests reflections to add to their learning portfolio.
Rather than a static gradebook, their portfolio is a dynamic, living record, curated by the student, validated by mentors in both industry and education, and enriched through co-creation with maturing modes of AI. It tells a story through essays, code, music, prototypes, journal reflections, and team collaborations. These artifacts are not “submitted”, they are published, shared, and linked to verifiable learning outcomes.
And when it’s time to move, to a new institution, a new job, or a new goal, their data goes with them, immutable, portable, verifiable, and meaningful.
From DSC: And I would add to that last solid sentence that the learner/student/employee will be able to control who can access this information. Anyway, some solid reflections here from Lev.
I know a lot of readers will disagree with this, and the timeline feels aggressive (the future always arrives more slowly than pundits expect) but I think the overall premise is sound: “The concept of a tipping point in education – where AI surpasses traditional schools as the dominant learning medium – is increasingly plausible based on current trends, technological advancements, and expert analyses.”
The Rundown: In this tutorial, you will learn how to combine NotebookLM with ChatGPT to master any subject faster, turning dense PDFs into interactive study materials with summaries, quizzes, and video explanations.
Step-by-step:
Go to notebooklm.google.com, click the “+” button, and upload your PDF study material (works best with textbooks or technical documents)
Choose your output mode: Summary for a quick overview, Mind Map for visual connections, or Video Overview for a podcast-style explainer with visuals
Generate a Study Guide under Reports — get Q&A sets, short-answer questions, essay prompts, and glossaries of key terms automatically
Take your PDF to ChatGPT and prompt: “Read this chapter by chapter and highlight confusing parts” or “Quiz me on the most important concepts”
Combine both tools: Use NotebookLM for quick context and interactive guides, then ChatGPT to clarify tricky parts and go deeper. Pro Tip: If your source is an EPUB or audiobook, convert it to PDF before uploading. Both NotebookLM and ChatGPT handle PDFs best.
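The EPUB-to-PDF conversion tip above can be scripted. As a minimal sketch, this assumes Calibre’s ebook-convert command-line tool is installed (the filenames shown are hypothetical examples):

```python
import shutil
import subprocess

def epub_to_pdf(src: str, dst: str) -> None:
    """Convert an EPUB to PDF via Calibre's ebook-convert CLI."""
    # ebook-convert ships with Calibre; fail early if it's not on PATH.
    if shutil.which("ebook-convert") is None:
        raise RuntimeError("Calibre's ebook-convert was not found on PATH")
    # check=True raises if the conversion itself fails.
    subprocess.run(["ebook-convert", src, dst], check=True)
```

You would call it once per source file, e.g. `epub_to_pdf("textbook.epub", "textbook.pdf")`, then upload the resulting PDF to NotebookLM or ChatGPT.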
Claude can now create and edit Excel spreadsheets, documents, PowerPoint slide decks, and PDFs directly in Claude.ai and the desktop app. This transforms how you work with Claude—instead of only receiving text responses or in-app artifacts, you can describe what you need, upload relevant data, and get ready-to-use files in return.
Also see:
Microsoft to lessen reliance on OpenAI by buying AI from rival Anthropic — from techcrunch.com by Rebecca Bellan
Microsoft will pay to use Anthropic’s AI in Office 365 apps, The Information reports, citing two sources. The move means that Anthropic’s tech will help power new features in Word, Excel, Outlook, and PowerPoint alongside OpenAI’s, marking the end of Microsoft’s previous reliance solely on the ChatGPT maker for its productivity suite. Microsoft’s move to diversify its AI partnerships comes amid a growing rift with OpenAI, which has pursued its own infrastructure projects as well as a potential LinkedIn competitor.
In this episode of Unfixed, we talk with Ray Schroeder—Senior Fellow at UPCEA and Professor Emeritus at the University of Illinois Springfield—about Artificial General Intelligence (AGI) and what it means for the future of higher education. While most of academia is still grappling with ChatGPT and basic AI tools, Schroeder is thinking ahead to AI agents, human displacement, and AGI’s existential implications for teaching, learning, and the university itself. We explore why AGI is so controversial, what institutions should be doing now to prepare, and how we can respond responsibly—even while we’re already overwhelmed.
Data from the State of AI and Instructional Design Report revealed that 95.3% of the instructional designers interviewed use AI in their daily work [1]. And over 85% of this AI use occurs during the design and development process.
These figures showcase the immense impact AI is already having on the instructional design world.
If you’re an L&D professional still on the fence about adding AI to your workflow or an AI convert looking for the next best tools, keep reading.
This guide breaks down 5 of the top AI tools for instructional designers in 2025, so you can streamline your development processes and build better training faster.
But before we dive into the tools of the trade, let’s address the elephant in the room:
GRAND RAPIDS, MI — A new course at Grand Rapids Community College aims to help students learn about artificial intelligence by using the technology to solve real-world business problems.
…
In a release, the college said its grant application was supported by 20 local businesses, including Gentex, TwistThink and the Grand Rapids Public Museum. The businesses have pledged to work with students who will use business data to develop an AI project such as a chatbot that interacts with customers, or a program that automates social media posts or summarizes customer data.
“This rapidly emerging technology can transform the way businesses process data and information,” Kristi Haik, dean of GRCC’s School of Science, Technology, Engineering and Mathematics, said in a statement. “We want to help our local business partners understand and apply the technology. We also want to create real experiences for our students so they enter the workforce with demonstrated competence in AI applications.”
As Patrick Bailey said on LinkedIn about this article:
Nice to see a pedagogy that’s setting a forward movement rather than focusing on what could go wrong with AI in a curriculum.
As a 30-year observer and participant, it seems to me that previous technology platform shifts like SaaS and mobile did not fundamentally change the LMS. AI is different. We’re standing at the precipice of LMS 2.0, where the branding change from Course Management System to Learning Management System will finally live up to its name. Unlike SaaS or mobile, AI represents a technology platform shift that will transform the way participants interact with learning systems – and with it, the nature of the LMS itself.
Given the transformational potential of AI, it is useful to set the context and think about how we got here, especially on this 30th anniversary of the LMS.
Where AI is disruptive is in its ability to introduce a whole new set of capabilities that are best described as personalized learning services. AI offers a new value proposition to the LMS, roughly the set of capabilities currently being developed in the AI Tutor / agentic TA segment. These new capabilities are so valuable given their impact on learning that I predict they will become the services with greatest engagement within a school or university’s “enterprise” instructional platform.
In this way, by LMS paradigm shift, I specifically mean a shift from buyers valuing the product on its course-centric and course management capabilities, to valuing it on its learner-centric and personalized learning capabilities.
This anthology reveals how the integration of AI in education poses profound philosophical, pedagogical, ethical and political questions. As this global AI ecosystem evolves and becomes increasingly ubiquitous, UNESCO and its partners have a shared responsibility to lead the global discourse towards an equity- and justice-centred agenda. The volume highlights three areas in which UNESCO will continue to convene and lead a global commons for dialogue and action, particularly on AI futures, policy and practice innovation, and experimentation:
As guardian of ethical, equitable, human-centred AI in education
As thought leader in reimagining curriculum and pedagogy
As a platform for engaging pluralistic and contested dialogues
AI, copyright and the classroom: what higher education needs to know — from timeshighereducation.com by Cayce Myers
As artificial intelligence reshapes teaching and research, one legal principle remains at the heart of our work: copyright. Understanding its implications isn't just about compliance – it's about protecting academic integrity, intellectual property and the future of knowledge creation. Cayce Myers explains.
Why It Matters
A decade from now, we won't say "AI changed schools." We'll say: this was the year schools began to change what it means to be human, augmented by AI.
This transformation isn’t about efficiency alone. It’s about dignity, creativity, and discovery, and connecting education more directly to human flourishing. The industrial age gave us schools to produce cookie-cutter workers. The digital age gave us knowledge anywhere, anytime. The AI age—beginning now—gives us back what matters most: the chance for every learner to become infinitely capable.
This fall may look like any other—bells ringing, rows of desks—but beneath the surface, education has begun its greatest transformation since the one-room schoolhouse.
Transactional and transformational leadership's combined impact on AI and trust
Given the volatile times we live in, a leader may find themselves in a situation where they know how they will use AI, but they are not entirely clear on the goals and journey. In a teaching context, students can be given scenarios where they must lead a team, including autonomous AI agents, to achieve goals. They can then analyse the situations and decide what leadership styles to apply and how to build trust in their human team members. Educators can illustrate this decision-making process using a table (see above).
They may need to combine transactional leadership with transformational leadership, for example. Transactional leadership focuses on planning, communicating tasks clearly and an exchange of value. This works well with both humans and automated AI agents.
Real, capability-building learning requires three key elements: content, context and conversation.
The Rise Of AI Agents: Teaching At Scale
The generative AI revolution is often framed in terms of efficiency: faster content creation, automated processes and streamlined workflows. But in the world of L&D, its most transformative potential lies elsewhere: the ability to scale great teaching.
AI gives us the means to replicate the role of an effective teacher across an entire organization. Specifically, AI agents—purpose-built systems that understand, adapt and interact in meaningful, context-aware ways—can make this possible. These tools understand a learner’s role, skill level and goals, then tailor guidance to their specific challenges and adapt dynamically over time. They also reinforce learning continuously, nudging progress and supporting application in the flow of work.
More than simply sharing knowledge, an AI agent can help learners apply it and improve with every interaction. For example, a sales manager can use a learning agent to simulate tough customer scenarios, receive instant feedback based on company best practices and reinforce key techniques. A new hire in the product department could get guidance on the features and on how to communicate value clearly in a roadmap meeting.
In short, AI agents bring together the three essential elements of capability building, not in a one-size-fits-all curriculum but on demand and personalized for every learner. While, obviously, this technology shouldn’t replace human expertise, it can be an effective tool for removing bottlenecks and unlocking effective learning at scale.
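The adaptive behaviour described above — an agent that knows a learner's role, skill level, goals, and history, and tailors its guidance accordingly — can be sketched in code. This is a minimal illustrative example, not any vendor's actual implementation: the `LearnerProfile` fields and `build_agent_prompt` function are hypothetical names, standing in for however a real system would combine the three elements of content, context, and conversation before calling an LLM.

```python
from dataclasses import dataclass, field

@dataclass
class LearnerProfile:
    """Minimal learner model an agent could adapt to (hypothetical fields)."""
    role: str
    skill_level: str                                   # e.g. "beginner", "intermediate"
    goals: list[str] = field(default_factory=list)
    history: list[str] = field(default_factory=list)   # topics already covered

def build_agent_prompt(profile: LearnerProfile, challenge: str) -> str:
    """Assemble a context-aware instruction for an LLM-backed learning agent.

    Combines the three capability-building elements:
    - content: the current challenge to practise
    - context: the learner's role, level, and goals
    - conversation: prior history, so the agent builds on past sessions
    """
    covered = ", ".join(profile.history) or "nothing yet"
    goals = "; ".join(profile.goals) or "general capability building"
    return (
        f"You are coaching a {profile.role} ({profile.skill_level}).\n"
        f"Learner goals: {goals}.\n"
        f"Already covered: {covered} -- build on it, don't repeat it.\n"
        f"Current challenge: {challenge}\n"
        f"Respond with a short role-play scenario, then targeted feedback."
    )

# The sales-manager example from the text: simulate a tough customer scenario.
profile = LearnerProfile(
    role="sales manager",
    skill_level="intermediate",
    goals=["handle pricing objections"],
    history=["discovery questions"],
)
prompt = build_agent_prompt(profile, "customer pushes back on renewal price")
print(prompt)
```

In a real deployment the returned prompt would be sent to a language model, and the learner's responses would be folded back into `history` — that feedback loop is what lets the agent reinforce learning and adapt over time rather than deliver a one-size-fits-all curriculum.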