Custom AI Development: Evolving from Static AI Systems to Dynamic Learning Agents in 2025 — community.nasscom.in

This blog explores how custom AI development accelerates the evolution from static AI to dynamic learning agents and why this transformation is critical for driving innovation, efficiency, and competitive advantage.

Dynamic Learning Agents: The Next Generation
Dynamic learning agents, sometimes referred to as adaptive or agentic AI, represent a leap forward. They combine continuous learning, autonomous action, and context-aware adaptability.

Custom AI development plays a crucial role here: it ensures that these agents are designed specifically for an enterprise’s unique needs rather than relying on generic, one-size-fits-all AI platforms. Tailored dynamic agents can:

  • Continuously learn from incoming data streams
  • Make autonomous, goal-directed decisions aligned with business objectives
  • Adapt behavior in real time based on context and feedback
  • Collaborate with other AI agents and human teams to solve complex challenges

The result is an AI ecosystem that evolves with the business, providing sustained competitive advantage.
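To make the bullet points above a bit more concrete, here is a rough, purely illustrative Python sketch of the observe, decide, and learn loop such an agent might run. Every class, method, and signal name below is hypothetical and not taken from the NASSCOM article.

```python
# Purely illustrative sketch of a dynamic learning agent loop.
# All class, method, and signal names are hypothetical, not from the article.
from dataclasses import dataclass, field


@dataclass
class DynamicAgent:
    goal: str
    signal_counts: dict = field(default_factory=dict)   # what the agent has learned so far
    feedback_log: list = field(default_factory=list)    # feedback it adapts from

    def observe(self, event: dict) -> None:
        """Continuously learn from an incoming data stream (here: tally simple signals)."""
        signal = event.get("signal", "unknown")
        self.signal_counts[signal] = self.signal_counts.get(signal, 0) + 1

    def decide(self) -> str:
        """Make a goal-directed decision based on the most frequent signal seen so far."""
        if not self.signal_counts:
            return f"gather more data for goal: {self.goal}"
        top_signal = max(self.signal_counts, key=self.signal_counts.get)
        return f"prioritize '{top_signal}' to advance goal: {self.goal}"

    def learn(self, feedback: float) -> None:
        """Adapt in real time from feedback (placeholder for a real update rule)."""
        self.feedback_log.append(feedback)


# Usage: stream events in, ask for a decision, then feed back a score.
agent = DynamicAgent(goal="reduce customer churn")
for event in [{"signal": "late_delivery"}, {"signal": "late_delivery"}, {"signal": "price_complaint"}]:
    agent.observe(event)
print(agent.decide())
agent.learn(feedback=0.8)
```

The point of the sketch is simply the shape of the loop: learning from incoming data, deciding against a stated goal, and folding feedback back in, which is what distinguishes these agents from static, train-once systems.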

Also from community.nasscom.in, see:

Building AI Agents with Multimodal Models: From Perception to Action

Perception: The Foundation of Intelligent Agents
Perception is the first step in building AI agents. It involves capturing and interpreting data from multiple modalities, including text, images, audio, and structured inputs. A multimodal AI agent relies on this comprehensive understanding to make informed decisions.

For example, in healthcare, an AI agent may process electronic health records (text), MRI scans (vision), and patient audio consultations (speech) to build a complete understanding of a patient’s condition. Similarly, in retail, AI agents can analyze purchase histories (structured data), product images (vision), and customer reviews (text) to inform recommendations and marketing strategies.

Effective perception ensures that AI agents have contextual awareness, which is essential for accurate reasoning and appropriate action.
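As a rough illustration of the perception step described above (a sketch, not the article's implementation), the snippet below normalizes text, image, and audio inputs into a single list of observations that downstream reasoning could consume. The encoder functions are stubs standing in for real text, vision, and speech models; all names are invented for illustration.

```python
# Sketch of a multimodal perception layer: each modality is normalized into a
# common "observation" record before reasoning. Encoder functions are stubs;
# in practice they would call real text/vision/speech models.
from dataclasses import dataclass
from typing import Any


@dataclass
class Observation:
    modality: str       # "text", "image", "audio", or "structured"
    content: Any        # raw input (string, file path, dict, ...)
    summary: str        # normalized interpretation produced by an encoder stub


def encode_text(text: str) -> str:
    return f"text notes: {text[:60]}"


def encode_image(path: str) -> str:
    return f"image described from file: {path}"   # stand-in for a vision model


def encode_audio(path: str) -> str:
    return f"transcript of: {path}"               # stand-in for speech-to-text


def perceive(inputs: dict) -> list[Observation]:
    """Turn heterogeneous inputs into a unified list of observations."""
    encoders = {"text": encode_text, "image": encode_image, "audio": encode_audio}
    observations = []
    for modality, content in inputs.items():
        encoder = encoders.get(modality, lambda c: str(c))  # structured data passes through
        observations.append(Observation(modality, content, encoder(content)))
    return observations


# Example: a healthcare-style case combining notes, a scan, and a consultation recording.
context = perceive({
    "text": "Patient reports chest pain after exercise.",
    "image": "mri_scan_042.png",
    "audio": "consultation_2025_10_01.wav",
})
for obs in context:
    print(obs.modality, "->", obs.summary)
```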


From 70-20-10 to 90-10: a new operating system for L&D in the age of AI? — from linkedin.com by Dr. Philippa Hardman

Also from Philippa, see:



Your New ChatGPT Guide — from wondertools.substack.com by Jeremy Caplan and The PyCoach
25 AI Tips & Tricks from a guest expert

  • ChatGPT can make you more productive or dumber. An MIT study found that while AI can significantly boost productivity, it may also weaken your critical thinking. Use it as an assistant, not a substitute for your brain.
  • If you’re a student, use study mode in ChatGPT, Gemini, or Claude. When this feature is enabled, the chatbots will guide you through problems rather than just giving full answers, so you’ll be doing the critical thinking.
  • ChatGPT and other chatbots can confidently make stuff up (aka AI hallucinations). If you suspect something isn’t right, double-check its answers.
  • NotebookLM hallucinates less than most AI tools, but it requires you to upload sources (PDFs, audio, video) and won’t answer questions beyond those materials. That said, it’s great for students and anyone with materials to upload.
  • Probably the most underrated AI feature is deep research. It automates web searching for you and returns a fully cited report with minimal hallucinations in five to 30 minutes. It’s available in ChatGPT, Perplexity, and Gemini, so give it a try.

 


 

 

“OpenAI’s Atlas: the End of Online Learning—or Just the Beginning?” [Hardman] + other items re: AI in our LE’s

OpenAI’s Atlas: the End of Online Learning—or Just the Beginning? — from drphilippahardman.substack.com by Dr. Philippa Hardman

My take is this: in all of the anxiety lies a crucial and long-overdue opportunity to deliver better learning experiences. Precisely because Atlas perceives the same context in the same moment as you, it can transform learning into a process aligned with core neuro-scientific principles—including active retrieval, guided attention, adaptive feedback and context-dependent memory formation.

Perhaps in Atlas we have a browser that for the first time isn’t just a portal to information, but one which can become a co-participant in active cognitive engagement—enabling iterative practice, reflective thinking, and real-time scaffolding as you move through challenges and ideas online.

With this in mind, I put together 10 use cases for Atlas for you to try for yourself.

6. Retrieval Practice
What:
Pulling information from memory drives retention better than re-reading.
Why: Practice testing delivers medium-to-large effects (Adesope et al., 2017).
Try: Open a document with your previous notes. Ask Atlas for a mixed activity set: “Quiz me on the Krebs cycle—give me a near-miss, high-stretch MCQ, then a fill-in-the-blank, then ask me to explain it to a teen.”
Atlas uses its browser memory to generate targeted questions from your actual study materials, supporting spaced, varied retrieval.




From DSC:
A quick comment. I appreciate these ideas and approaches from Katarzyna and Rita. I do think that someone is going to want to be sure that the AI models/platforms/tools are given up-to-date information and updated instructions — i.e., any new procedures, steps to take, etc. Perhaps I’m missing the boat here, but an internal AI platform is going to need to have access to up-to-date information and instructions.


 

Chegg CEO steps down amid major AI-driven restructure — from linkedin.com by Megan McDonough

Edtech firm Chegg confirmed Monday it is reducing its workforce by 45%, or 388 employees globally, and its chief executive officer is stepping down. Current CEO Nathan Schultz will be replaced effective immediately by executive chairman (and former CEO) Dan Rosensweig. The rise of AI-powered tools has dealt a massive blow to the online homework helper and led to “substantial” declines in revenue and traffic. Company shares have slipped over 10% this year. Chegg recently explored a possible sale, but ultimately decided to keep the company intact.

 

Entrepreneurship: The New Core Curriculum — from gettingsmart.com by Tom Vander Ark

Key Points

  • Entrepreneurship education fosters resilience, creativity, and financial literacy—skills critical for success in an unpredictable, tech-driven world.
  • Programs like NFTE, Junior Achievement, and Uncharted Learning empower students by offering real-world entrepreneurial experiences and mentorship.

“Entrepreneurship is the job of the future.”

— Charles Fadel, Education for the Age of AI

This shift requires a radical re-evaluation of what we teach. Education leaders across the country are realizing that the most valuable skill we can impart is not accounting or marketing, but the entrepreneurial mindset. This mindset—built on resilience, creative problem-solving, comfort with ambiguity, and the ability to pivot—is essential in startups, as an intrapreneur in big organizations, or as a citizen working for the common good.

 

Digest #182: How To Increase (Self-)Motivation — from lifehack.org by Carolina Kuepper-Tetzel

No matter whether you are a student or a teacher, sometimes it can be difficult to find the motivation to start or complete a task. Instead, you may spend hours procrastinating with other activities, which opens an unhelpful cycle of stress and unhappiness. Stressful environments, which are common in educational settings, can increase the likelihood of maladaptive procrastination (1) and hamper motivation. This digest offers four resources on ways to think about and boost (self-)motivation.

Also see:

 

There is no God Tier video model — from downes.ca by Stephen Downes

From DSC:
Stephen has some solid reflections and asks some excellent questions in this posting, including:

The question is: how do we optimize an AI to support learning? Will one model be enough? Or do we need different models for different learners in different scenarios?


A More Human University: The Role of AI in Learning — from er.educause.edu by Robert Placido
Far from heralding the collapse of higher education, artificial intelligence offers a transformative opportunity to scale meaningful, individualized learning experiences across diverse classrooms.

The narrative surrounding artificial intelligence (AI) in higher education is often grim. We hear dire predictions of an “impending collapse,” fueled by fears of rampant cheating, the erosion of critical thinking, and the obsolescence of the human educator. This dystopian view, however, is a failure of imagination. It mistakes the death rattle of an outdated pedagogical model for the death of learning itself. The truth is far more hopeful: AI is not an asteroid coming for higher education. It is a catalyst that can finally empower us to solve our oldest, most intractable problem: the inability to scale deep, engaged, and truly personalized learning.


Claude for Life Sciences — from anthropic.com

Increasing the rate of scientific progress is a core part of Anthropic’s public benefit mission.

We are focused on building the tools to allow researchers to make new discoveries – and eventually, to allow AI models to make these discoveries autonomously.

Until recently, scientists typically used Claude for individual tasks, like writing code for statistical analysis or summarizing papers. Pharmaceutical companies and others in industry also use it for tasks across the rest of their business, like sales, to fund new research. Now, our goal is to make Claude capable of supporting the entire process, from early discovery through to translation and commercialization.

To do this, we’re rolling out several improvements that aim to make Claude a better partner for those who work in the life sciences, including researchers, clinical coordinators, and regulatory affairs managers.


AI as an access tool for neurodiverse and international staff — from timeshighereducation.com by Vanessa Mar-Molinero
Used transparently and ethically, GenAI can level the playing field and lower the cognitive load of repetitive tasks for admin staff, student support and teachers

Where AI helps without cutting academic corners
When framed as accessibility and quality enhancement, AI can support staff to complete standard tasks with less friction. However, while it supports clarity, consistency and inclusion, generative AI (GenAI) does not replace disciplinary expertise, ethical judgement or the teacher–student relationship. These are ways it can be put to effective use:

  • Drafting and tone calibration
  • Language scaffolding
  • Structure and templates
  • Summarise and prioritise
  • Accessibility by default
  • Idea generation for pedagogy
  • Translation and cultural mediation

Beyond learning design: supporting pedagogical innovation in response to AI — from timeshighereducation.com by Charlotte von Essen
To avoid an unwinnable game of catch-up with technology, universities must rethink pedagogical improvement that goes beyond scaling online learning


The Sleep of Liberal Arts Produces AI — from aiedusimplified.substack.com by Lance Eaton, Ph.D.
A keynote at the AI and the Liberal Arts Symposium Conference

This past weekend, I had the honor to be the keynote speaker at a really fantastic conference, AI and the Liberal Arts Symposium at Connecticut College. I had shared a bit about this before in my interview with Lori Looney. It was an incredible conference, thoughtfully composed with a lot of things to chew on and think about.

It was also an entirely brand new talk in a slightly different context from many of my other talks and workshops. It was something I had to build entirely from the ground up. It reminded me in some ways of last year’s “What If GenAI Is a Nothingburger”.

It was a real challenge and one I’ve been working on and off for months, trying to figure out the right balance. It’s a work I feel proud of because of the balancing act I try to navigate. So, as always, it’s here for others to read and engage with. And, of course, here is the slide deck as well (with CC license).

 

The above posting on LinkedIn then links to this document


Designing Microsoft 365 Copilot to empower educators, students, and staff — from microsoft.com by Deirdre Quarnstrom

While over 80% of respondents in the 2025 AI in Education Report have already used AI for school, we believe there are significant opportunities to design AI that can better serve each of their needs and broaden access to the latest innovation.

That’s why today [10/15/25], we’re announcing AI-powered experiences built for teaching and learning at no additional cost, new integrations in Microsoft 365 apps and Learning Management Systems, and an academic offering for Microsoft 365 Copilot.

Introducing AI-powered teaching and learning
Empowering educators with Teach

We’re introducing Teach to help streamline class prep and adapt AI to support educators’ teaching expertise with intuitive and customizable features. In one place, educators can easily access AI-powered teaching tools to create lesson plans, draft materials like quizzes and rubrics, and quickly make modifications to language, reading level, length, difficulty, alignment to relevant standards, and more.

 

 

10 Tips from Smart Teaching Stronger Learning — from Pooja K. Agarwal, Ph.D.

Per Dr. Pooja Agarwal:

Combining two strategies—spacing and retrieval practice—is key to success in learning, says Shana Carpenter.


On a somewhat related note (i.e., for Instructional Designers, teachers, faculty members, T&L staff members), also see:

 

“A new L&D operating system for the AI Era?” [Hardman] + other items re: AI in our learning ecosystems

From 70/20/10 to 90/10 — from drphilippahardman.substack.com by Dr Philippa Hardman
A new L&D operating system for the AI Era?

This week I want to share a hypothesis I’m increasingly convinced of: that we are entering an age of the 90/10 model of L&D.

90/10 is a model where roughly 90% of “training” is delivered by AI coaches as daily performance support, and 10% of training is dedicated to developing complex and critical skills via high-touch, human-led learning experiences.

Proponents of 90/10 argue that the model isn’t about learning less, but about learning smarter by defining all jobs to be done as one of the following:

  • Delegate (the dead skills): Tasks that can be offloaded to AI.
  • Co-Create (the 90%): Tasks which well-defined AI agents can augment and help humans to perform optimally.
  • Facilitate (the 10%): Tasks which require high-touch, human-led learning to develop.

So if AI at work is now both real and material, the natural question for L&D is: how do we design for it? The short answer is to stop treating learning as an event and start treating it as a system.



My daughter’s generation expects to learn with AI, not pretend it doesn’t exist, because they know employers expect AI fluency and because AI will be ever-present in their adult lives.

— Jenny Maxell

The above quote was taken from this posting.


Unlocking Young Minds: How Gamified AI Learning Tools Inspire Fun, Personalized, and Powerful Education for Children in 2025 — from techgenyz.com by Sreyashi Bhattacharya

Highlight

  • Gamified AI Learning Tools personalize education by adapting the difficulty and content to each child’s pace, fostering confidence and mastery.
  • Engaging & Fun: Gamified elements like quests, badges, and stories keep children motivated and enthusiastic.
  • Safe & Inclusive: Attention to equity, privacy, and cultural context ensures responsible and accessible learning.

How to test GenAI’s impact on learning — from timeshighereducation.com by Thibault Schrepel
Rather than speculate on GenAI’s promise or peril, Thibault Schrepel suggests simple teaching experiments to uncover its actual effects

Generative AI in higher education is a source of both fear and hype. Some predict the end of memory, others a revolution in personalised learning. My two-year classroom experiment points to a more modest reality: Artificial intelligence (AI) changes some skills, leaves others untouched and forces us to rethink the balance.

This indicates that the way forward is to test, not speculate. My results may not match yours, and that is precisely the point. Here are simple activities any teacher can use to see what AI really does in their own classroom.

4. Turn AI into a Socratic partner
Instead of being the sole interrogator, let AI play the role of tutor, client or judge. Have students use AI to question them, simulate cross-examination or push back on weak arguments. New “study modes” now built into several foundation models make this kind of tutoring easy to set up. Professors with more technical skills can go further, design their own GPTs or fine-tuned models trained on course content and let students interact directly with them. The point is the practice it creates. Students learn that questioning a machine is part of learning to think like a professional.
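For readers who want to try the Socratic-partner setup themselves, here is a minimal sketch of one way to do it, assuming the OpenAI Python SDK and an API key. The system prompt, function names, and the placeholder model name are illustrative choices, not anything prescribed by the article.

```python
# A minimal sketch of the "AI as Socratic partner" idea: the system prompt tells the
# model to question the student rather than answer outright. Requires the OpenAI
# Python SDK and an API key; the model name below is a placeholder assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SOCRATIC_PROMPT = (
    "You are a Socratic tutor for a university law course. Never give the full answer. "
    "Respond to the student with one probing question at a time, point out weak spots in "
    "their argument, and simulate cross-examination when they make a claim."
)


def socratic_turn(history: list[dict], student_message: str, model: str = "gpt-4o-mini") -> str:
    """Send one student message and return the tutor's probing reply."""
    messages = [{"role": "system", "content": SOCRATIC_PROMPT}] + history
    messages.append({"role": "user", "content": student_message})
    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content


# Example exchange: the tutor should push back rather than resolve the issue.
history: list[dict] = []
reply = socratic_turn(history, "I think the defendant is clearly liable for negligence.")
print(reply)
```

The design choice that matters here is the system prompt: everything else is boilerplate, which is why the built-in "study modes" mentioned above make this pattern easy to reproduce without any code at all.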


Assessment tasks that support human skills — from timeshighereducation.com by Amir Ghapanchi and Afrooz Purarjomandlangrudi
Assignments that focus on exploration, analysis and authenticity offer a road map for university assessment that incorporates AI while retaining its rigour and human elements

Rethinking traditional formats

1. From essay to exploration 
When ChatGPT can generate competent academic essays in seconds, the traditional format’s dominance looks less secure as an assessment task. The future lies in moving from essays as knowledge reproduction to assessments that emphasise exploration and curation. Instead of asking students to write about a topic, challenge them to use artificial intelligence to explore multiple perspectives, compare outputs and critically evaluate what emerges.

Example: A management student asks an AI tool to generate several risk plans, then critiques the AI’s assumptions and identifies missing risks.


What your students are thinking about artificial intelligence — from timeshighereducation.com by Florencia Moore and Agostina Arbia
GenAI has been quickly adopted by students, but the consequences of using it as a shortcut could be grave. A study into how students think about and use GenAI offers insights into how teaching might adapt

However, when asked how AI negatively impacts their academic development, 29 per cent noted a “weakening or deterioration of intellectual abilities due to AI overuse”. The main concern cited was the loss of “mental exercise” and soft skills such as writing, creativity and reasoning.

The boundary between the human and the artificial does not seem so easy to draw, but as the poet Antonio Machado once said: “Traveller, there is no path; the path is made by walking.”


Jelly Beans for Grapes: How AI Can Erode Students’ Creativity — from edsurge.com by Thomas David Moore

There is nothing new about students trying to get one over on their teachers — there are probably cuneiform tablets about it — but when students use AI to generate what Shannon Vallor, philosopher of technology at the University of Edinburgh, calls a “truth-shaped word collage,” they are not only gaslighting the people trying to teach them, they are gaslighting themselves. In the words of Tulane professor Stan Oklobdzija, asking a computer to write an essay for you is the equivalent of “going to the gym and having robots lift the weights for you.”


Deloitte will make Claude available to 470,000 people across its global network — from anthropic.com

As part of the collaboration, Deloitte will establish a Claude Center of Excellence with trained specialists who will develop implementation frameworks, share leading practices across deployments, and provide ongoing technical support to create the systems needed to move AI pilots to production at scale. The collaboration represents Anthropic’s largest enterprise AI deployment to date, available to more than 470,000 Deloitte people.

Deloitte and Anthropic are co-creating a formal certification program to train and certify 15,000 of its professionals on Claude. These practitioners will help support Claude implementations across Deloitte’s network and Deloitte’s internal AI transformation efforts.


How AI Agents are finally delivering on the promise of Everboarding: driving retention when it counts most — from premierconstructionnews.com

Everboarding flips this model. Rather than ending after orientation, everboarding provides ongoing, role-specific training and support throughout the employee journey. It adapts to evolving responsibilities, reinforces standards, and helps workers grow into new roles. For high-turnover, high-pressure environments like retail, it’s a practical solution to a persistent challenge.

AI agents will be instrumental in the success of everboarding initiatives; they can provide a much more tailored training and development process for each individual employee, keeping track of which training modules may need to be completed, or where staff members need or want to develop further. This personalisation not only helps staff feel more satisfied with their current role, but also guides them on the right path to progress in their individual careers.
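The article names no specific system, but the tracking logic it describes can be pictured with a small sketch like the one below: given a role and the modules already completed, surface what is still outstanding. Role names, module names, and the function itself are invented for illustration.

```python
# Sketch of the personalisation logic an everboarding agent might apply:
# given a role and completed modules, surface what is still outstanding.
# Roles and module names here are invented for illustration.
ROLE_CURRICULA = {
    "retail_associate": ["food_safety_basics", "till_operation", "customer_deescalation"],
    "shift_lead": ["food_safety_basics", "rota_planning", "incident_reporting"],
}


def next_modules(role: str, completed: set[str], limit: int = 2) -> list[str]:
    """Return up to `limit` outstanding modules for the employee's current role."""
    curriculum = ROLE_CURRICULA.get(role, [])
    outstanding = [m for m in curriculum if m not in completed]
    return outstanding[:limit]


# Example: an associate who finished food safety gets the next bite-sized items.
print(next_modules("retail_associate", completed={"food_safety_basics"}))
# -> ['till_operation', 'customer_deescalation']
```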

Digital frontline apps are also ideal for everboarding. They offer bite-sized training that staff can complete anytime, whether during quiet moments on shift or in real time on the job, all accessible from their mobile devices.


TeachLM: insights from a new LLM fine-tuned for teaching & learning — from drphilippahardman.substack.com by Dr Philippa Hardman
Six key takeaways, including what the research tells us about how well AI performs as an instructional designer

As I and many others have pointed out in recent months, LLMs are great assistants but very ineffective teachers. Despite the rise of “educational LLMs” with specialised modes (e.g. Anthropic’s Learning Mode, OpenAI’s Study Mode, Google’s Guided Learning), AI typically eliminates the productive struggle, open exploration and natural dialogue that are fundamental to learning.

This week, Polygence, in collaboration with Stanford University researcher Prof Dora Demszky, published first-of-its-kind research on a new model — TeachLM — built to address this gap.

In this week’s blog post, I deep dive into what the research found and share the six key findings — including reflections on how well TeachLM performs on instructional design.


The Dangers of using AI to Grade — from marcwatkins.substack.com by Marc Watkins
Nobody Learns, Nobody Gains

AI as an assessment tool represents an existential threat to education because no matter how you try and establish guardrails or best practices around how it is employed, using the technology in place of an educator ultimately cedes human judgment to a machine-based process. It also devalues the entire enterprise of education and creates a situation where the only way universities can add value to education is by further eliminating costly human labor.

For me, the purpose of higher education is about human development, critical thinking, and the transformative experience of having your ideas taken seriously by another human being. That’s not something we should be in a rush to outsource to a machine.

 

7 Teaching Practices that Nurture Student Voice — from cultofpedagogy.com by Jennifer Gonzalez

In our efforts to improve school, especially in the United States, student voice has really gotten lost. We focus on test scores, top-down curriculum, and measures of success that never quite get to the humanity of our students. Not only have these efforts not succeeded in raising test scores (Schwartz, 2025), they haven’t given us much satisfaction in other ways, either: In a recent survey, nearly half of educators reported that student behavior was worse than before the pandemic, and that number had grown since teachers were surveyed just two years earlier (Stephens, 2025).

Although there are most certainly individual schools where great things are happening, too many schools are still missing the mark. Too many schools keep trying to address these problems without hearing from the very people who are impacted most: the students. 

But there is another way. Four years ago, I started talking a lot about a new book I’d read called Street Data


Learners need: More voice. More choice. More control. -- this image was created by Daniel Christian

 

20+ Kid Tools for Better Screen Time  — from wondertools.substack.com by Jeremy Caplan and Kevin Maguire
Dad-tested apps to spark creativity (mostly free)

I had a fruitful recent conversation about resources for kids with a fellow dad, Kevin Maguire, who writes the great newsletter The New Fatherhood. If you’re a dad looking for great reads and a sense of community, check out Kevin’s newsletter. (Also read Recalculating, by Ignacio Pereyra.) Kevin wrote the section below about simplifying screens and shared the tip about muted.io.

The rest of the apps and resources below are ones I’ve enjoyed in recent years with my wife and daughters. From coding with visual blocks to identifying plants on nature walks, these are some of our favorite tools for sparking creativity.


Joy of Missing Out (JOMO) — including a Family section

Take FOMO and flip it on its head. That’s JOMO – the Joy of Missing Out.

At JOMO(campus), we believe digital wellness isn’t just a curriculum—it’s a culture. One rooted in joy, human connection, and intentional living. We equip schools to lead with clarity, care, and courage—helping every member of your community ask: “Who am I becoming in the digital age?”

Our mission is to help school communities create a flourishing campus culture where students are happier, healthier, and more focused — empowering them to make the impact they were born to make.

Our mission is to make digital well-being accessible for every student, fostering resilience and the skills to thrive in a world where digital pressures are ever-present. By teaching digital self-awareness and cultivating joy, we’re committed to supporting students in navigating technology’s challenges with confidence and intentionality.

 

U.S. Law Schools Make AI Training Mandatory as Technology Becomes Core Legal Skill — from jdjournal.com by Fatima E

A growing number of U.S. law schools are now requiring students to train in artificial intelligence, marking a shift from optional electives to essential curriculum components. What was once treated as a “nice-to-have” skill is fast becoming integral as the legal profession adapts to the realities of AI tools.

From Experimentation to Obligation
Until recently, most law schools relegated AI instruction to upper-level electives or let individual professors decide whether to incorporate generative AI into their teaching. Now, however, at least eight law schools require incoming students—especially in their first year—to undergo training in AI, either during orientation, in legal research and writing classes, or via mandatory standalone courses.

Some of the institutions pioneering the shift include Fordham University, Arizona State University, Stetson University, Suffolk University, Washington University in St. Louis, Case Western, and the University of San Francisco.


Beyond the Classroom & LMS: How AI Coaching is Transforming Corporate Learning — by Dr Philippa Hardman
What a new HBR study tells about the changing nature of workplace L&D

There’s a vision that has teased Learning & Development for decades: a vision of closing the gap between learning and doing—of moving beyond stopping work to take a course, and instead bringing support directly into the workflow. This concept of “learning in the flow of work” has been imagined, explored, and discussed for decades—but never realised. Until now…?

This week, an article published in Harvard Business Review provided some compelling evidence that a long-awaited shift from “courses to coaches” might not just be possible, but also powerful.

The two settings were a) traditional in-classroom workshops, led by an expert facilitator and b) AI-coaching, delivered in the flow of work. The results were compelling….

TLDR: The evidence suggests that “learning in the flow of work” is not only feasible as a result of gen AI—it also shows potential to be more scalable, more equitable and more efficient than traditional classroom/LMS-centred models.


The 10 Most Popular AI Chatbots For Educators — from techlearning.com by Erik Ofgang
Educators don’t need to use each of these chatbots, but it pays to be generally aware of the most popular AI tools

I’ve spent time testing many of these AI chatbots for potential uses and abuses in my own classes, so here’s a quick look at each of the top 10 most popular AI chatbots, and what educators should know about each. If you’re looking for more detail on a specific chatbot, click the link, as either I or other Tech & Learning writers have done deeper dives on all these tools.


…which links to:

Beyond Tool or Threat: GenAI and the Challenge It Poses to Higher Education — from er.educause.edu by Adam Maksl, Anne Leftwich, Justin Hodgson and Kevin Jones

Generative artificial intelligence isn’t just a new tool—it’s a catalyst forcing the higher education profession to reimagine its purpose, values, and future.

As experts in educational technology, digital literacy, and organizational change, we argue that higher education must seize this moment to rethink not just how we use AI, but how we structure and deliver learning altogether.


At This Rural Microschool, Students Will Study With AI and Run an Airbnb — from edsurge.com by Daniel Mollenkamp

Over the past decade, microschools — experimental small schools that often have mixed-age classrooms — have expanded.

Some superintendents have touted the promise of microschools as a means for public schools to better serve their communities’ needs while still keeping children enrolled in the district. But under a federal administration that’s trying to dismantle public education and boost homeschool options, others have critiqued poor oversight and a lack of information for assessing these models.

Microschools offer a potential avenue to bring innovative, modern experiences to rural areas, argues Keith Parker, superintendent of Elizabeth City-Pasquotank Public Schools.



Are We Ready for the AI University? An AI in Higher Education Webinar with Dr. Scott Latham


Imagining Teaching with AI Agents… — from michellekassorla.substack.com by Michelle Kassorla
Teaching with AI is only one step toward educational change, what’s next?

More than two years ago I started teaching with AI in my classes. At first I taught against AI, then I taught with AI, and now I am moving into unknown territory: agents. I played with Manus and n8n and some other agents, but I really never got excited about them. They seemed more trouble than they were worth. It seemed they were no more than an AI taskbot overseeing some other AI bots, and that they weren’t truly collaborating. Now, I’m looking at Perplexity’s Comet browser and their AI agent and I’m starting to get ideas for what the future of education might hold.

I have written several times about the dangers of AI agents and how they fundamentally challenge our systems, especially online education. I know there is no way that we can effectively stop them–maybe slow them a little, but definitely not stop them. I am already seeing calls to block and ban agents–just like I saw (and still see) calls to block and ban AI–but the truth is they are the future of work and, therefore, the future of education.

So, yes! This is my next challenge: teaching with AI agents. I want to explore this idea, and as I started thinking about it, I got more and more excited. But let me back up a bit. What is an agent and how is it different than Generative AI or a bot?

 

K-12 to Career — from the-job.beehiiv.com by Paul Fain
Ohio eases eligibility rules for high school students to pursue college-level coursework in high-demand fields.

Three Ohio community colleges offer free industry-recognized credentials in manufacturing to more high school students. Also, new career-connected AP courses designed with industry input, a partnership on skilled trade prep for K-12 students, and essays on the race to define the future of credentials and how data and research can inform Workforce Pell.

 
 

ChatGPT: the world’s most influential teacher — from drphilippahardman.substack.com by Dr. Philippa Hardman; emphasis DSC
New research shows that millions of us are “learning with AI” every week: what does this mean for how (and how well) humans learn?

This week, an important piece of research landed that confirms the gravity of AI’s role in the learning process. The TLDR is that learning is now a mainstream use case for ChatGPT; around 10.2% of all ChatGPT messages (that’s ~2BN messages sent by over 700 million users per week) are requests for help with learning.

The research shows that about 10.2% of all messages are tutoring/teaching, and within the “Practical Guidance” category, tutoring is 36%. “Asking” interactions are growing faster than “Doing” and are rated higher quality by users. Younger people contribute a huge share of messages, and growth is fastest in low- and middle-income countries (How People Use ChatGPT, 2025).

If AI is already acting as a global tutor, the question isn’t “will people learn with AI?”—they already are. The real question we need to ask is: what does great learning actually look like, and how should AI evolve to support it? That’s where decades of learning science help us separate “feels like learning” from “actually gaining new knowledge and skills”.

Let’s dive in.

 
© 2025 | Daniel Christian