AI Is Quietly Rewiring the ADDIE Model (In a Good Way) — from drphilippahardman.substack.com by Dr. Philippa Hardman
The traditional ADDIE workflow isn’t dead, but it is evolving

The real story isn’t what AI can produce — it’s how it changes the decisions we make at every stage of instructional design.

After working with thousands of instructional designers in my bootcamp, I’ve learned something counterintuitive: the best teams aren’t the ones with the fanciest AI tools; they’re the ones who know when to use which mode, and when to use none at all.

Once you recognise that, you start to see instructional design differently — not as a linear process, but as a series of decision loops where AI plays distinct roles.

In this post, I show you the 3 modes of AI that actually matter in instructional design — and map them across every phase of ADDIE so you know exactly when to let AI run, and when to slow down and think.


Also see:

Generative AI for Course Design: Writing Effective Prompts for Multiple Choice Question Development — from onlineteaching.umich.edu by Hedieh Najafi

In higher education, developing strong multiple-choice questions can be a time-intensive part of the course design process. Developing such items requires subject-matter expertise and assessment literacy, and for faculty and designers who are creating and producing online courses, it can be difficult to find the capacity to craft quality multiple-choice questions.

At the University of Michigan Center for Academic Innovation, learning experience designers are using generative artificial intelligence to streamline the multiple-choice question development process and help ameliorate this issue. In this article, I summarize one of our projects that explored effective prompting strategies to develop multiple-choice questions with ChatGPT for our open course portfolio. We examined how structured prompting can improve the quality of AI-generated assessments, producing relevant comprehension and recall items and options that include plausible distractors.

Achieving this goal enables us to develop several ungraded practice opportunities, preparing learners for their graded assessments while also freeing up more time for course instructors and designers.
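The structured-prompting idea described above can be sketched in code. The following is a minimal illustration only; the field names and constraints are my assumptions, not the actual prompt used by the U-M Center for Academic Innovation. The idea is to pin down the learning objective, source passage, option count, and distractor quality before generation:

```python
def build_mcq_prompt(objective: str, passage: str, n_questions: int = 3,
                     n_options: int = 4) -> str:
    """Assemble a structured prompt for multiple-choice question generation.

    A hypothetical template: it constrains question type, option count,
    and distractor plausibility rather than asking for "some quiz questions".
    """
    return "\n".join([
        "You are an assessment writer for an online course.",
        f"Learning objective: {objective}",
        f"Source material:\n{passage}",
        f"Write {n_questions} multiple-choice questions, each with "
        f"{n_options} options and exactly one correct answer.",
        "Constraints:",
        "- Target comprehension and recall of the source material only.",
        "- Make every distractor plausible (a common misconception or near-miss).",
        "- Avoid 'all of the above' and 'none of the above'.",
        "- After each question, label the correct answer and briefly justify it.",
    ])

prompt = build_mcq_prompt(
    objective="Distinguish formative from summative assessment",
    passage="Formative assessments are used while content is being taught...",
)
print(prompt)
```

The template could then be sent to any chat model; the point is that the structure (objective, source, constraints) travels with every request rather than being improvised each time.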

 

Major Changes Reshape Law Schools Nationwide in 2026 — from jdjournal.com by Ma Fatima

Law schools across the United States are entering one of the most transformative periods in recent memory. In 2026, legal education is being reshaped by leadership turnover, shifting accreditation standards, changes to student loan policies, and the introduction of a redesigned bar exam. Together, these developments are forcing law schools to rethink how they educate students and prepare future lawyers for a rapidly evolving legal profession.

Also from jdjournal.com, see:

  • Healthcare Industry Legal Careers: High-Growth Roles and Paths — from jdjournal.com by Ma Fatima
    The healthcare industry is rapidly emerging as one of the most promising and resilient sectors for legal professionals, driven by expanding regulations, technological innovation, and an increasingly complex healthcare delivery system. As hospitals, life sciences companies, insurers, and digital health platforms navigate constant regulatory change, demand for experienced legal talent continues to rise.
 


Gen AI Is Going Mainstream: Here’s What’s Coming Next — from joshbersin.com by Josh Bersin

I just completed nearly 60,000 miles of travel across Europe, Asia, and the Middle East, meeting with hundreds of companies to discuss their AI strategies. While every company’s maturity is different, one thing is clear: AI as a business tool has arrived. It’s real, and the use cases are growing.

A new survey by Wharton shows that 46% of business leaders use Gen AI daily and 80% use it weekly. And among these users, 72% are measuring ROI and 74% report a positive return. HR, by the way, is the #3 department in use cases, only slightly behind IT and Finance.

What are companies getting out of all this? Productivity. The #1 use case, by far, is what we call “stage 1” usage – individual productivity. 



From DSC:
Josh writes: “Many of our large clients are now implementing AI-native learning systems and seeing 30-40% reduction in staff with vast improvements in workforce enablement.”

While I get the appeal (and ROI) from management’s and shareholders’ perspective, this represents a growing concern for employment and people’s ability to earn a living. 

And while I highly respect Josh and his work through the years, I disagree that we’re over the problems with AI and how people are using it: 

“Two years ago the NYT was trying to frighten us with stories of AI acting as a romance partner. Well, those stories are over, and thanks to a trillion dollars (literally) of capital investment in infrastructure, engineering, and power plants, this stuff is reasonably safe.”

Those stories are just beginning…they’re not close to being over. 


“… imagine a world where there’s no separation between learning and assessment…” — from aiedusimplified.substack.com by Lance Eaton, Ph.D. and Tawnya Means
An interview with Tawnya Means

So let’s imagine a world where there’s no separation between learning and assessment: it’s ongoing. There’s always assessment, always learning, and they’re tied together. Then we can ask: what is the role of the human in that world? What is it that AI can’t do?

Imagine something like that in higher ed. There could be tutoring or skill-based work happening outside of class, and then relationship-based work happening inside of class, whether online, in person, or some hybrid mix.

The aspects of learning that don’t require relational context could be handled by AI, while the human parts remain intact. For example, I teach strategy and strategic management. I teach people how to talk with one another about the operation and function of a business. I can help students learn to be open to new ideas, recognize when someone pushes back out of fear of losing power, or draw from my own experience in leading a business and making future-oriented decisions.

But the technical parts, such as frameworks like SWOT analysis and the mechanics of comparing alternative viewpoints in a boardroom, could be managed through simulations or reports that receive immediate feedback from AI. The relational aspects, the human mentoring, would still happen with me as their instructor.

Part 2 of their interview is here:


 

“OpenAI’s Atlas: the End of Online Learning—or Just the Beginning?” [Hardman] + other items re: AI in our LE’s

OpenAI’s Atlas: the End of Online Learning—or Just the Beginning? — from drphilippahardman.substack.com by Dr. Philippa Hardman

My take is this: in all of the anxiety lies a crucial and long-overdue opportunity to deliver better learning experiences. Precisely because Atlas perceives the same context in the same moment as you, it can transform learning into a process aligned with core neuroscientific principles, including active retrieval, guided attention, adaptive feedback and context-dependent memory formation.

Perhaps in Atlas we have a browser that for the first time isn’t just a portal to information, but one which can become a co-participant in active cognitive engagement—enabling iterative practice, reflective thinking, and real-time scaffolding as you move through challenges and ideas online.

With this in mind, I put together 10 use cases for Atlas for you to try for yourself.

6. Retrieval Practice
What: Pulling information from memory drives retention better than re-reading.
Why: Practice testing delivers medium-to-large effects (Adesope et al., 2017).
Try: Open a document with your previous notes. Ask Atlas for a mixed activity set: “Quiz me on the Krebs cycle—give me a near-miss, high-stretch MCQ, then a fill-in-the-blank, then ask me to explain it to a teen.”
Atlas uses its browser memory to generate targeted questions from your actual study materials, supporting spaced, varied retrieval.




From DSC:
A quick comment: I appreciate these ideas and approaches from Katarzyna and Rita. That said, someone will need to make sure the AI models/platforms/tools are given up-to-date information and updated instructions (i.e., any new procedures, steps to take, etc.). Perhaps I’m missing the boat here, but an internal AI platform will only be as useful as its access to current information and instructions.


 

The above posting on LinkedIn then links to this document


Designing Microsoft 365 Copilot to empower educators, students, and staff — from microsoft.com by Deirdre Quarnstrom

While over 80% of respondents in the 2025 AI in Education Report have already used AI for school, we believe there are significant opportunities to design AI that can better serve each of their needs and broaden access to the latest innovation.

That’s why today [10/15/25], we’re announcing AI-powered experiences built for teaching and learning at no additional cost, new integrations in Microsoft 365 apps and Learning Management Systems, and an academic offering for Microsoft 365 Copilot.

Introducing AI-powered teaching and learning
Empowering educators with Teach

We’re introducing Teach to help streamline class prep and adapt AI to support educators’ teaching expertise with intuitive and customizable features. In one place, educators can easily access AI-powered teaching tools to create lesson plans, draft materials like quizzes and rubrics, and quickly make modifications to language, reading level, length, difficulty, alignment to relevant standards, and more.

 

 

“A new L&D operating system for the AI Era?” [Hardman] + other items re: AI in our learning ecosystems

From 70/20/10 to 90/10 — from drphilippahardman.substack.com by Dr Philippa Hardman
A new L&D operating system for the AI Era?

This week I want to share a hypothesis I’m increasingly convinced of: that we are entering an age of the 90/10 model of L&D.

90/10 is a model where roughly 90% of “training” is delivered by AI coaches as daily performance support, and 10% of training is dedicated to developing complex and critical skills via high-touch, human-led learning experiences.

Proponents of 90/10 argue that the model isn’t about learning less, but about learning smarter by defining all jobs to be done as one of the following:

  • Delegate (the dead skills): Tasks that can be offloaded to AI.
  • Co-Create (the 90%): Tasks which well-defined AI agents can augment and help humans to perform optimally.
  • Facilitate (the 10%): Tasks which require high-touch, human-led learning to develop.

So if AI at work is now both real and material, the natural question for L&D is: how do we design for it? The short answer is to stop treating learning as an event and start treating it as a system.



My daughter’s generation expects to learn with AI, not pretend it doesn’t exist, because they know employers expect AI fluency and because AI will be ever-present in their adult lives.

— Jenny Maxell

The above quote was taken from this posting.


Unlocking Young Minds: How Gamified AI Learning Tools Inspire Fun, Personalized, and Powerful Education for Children in 2025 — from techgenyz.com by Sreyashi Bhattacharya

Highlights

  • Gamified AI Learning Tools personalize education by adapting the difficulty and content to each child’s pace, fostering confidence and mastery.
  • Engaging & Fun: Gamified elements like quests, badges, and stories keep children motivated and enthusiastic.
  • Safe & Inclusive: Attention to equity, privacy, and cultural context ensures responsible and accessible learning.

How to test GenAI’s impact on learning — from timeshighereducation.com by Thibault Schrepel
Rather than speculate on GenAI’s promise or peril, Thibault Schrepel suggests simple teaching experiments to uncover its actual effects

Generative AI in higher education is a source of both fear and hype. Some predict the end of memory, others a revolution in personalised learning. My two-year classroom experiment points to a more modest reality: Artificial intelligence (AI) changes some skills, leaves others untouched and forces us to rethink the balance.

This indicates that the way forward is to test, not speculate. My results may not match yours, and that is precisely the point. Here are simple activities any teacher can use to see what AI really does in their own classroom.

4. Turn AI into a Socratic partner
Instead of being the sole interrogator, let AI play the role of tutor, client or judge. Have students use AI to question them, simulate cross-examination or push back on weak arguments. New “study modes” now built into several foundation models make this kind of tutoring easy to set up. Professors with more technical skills can go further and design their own GPTs or fine-tuned models trained on course content, letting students interact directly with them. The point is the practice it creates. Students learn that questioning a machine is part of learning to think like a professional.


Assessment tasks that support human skills — from timeshighereducation.com by Amir Ghapanchi and Afrooz Purarjomandlangrudi
Assignments that focus on exploration, analysis and authenticity offer a road map for university assessment that incorporates AI while retaining its rigour and human elements

Rethinking traditional formats

1. From essay to exploration 
When ChatGPT can generate competent academic essays in seconds, the traditional format’s dominance looks less secure as an assessment task. The future lies in moving from essays as knowledge reproduction to assessments that emphasise exploration and curation. Instead of asking students to write about a topic, challenge them to use artificial intelligence to explore multiple perspectives, compare outputs and critically evaluate what emerges.

Example: A management student asks an AI tool to generate several risk plans, then critiques the AI’s assumptions and identifies missing risks.


What your students are thinking about artificial intelligence — from timeshighereducation.com by Florencia Moore and Agostina Arbia
GenAI has been quickly adopted by students, but the consequences of using it as a shortcut could be grave. A study into how students think about and use GenAI offers insights into how teaching might adapt

However, when asked how AI negatively impacts their academic development, 29 per cent noted a “weakening or deterioration of intellectual abilities due to AI overuse”. The main concern cited was the loss of “mental exercise” and soft skills such as writing, creativity and reasoning.

The boundary between the human and the artificial does not seem so easy to draw, but as the poet Antonio Machado once said: “Traveller, there is no path; the path is made by walking.”


Jelly Beans for Grapes: How AI Can Erode Students’ Creativity — from edsurge.com by Thomas David Moore

There is nothing new about students trying to get one over on their teachers — there are probably cuneiform tablets about it — but when students use AI to generate what Shannon Vallor, philosopher of technology at the University of Edinburgh, calls a “truth-shaped word collage,” they are not only gaslighting the people trying to teach them, they are gaslighting themselves. In the words of Tulane professor Stan Oklobdzija, asking a computer to write an essay for you is the equivalent of “going to the gym and having robots lift the weights for you.”


Deloitte will make Claude available to 470,000 people across its global network — from anthropic.com

As part of the collaboration, Deloitte will establish a Claude Center of Excellence with trained specialists who will develop implementation frameworks, share leading practices across deployments, and provide ongoing technical support to create the systems needed to move AI pilots to production at scale. The collaboration represents Anthropic’s largest enterprise AI deployment to date, available to more than 470,000 Deloitte people.

Deloitte and Anthropic are co-creating a formal certification program to train and certify 15,000 of its professionals on Claude. These practitioners will help support Claude implementations across Deloitte’s network and Deloitte’s internal AI transformation efforts.


How AI Agents are finally delivering on the promise of Everboarding: driving retention when it counts most — from premierconstructionnews.com

Everboarding flips this model. Rather than ending after orientation, everboarding provides ongoing, role-specific training and support throughout the employee journey. It adapts to evolving responsibilities, reinforces standards, and helps workers grow into new roles. For high-turnover, high-pressure environments like retail, it’s a practical solution to a persistent challenge.

AI agents will be instrumental in the success of everboarding initiatives; they can provide a much more tailored training and development process for each individual employee, keeping track of which training modules may need to be completed, or where staff members need or want to develop further. This personalisation helps staff to feel not only more satisfied with their current role, but also guides them on the right path to progress in their individual careers.

Digital frontline apps are also ideal for everboarding. They offer bite-sized training that staff can complete anytime, whether during quiet moments on shift or in real time on the job, all accessible from their mobile devices.


TeachLM: insights from a new LLM fine-tuned for teaching & learning — from drphilippahardman.substack.com by Dr Philippa Hardman
Six key takeaways, including what the research tells us about how well AI performs as an instructional designer

As I and many others have pointed out in recent months, LLMs are great assistants but very ineffective teachers. Despite the rise of “educational LLMs” with specialised modes (e.g. Anthropic’s Learning Mode, OpenAI’s Study Mode, Google’s Guided Learning) AI typically eliminates the productive struggle, open exploration and natural dialogue that are fundamental to learning.

This week, Polygence, in collaboration with Stanford University researcher Prof Dora Demszky, published first-of-its-kind research on a new model, TeachLM, built to address this gap.

In this week’s blog post, I dive deep into what the research found and share the six key findings, including reflections on how well TeachLM performs on instructional design.


The Dangers of using AI to Grade — from marcwatkins.substack.com by Marc Watkins
Nobody Learns, Nobody Gains

AI as an assessment tool represents an existential threat to education because no matter how you try to establish guardrails or best practices around how it is employed, using the technology in place of an educator ultimately cedes human judgment to a machine-based process. It also devalues the entire enterprise of education and creates a situation where the only way universities can add value to education is by further eliminating costly human labor.

For me, the purpose of higher education is about human development, critical thinking, and the transformative experience of having your ideas taken seriously by another human being. That’s not something we should be in a rush to outsource to a machine.

 

ChatGPT: the world’s most influential teacher — from drphilippahardman.substack.com by Dr. Philippa Hardman; emphasis DSC
New research shows that millions of us are “learning with AI” every week: what does this mean for how (and how well) humans learn?

This week, an important piece of research landed that confirms the gravity of AI’s role in the learning process. The TLDR is that learning is now a mainstream use case for ChatGPT; around 10.2% of all ChatGPT messages (that’s ~2BN messages sent by over 7 million users per week) are requests for help with learning.

The research shows that about 10.2% of all messages are tutoring/teaching, and within the “Practical Guidance” category, tutoring is 36%. “Asking” interactions are growing faster than “Doing” and are rated higher quality by users. Younger people contribute a huge share of messages, and growth is fastest in low- and middle-income countries (How People Use ChatGPT, 2025).

If AI is already acting as a global tutor, the question isn’t “will people learn with AI?”—they already are. The real question we need to ask is: what does great learning actually look like, and how should AI evolve to support it? That’s where decades of learning science help us separate “feels like learning” from “actually gaining new knowledge and skills”.

Let’s dive in.

 

Here are my favorite back-to-school activities to strengthen learning — from retrievalpractice.org by Pooja K. Agarwal, Ph.D.

Welcome back to school! For most of us (myself included), the whirlwind of lesson prep, meetings, professional development—and of course, teaching—is here. Keep reading for my favorite back-to-school activities to engage students with retrieval practice during the first week of class.

It may (or may not) surprise you to know that my first day of class is full of retrieval practice. Even if you haven’t introduced content yet, use retrieval practice the first day or week of class. Here’s how, with quick activities you can adapt for K–12 students, higher ed courses, and all content areas:


How to Teach a Good First Day of Class — by James Lang; via Dr. Pooja Agarwal’s posting above

What you can expect to find here:

  • I’ll start, as we academics so love to do, with a little bit of theory — specifically, four core principles that can help shape your planning for the first day of your course.
  • Next, I’ll cover the logistics of a successful first day, including managing the space and technology as well as getting to know your students.
  • To show you how to put the principles and the logistics into practice, I will provide examples of what a good set of first-day activities might look like in four disciplines.
  • I’ll finish with some suggestions for how to support the good work you have done on the first day with some follow-up activities.

7 Pieces of Advice for New Teachers — from edutopia.org by Brienne May
Focus on relationships with students and colleagues to make a good start to the year—and remember to ask for what you need.

Too often, teacher preparation programs are rich in theory but light on practical guidance. After working hard through my undergraduate classes, completing student teaching, and spending countless hours laminating and cutting, I still found myself on the first day of school, standing in front of a room full of expectant faces with eager eyes, and realized I had no idea what to do next. I didn’t know what to say to students in that moment, let alone how to survive the following 180 days. Twelve years later, I have collected a trove of advice I wish I could have shared with that fresh-faced teacher.


The Transient Information Effect: Why Great Explanations Don’t Always Stick — from scienceoflearning.substack.com by Nidhi Sachdeva and Jim Hewitt
In this post, Dr. John Sweller describes how the Transient Information Effect can overload student working memory and what teachers can do about it.

Highlights:

  • The Transient Information Effect happens when important information disappears before learners can process and remember it.
  • Dr. John Sweller, who first studied the Transient Information Effect, answers our questions about this overlooked learning challenge.
  • Turning transient information into something students can revisit (like writing key steps on the board) can help explanations stick.

41 Elementary Classroom Jobs to Build Shared Responsibility and Community — from edutopia.org by Donna Paul
Classroom jobs help students feel seen, trusted, and excited to contribute to their classroom community.

Each fall, one of the first routines I introduce is our classroom job board. It’s more than a list of tasks—it helps students feel that they belong and have real roles in our shared space. Over the years, I’ve expanded beyond classic jobs like Line Leader and Pencil Sharpener to include creative roles with quirky titles that engage and resonate with students.

Here are the jobs that have helped my students feel seen, trusted, and excited to contribute.


Guiding Students to Overcome Learned Helplessness — from edutopia.org by Michelle Singh
New teachers can create an environment where students feel supported and understand that mistakes are part of the learning process.


Creating a Kid-Led Hall of Fame for Books — from edutopia.org by Eric Hall
Allowing elementary students to nominate and vote for their favorite books of the year can create a culture of celebration in the classroom.

When I started teaching, I remembered that conversation with my elementary school librarian. I thought, “Why should adults have all the fun?” I wanted my students to experience the excitement of recognizing books they thought were the best. And just like that, the Hallbery Awards were born and continued twice a year for over 15 years. (Why Hallbery? Because my last name is Hall.)


Understanding Diagnostic, Formative, and Summative Assessments — from edmentum.com

Today, we’re taking a look at the three primary forms of assessments—diagnostic, formative, and summative—with the goal of not only differentiating between them but also better understanding the purpose and potential power of each.

At their core, each of the three primary assessment types serves a distinct purpose. Diagnostic assessments are used before instruction to help identify where students are in their comprehension of academic content. Formative assessments are used while content is being taught to understand what students are picking up, to guide their learning, and to help teachers determine what to focus on moving forward. Summative assessments are used after instruction to evaluate the outcomes of student learning: what, or how much, they ultimately learned.


How one state revamped high school to reflect reality: Not everyone goes to college — from hechingerreport.org by Kavitha Cardoza
Indiana’s initial plan for revised graduation requirements was criticized for prioritizing workforce skills over academic preparedness. The state has tried to find a balance between the two.

This story is part of Hechinger’s ongoing coverage about rethinking high school. Read about high school apprenticeships in Indiana, a new diploma in Alabama that trades chemistry for carpentry, and “career education for all” in Kentucky.

The “New Indiana Diploma” — which was signed into law in April and goes into effect for all incoming first-year students this academic year — gives students the option to earn different “seals” in addition to a basic diploma, depending on whether they plan to attend college, go straight to work or serve in the military. Jenner describes it as an effort to tailor the diploma to students’ interests, expose students to careers and recognize different forms of student achievement.


How Teachers in This District Pushed to Have Students Spend Less Time Testing — from edweek.org by Elizabeth Heubeck

Students in one Arizona district will take fewer standardized tests this school year, the result of an educator-led push to devote less time to testing.

The Tucson Education Association, backed by the school board and several parents, reached an agreement with the Tucson Unified school system in May to reduce the number of district-mandated standardized assessments students take annually starting in the 2025-26 academic year.

Just 25 percent of educators agreed that state-mandated tests provide useful information for the teachers in their school, according to a 2023 EdWeek Research Center survey of teachers, principals, and district leaders. 


30 Ways to Bring Calm to a Noisy High School Classroom — from edutopia.org by Anne Noyes Saini
From ‘finding the lull’ to the magic of a dramatic whisper, these teacher-tested strategies quickly get high school students focused and back on track.


Approaching Experiential Learning as a Continuum — from edutopia.org by Bill Manchester
Teachers can consider 12 characteristics of experiential learning to make lessons more or less active for students.


 

AI and Higher Ed: An Impending Collapse — from insidehighered.com by Robert Niebuhr; via George Siemens; I also think George’s excerpt (see below) gets right to the point.
Universities’ rush to embrace AI will lead to an untenable outcome, Robert Niebuhr writes.

Herein lies the trap. If students learn how to use AI to complete assignments and faculty use AI to design courses, assignments, and grade student work, then what is the value of higher education? How long until people dismiss the degree as an absurdly overpriced piece of paper? How long until that trickles down and influences our economic and cultural output? Simply put, can we afford a scenario where students pretend to learn and we pretend to teach them?


This next report doesn’t look too good for traditional institutions of higher education either:


No Country for Young Grads — from burningglassinstitute.org

For the first time in modern history, a bachelor’s degree is no longer a reliable path to professional employment. Recent graduates face rising unemployment and widespread underemployment as structural, not cyclical, forces reshape entry-level work. This new report identifies four interlocking drivers: an AI-powered “Expertise Upheaval” eliminating many junior tasks, a post-pandemic shift to lean staffing and risk-averse hiring, AI acting as an accelerant to these changes, and a growing graduate glut. As a result, young degree holders are uniquely seeing their prospects deteriorate, even as the rest of the economy remains robust. Read the full report to explore the data behind these trends.

The above article was via Brandon Busteed on LinkedIn:

 

Osgoode’s new simulation-based learning tool aims to merge ethical and practical legal skills — from canadianlawyermag.com by Tim Wilbur
The designer speaks about his vision for redefining legal education through an innovative platform

The disconnection between legal education and the real world starkly contrasted with what he expected law school to be. “I thought rather naively…this would be a really interesting experience…linked to lawyers and what lawyers are doing in society…Far from it. It was solidly academic, so uninteresting, and I thought it’s got to be better than this.”

These frustrations inspired his work on simulation-based education, which seeks to produce “client-ready” lawyers and professionals who reflect deeply on their future roles. Maharg recently worked as a consultant with Osgoode Professional Development at Osgoode Hall Law School to design a platform that eschews many of the assumptions about legal education to deliver practical skills with real-world scenarios.

Osgoode’s SIMPLE platform – short for “simulated professional learning environment” – integrates case management systems and simulation engines to immerse students in practical scenarios.

“It’s actually to get them thinking hard about what they do when they act as lawyers and what they will do when they become lawyers…putting it into values and an ethical framework, as well as making it highly intensively practical,” Maharg says.


And speaking of legal training, also see:

AI in law firms should be a training tool, not a threat, for young lawyers — from canadianlawyermag.com by Tim Wilbur
Tech should free associates for deeper learning, not remove them from the process

AI is rapidly transforming legal practice. Today, tools handle document review and legal research at a pace unimaginable just a few years ago. As recent Canadian Lawyer reporting shows, legal AI adoption is outpacing expectations, especially among in-house teams, and is fundamentally reshaping how legal services are delivered.

Crucially, though, AI should not replace associates. Instead, it should relieve them of repetitive tasks and allow them to focus on developing judgment, client management, and strategic thinking. As I’ve previously discussed regarding the risks of banning AI in court, the future of law depends on blending technological fluency with the human skills clients value most.


Also, the following relates to legaltech as well:

Agentic AI in Legaltech: Proceed with Supervision! — from directory.lawnext.com by Ken Crutchfield
Semi-Autonomous agents can transform work if leaders maintain oversight

The term autonomous agents should raise some concern. I believe semi-autonomous agents is a better term. Do we really want fully autonomous agents that learn and interact independently to find ways to accomplish tasks?

We live in a world full of cybersecurity risks. Bad actors will think of ways to use agents. Even well-intentioned systems could mishandle a task without proper guardrails.

Legal professionals will want to thoughtfully equip their agent technology with controlled access to the right services. Agents must be supervised, and training must be required for those using or benefiting from agents. Legal professionals will also want to expand the scope of AI Governance to include the oversight of agents.

Agentic AI will require supervision. Human review of Generative AI output is essential. Stating the obvious may be necessary, especially with agents. Controls, human review, and human monitoring must be part of the design and the requirements for any project. Leadership should not leave this to the IT department alone.

 

15 Quick (and Mighty) Retrieval Practices — from edutopia.org by Daniel Leonard
From concept maps to flash cards to Pictionary, these activities help students reflect on—and remember—what they’ve learned.

But to genuinely commit information to long-term memory, there’s no replacement for active retrieval—the effortful practice of recalling information from memory, unaided by external sources like notes or the textbook. “Studying this way is mentally difficult,” Willingham acknowledged, “but it’s really, really good for memory.”

From low-stakes quizzes to review games to flash cards, there are a variety of effective retrieval practices that teachers can implement in class or recommend that students try at home. Drawing from a wide range of research, we compiled this list of 15 actionable retrieval practices.


And speaking of cognitive science, also see:

‘Cognitive Science,’ All the Rage in British Schools, Fails to Register in U.S. — from the74million.org by Greg Toppo
Educators blame this ‘reverse Beatles effect’ on America’s decentralized system and grad schools that are often hostile to research.

When Zach Groshell zoomed in as a guest on a longstanding British education podcast last March, a co-host began the interview by telling listeners he was “very well-known over in the U.S.”

Groshell, a former Seattle-area fourth-grade teacher, had to laugh: “Nobody knows me here in the U.S.,” he said in an interview.

But in Britain, lots of teachers know his name. An in-demand speaker at education conferences, he flies to London “as frequently as I can” to discuss Just Tell Them, his 2024 book on explicit instruction. Over the past year, Groshell has appeared virtually about once a month and has made two personal appearances at events across England.

The reason? A discipline known as cognitive science. Born in the U.S., it relies on decades of research on how kids learn to guide teachers in the classroom, and is at the root of several effective reforms, including the Science of Reading.

 

AI & Schools: 4 Ways Artificial Intelligence Can Help Students — from the74million.org by W. Ian O’Byrne
AI creates potential for more personalized learning

I am a literacy educator and researcher, and here are four ways I believe these kinds of systems can be used to help students learn.

  1. Differentiated instruction
  2. Intelligent textbooks
  3. Improved assessment
  4. Personalized learning


5 Skills Kids (and Adults) Need in an AI World — from oreilly.com by Raffi Krikorian
Hint: Coding Isn’t One of Them

Five Essential Skills Kids Need (More than Coding)
I’m not saying we shouldn’t teach kids to code. It’s a useful skill. But these are the five true foundations that will serve them regardless of how technology evolves.

  1. Loving the journey, not just the destination
  2. Being a question-asker, not just an answer-getter
  3. Trying, failing, and trying differently
  4. Seeing the whole picture
  5. Walking in others’ shoes

The AI moment is now: Are teachers and students ready? — from iblnews.org

Day of AI Australia hosted a panel discussion on 20 May, 2025. Hosted by Dr Sebastian Sequoiah-Grayson (Senior Lecturer in the School of Computer Science and Engineering, UNSW Sydney) with panel members Katie Ford (Industry Executive – Higher Education at Microsoft), Tamara Templeton (Primary School Teacher, Townsville), Sarina Wilson (Teaching and Learning Coordinator – Emerging Technology at NSW Department of Education) and Professor Didar Zowghi (Senior Principal Research Scientist at CSIRO’s Data61).


Teachers using AI tools more regularly, survey finds — from iblnews.org

As many students face criticism and punishment for using artificial intelligence tools like ChatGPT for assignments, new reporting shows that many instructors are increasingly using those same programs.


Addendum on 5/28/25:

A Museum of Real Use: The Field Guide to Effective AI Use — from mikekentz.substack.com by Mike Kentz
Six Educators Annotate Their Real AI Use—and a Method Emerges for Benchmarking the Chats

Our next challenge is to self-analyze and develop meaningful benchmarks for AI use across contexts. This research exhibit aims to take the first major step in that direction.

With the right approach, a transcript becomes something else:

  • A window into student decision-making
  • A record of how understanding evolves
  • A conversation that can be interpreted and assessed
  • An opportunity to evaluate content understanding

This week, I’m excited to share something that brings that idea into practice.

Over time, I imagine a future where annotated transcripts are collected and curated. Schools and universities could draw from a shared library of real examples—not polished templates, but genuine conversations that show process, reflection, and revision. These transcripts would live not as static samples but as evolving benchmarks.

This Field Guide is the first move in that direction.


 

‘What I learned when students walked out of my AI class’ — from timeshighereducation.com by Chris Hogg
Chris Hogg found the question of using AI to create art troubled his students deeply. Here’s how the moment led to deeper understanding for both student and educator

Teaching AI can be as thrilling as it is challenging. This became clear one day when three students walked out of my class, visibly upset. They later explained their frustration: after spending years learning their creative skills, they were disheartened to see AI effortlessly outperform them in the blink of an eye.

This moment stuck with me – not because it was unexpected, but because it encapsulates the paradoxical relationship we all seem to have with AI. As both an educator and a creative, I find myself asking: how do we engage with this powerful tool without losing ourselves in the process? This is the story of how I turned moments of resistance into opportunities for deeper understanding.


In the AI era, how do we battle cognitive laziness in students? — from timeshighereducation.com by Sean McMinn
With the latest AI technology now able to handle complex problem-solving processes, will students risk losing their own cognitive engagement? Metacognitive scaffolding could be the answer, writes Sean McMinn

The concern about cognitive laziness seems to be backed by Anthropic’s report that students use AI tools like Claude primarily for creating (39.8 per cent) and analysing (30.2 per cent) tasks, both considered higher-order cognitive functions according to Bloom’s Taxonomy. While these tasks align well with advanced educational objectives, they also pose a risk: students may increasingly delegate critical thinking and complex cognitive processes directly to AI, risking a reduction in their own cognitive engagement and skill development.


Make Instructional Design Fun Again with AI Agents — from drphilippahardman.substack.com by Dr. Philippa Hardman
A special edition practical guide to selecting & building AI agents for instructional design and L&D

Exactly how we do this has been less clear, but — fuelled by the rise of so-called “Agentic AI” — more and more instructional designers ask me: “What exactly can I delegate to AI agents, and how do I start?”

In this week’s post, I share my thoughts on exactly what instructional design tasks can be delegated to AI agents, and provide a step-by-step approach to building and testing your first AI agent.

Here’s a sneak peek…


AI Personality Matters: Why Claude Doesn’t Give Unsolicited Advice (And Why You Should Care) — from mikekentz.substack.com by Mike Kentz
First in a four-part series exploring the subtle yet profound differences between AI systems and their impact on human cognition

After providing Claude with several prompts of context about my creative writing project, I requested feedback on one of my novel chapters. The AI provided thoughtful analysis with pros and cons, as expected. But then I noticed what wasn’t there: the customary offer to rewrite my chapter.

Without Claude’s prompting, I found myself in an unexpected moment of metacognition. When faced with improvement suggestions but no offer to implement them, I had to consciously ask myself: “Do I actually want AI to rewrite this section?” The answer surprised me – no, I wanted to revise it myself, incorporating the insights while maintaining my voice and process.

The contrast was striking. With ChatGPT, accepting its offer to rewrite felt like a passive, almost innocent act – as if I were just saying “yes” to a helpful assistant. But with Claude, requesting a rewrite required deliberate action. Typing out the request felt like a more conscious surrender of creative agency.


Also re: metacognition and AI, see:

In the AI era, how do we battle cognitive laziness in students? — from timeshighereducation.com by Sean McMinn
With the latest AI technology now able to handle complex problem-solving processes, will students risk losing their own cognitive engagement? Metacognitive scaffolding could be the answer, writes Sean McMinn

By prompting students to articulate their cognitive processes, such tools reinforce the internalisation of self-regulated learning strategies essential for navigating AI-augmented environments.


EDUCAUSE Panel Highlights Practical Uses for AI in Higher Ed — from govtech.com by Abby Sourwine
A webinar this week featuring panelists from the education, private and nonprofit sectors attested to how institutions are applying generative artificial intelligence to advising, admissions, research and IT.

Many higher education leaders have expressed hope about the potential of artificial intelligence but uncertainty about where to implement it safely and effectively. According to a webinar Tuesday hosted by EDUCAUSE, “Unlocking AI’s Potential in Higher Education,” their answer may be “almost everywhere.”

Panelists at the event, including Kaskaskia College CIO George Kriss, Canyon GBS founder and CEO Joe Licata and Austin Laird, a senior program officer at the Gates Foundation, said generative AI can help colleges and universities meet increasing demands for personalization, timely communication and human-to-human connections throughout an institution, from advising to research to IT support.


Partly Cloudy with a Chance of Chatbots — from derekbruff.org by Derek Bruff

Here are the predictions, our votes, and some commentary:

  • “By 2028, at least half of large universities will embed an AI ‘copilot’ inside their LMS that can draft content, quizzes, and rubrics on demand.” The group leaned toward yes on this one, in part because it was easy to see LMS vendors building this feature in as a default.
  • “Discipline-specific ‘digital tutors’ (LLM chatbots trained on course materials) will handle at least 30% of routine student questions in gateway courses.” We leaned toward yes on this one, too, which is why some of us are exploring these tools today. We would like to know how to use them well (or avoid their use) when they are commonly available.
  • “Adaptive e-texts whose examples, difficulty, and media personalize in real time via AI will outsell static digital textbooks in the U.S. market.” We leaned toward no on this one, in part because the textbook market and what students want from textbooks have historically been slow to change. I remember offering my students a digital version of my statistics textbook maybe 6-7 years ago, and most students opted to print the whole thing out on paper like it was 1983.
  • “AI text detectors will be largely abandoned as unreliable, shifting assessment design toward oral, studio, or project-based ‘AI-resilient’ tasks.” We leaned toward yes on this. I have some concerns about oral assessments (they certainly privilege some students over others), but more authentic assignments seem like what higher ed needs in the face of AI. Ted Underwood recently suggested a version of this: “projects that attempt genuinely new things, which remain hard even with AI assistance.” See his post and the replies for some good discussion on this idea.
  • “AI will produce multimodal accessibility layers (live translation, alt-text, sign-language avatars) for most lecture videos without human editing.” We leaned toward yes on this one, too. This seems like another case where something will be provided by default, although my podcast transcripts are AI-generated and still need editing from me, so we’re not there quite yet.

‘We Have to Really Rethink the Purpose of Education’
The Ezra Klein Show

Description: I honestly don’t know how I should be educating my kids. A.I. has raised a lot of questions for schools. Teachers have had to adapt to the most ingenious cheating technology ever devised. But for me, the deeper question is: What should schools be teaching at all? A.I. is going to make the future look very different. How do you prepare kids for a world you can’t predict?

And if we can offload more and more tasks to generative A.I., what’s left for the human mind to do?

Rebecca Winthrop is the director of the Center for Universal Education at the Brookings Institution. She is also an author, with Jenny Anderson, of “The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better.” We discuss how A.I. is transforming what it means to work and be educated, and how our use of A.I. could revive — or undermine — American schools.


 

2025 EDUCAUSE Horizon Report | Teaching and Learning Edition — from library.educause.edu

Higher education is in a period of massive transformation and uncertainty. Not only are current events impacting how institutions operate, but technological advancements—particularly in AI and virtual reality—are reshaping how students engage with content, how cognition is understood, and how learning itself is documented and valued.

Our newly released 2025 EDUCAUSE Horizon Report | Teaching and Learning Edition captures the spirit of this transformation and how you can respond with confidence through the lens of emerging trends, key technologies and practices, and scenario-based foresight.

#teachingandlearning #highereducation #learningecosystems #learning #futurism #foresight #trends #emergingtechnologies #AI #VR #gamechangingenvironment #colleges #universities #communitycolleges #faculty #staff #IT

 

…which links to:

Duolingo will replace contract workers with AI — from theverge.com by Jay Peters
The company is going to be ‘AI-first,’ says its CEO.

Duolingo will “gradually stop using contractors to do work that AI can handle,” according to an all-hands email sent by cofounder and CEO Luis von Ahn announcing that the company will be “AI-first.” The email was posted on Duolingo’s LinkedIn account.

According to von Ahn, being “AI-first” means the company will “need to rethink much of how we work” and that “making minor tweaks to systems designed for humans won’t get us there.” As part of the shift, the company will roll out “a few constructive constraints,” including the changes to how it works with contractors, looking for AI use in hiring and in performance reviews, and that “headcount will only be given if a team cannot automate more of their work.”


Relevant links:

Something strange, and potentially alarming, is happening to the job market for young, educated workers.

According to the New York Federal Reserve, labor conditions for recent college graduates have “deteriorated noticeably” in the past few months, and the unemployment rate now stands at an unusually high 5.8 percent. Even newly minted M.B.A.s from elite programs are struggling to find work. Meanwhile, law-school applications are surging—an ominous echo of when young people used graduate school to bunker down during the great financial crisis.

What’s going on? I see three plausible explanations, and each might be a little bit true.


It’s Time To Get Concerned As More Companies Replace Workers With AI — from forbes.com by Jack Kelly

The new workplace trend is not employee friendly. Artificial intelligence and automation technologies are advancing at blazing speed. A growing number of companies are using AI to streamline operations, cut costs, and boost productivity. Consequently, human workers are facing layoffs, replaced by AI. Like it or not, companies need to make tough decisions, including layoffs, to remain competitive.

Corporations including Klarna, UPS, Duolingo, Intuit and Cisco are replacing laid-off workers with AI and automation. While these technologies enhance productivity, they raise serious concerns about future job security. Many workers are left wondering whether their jobs will be affected.


The future of career navigation — from medium.com by Sami Tatar

  1. Career navigation market overview

Key takeaway:
Career navigation has remained largely unchanged for decades, relying on personal networks and static job boards. The advent of AI is changing this, offering personalised career pathways, better job matching, democratised job application support, democratised access to career advice/coaching, and tailored skill development to help you get to where you need to be. Hundreds of millions of people start new jobs every year; this transformation opens up a multi-billion-dollar opportunity for innovation in the global career navigation market.

A.4 How will AI disrupt this segment?
Personalised recommendations: AI can consume a vast amount of information (skills, education, career history, even YouTube history and X/Twitter feeds), standardise this data at scale, and then use data models to match candidate characteristics to relevant careers and jobs. In theory, solutions could then go layers deeper, helping you position yourself for those future roles. Currently based in Amsterdam, working in Strategy at Uber, and hoping to move into a Product role? Here are X, Y, Z specific things YOU can do in your role today to align yourself perfectly. E.g. find opportunities to manage cross-functional projects in your current remit, reach out to Joe Bloggs, also at Uber in Amsterdam, who did Strategy and moved to Product, etc.
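The matching idea described above can be sketched as a simple skill-overlap score. This is a minimal illustration, not any vendor's actual model; the candidate profile, role skill sets, and Jaccard scoring are all hypothetical choices for the example:

```python
# Minimal sketch of skill-based candidate-to-role matching.
# Profiles and the Jaccard scoring function are illustrative assumptions,
# not a real career-navigation product's model.

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two skill sets, from 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def rank_roles(candidate_skills: set[str],
               roles: dict[str, set[str]]) -> list[tuple[str, float]]:
    """Rank target roles by how closely their skill profile matches the candidate."""
    scores = [(role, jaccard(candidate_skills, skills))
              for role, skills in roles.items()]
    return sorted(scores, key=lambda rs: rs[1], reverse=True)

# Hypothetical learner: a strategy person eyeing a product role.
candidate = {"strategy", "stakeholder management", "sql", "roadmapping"}
roles = {
    "Product Manager": {"roadmapping", "stakeholder management", "user research"},
    "Data Analyst": {"sql", "python", "dashboards"},
}
best_role, score = rank_roles(candidate, roles)[0]
```

Real systems would go far beyond set overlap (embeddings, career-trajectory models), but the shape of the problem — standardise profiles, score them against roles, rank — is the same.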


Tales from the Front – What Teachers Are Telling Me at AI Workshops — from aliciabankhofer.substack.com by Alicia Bankhofer
Real conversations, real concerns: What teachers are saying about AI

“Do I really have to use AI?”

No matter the school, no matter the location, when I deliver an AI workshop to a group of teachers, there are always at least a few colleagues thinking (and sometimes voicing), “Do I really need to use AI?”

Nearly three years after ChatGPT 3.5 landed in our lives and disrupted workflows in ways we’re still unpacking, most schools are swiftly catching up. Training sessions, like the ones I lead, are springing up everywhere, with principals and administrators trying to answer the same questions: Which tools should we use? How do we use them responsibly? How do we design learning in this new landscape?

But here’s what surprises me most: despite all the advances in AI technology, the questions and concerns from teachers remain strikingly consistent.

In this article, I want to pull back the curtain on those conversations. These concerns aren’t signs of reluctance – they reflect sincere feelings. And they deserve thoughtful, honest answers.


Welcome To AI Agent World! (Everything you need to know about the AI Agent market.) — from joshbersin.com by Josh Bersin

This week, in advance of major announcements from us and other vendors, I give you a good overview of the AI Agent market, and discuss the new role of AI governance platforms, AI agent development tools, AI agent vendors, and how AI agents will actually manifest and redefine what we call an “application.”

I discuss ServiceNow, Microsoft, SAP, Workday, Paradox, Maki People, and other vendors. My goal today is to “demystify” this space and explain the market, the trends, and why and how your IT department is going to be building a lot of the agents you need. And prepare for our announcements next week!


DeepSeek Unveils Prover V2 — from theaivalley.com

DeepSeek has quietly launched Prover V2, an open-source model built to solve math problems using the Lean 4 proof assistant, which ensures every step of a proof is rigorously verified.

What’s impressive about it?

  • Massive scale: Based on DeepSeek-V3 with 671B parameters using a mixture-of-experts (MoE) architecture, which activates only parts of the model at a time to reduce compute costs.
  • Theorem solving: Uses long context windows (32K+ tokens) to generate detailed, step-by-step formal proofs for a wide range of math problems — from basic algebra to advanced calculus theorems.
  • Research grade: Assists mathematicians in testing new theorems automatically and helps students understand formal logic by generating both Lean 4 code and readable explanations.
  • New benchmark: Introduces ProverBench, a new 325-question benchmark set featuring problems from recent AIME exams and curated academic sources to evaluate mathematical reasoning.
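For context, the kind of machine-checked statement Lean 4 verifies looks like the following. This is a trivial hand-written example, not Prover V2 output:

```lean
-- A tiny Lean 4 theorem: addition on natural numbers is commutative.
-- The kernel checks every step; an unproved claim simply fails to compile.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Prover V2's contribution is generating proofs like this (and far harder ones) automatically, with the readable explanation alongside the formal code.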

Artificial Intelligence: Lessons Learned from a Graduate-Level Final Exam — from er.educause.edu by Craig Westling and Manish K. Mishra

The need for deep student engagement became clear at Dartmouth Geisel School of Medicine when a potential academic-integrity issue revealed gaps in its initial approach to artificial intelligence use in the classroom, leading to significant revisions to ensure equitable learning and assessment.


Deep Research with AI: 9 Ways to Get Started — from wondertools.substack.com by Jeremy Caplan
Practical strategies for thorough, citation-rich AI research


From George Siemens “SAIL: Transmutation, Assessment, Robots e-newsletter on 5/2/25

All indications are that AI, even if it stops advancing, has the capacity to dramatically change knowledge work. Knowing things matters less than being able to navigate and make sense of complex environments. Put another way, sensemaking, meaningmaking, and wayfinding (with their yet to be defined subelements) will be the foundation for being knowledgeable going forward.

That will require being able to personalize learning to each individual learner so that who they are (not what our content is) forms the pedagogical entry point to learning. (DSC: And I would add WHAT THEY WANT to ACHIEVE.) LLMs are particularly good at transmutation. Want to explain AI to a farmer? A sentence or two in a system prompt achieves that. Know that a learner has ADHD? A few small prompt changes and it’s reflected in the way the LLM engages with learning. Talk like a pirate. Speak in the language of Shakespeare. Language changes. All a matter of a small meta comment sent to the LLM. I’m convinced that this capability to change, or transmute, information will become a central part of how LLMs and AI are adopted in education.
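Siemens's "transmutation" point is mechanically simple: the same lesson content, adapted per learner by a small system-prompt change. Here is a hedged sketch of how that composition might look. The profile fields and prompt wording are illustrative, and the `messages` list is just the generic chat-completion format, with no specific vendor API assumed:

```python
# Sketch: one lesson, "transmuted" per learner via the system prompt.
# Profile fields (background, adhd, goal) and wording are hypothetical.

def build_messages(lesson: str, learner_profile: dict) -> list[dict]:
    """Compose a chat-format request whose system prompt adapts to the learner."""
    hints = []
    if learner_profile.get("background"):
        hints.append(f"Explain using analogies from {learner_profile['background']}.")
    if learner_profile.get("adhd"):
        hints.append("Use short paragraphs and frequent comprehension checkpoints.")
    if learner_profile.get("goal"):
        hints.append(f"Connect every point to the learner's goal: {learner_profile['goal']}.")
    system = "You are a tutor. " + " ".join(hints)
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": lesson},
    ]

msgs = build_messages(
    "Explain how large language models work.",
    {"background": "farming", "goal": "automating crop reports"},
)
```

The lesson text never changes; only the meta comment does, which is exactly why this kind of personalization is cheap enough to do per learner.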

Speaking of Duolingo – it took them 12 years to develop 100 courses. In the last year, they developed an additional 148. AI is an accelerant with an impact in education that is hard to overstate. “Instead of taking years to build a single course with humans the company now builds a base course and uses AI to quickly customize it for dozens of different languages.”


FutureHouse Platform: Superintelligent AI Agents for Scientific Discovery — from futurehouse.org by Michael Skarlinski, Tyler Nadolski, James Braza, Remo Storni, Mayk Caldas, Ludovico Mitchener, Michaela Hinks, Andrew White, & Sam Rodriques

FutureHouse is launching our platform, bringing the first publicly available superintelligent scientific agents to scientists everywhere via a web interface and API. Try it out for free at https://platform.futurehouse.org.

 
© 2025 | Daniel Christian