I just completed nearly 60,000 miles of travel across Europe, Asia, and the Middle East, meeting with hundreds of companies to discuss their AI strategies. While every company’s maturity is different, one thing is clear: AI as a business tool has arrived. It’s real, and the use cases are growing.
A new survey by Wharton shows that 46% of business leaders use Gen AI daily and 80% use it weekly. And among these users, 72% are measuring ROI and 74% report a positive return. HR, by the way, is the #3 department in use cases, only slightly behind IT and Finance.
What are companies getting out of all this? Productivity. The #1 use case, by far, is what we call “stage 1” usage – individual productivity.
From DSC: Josh writes: “Many of our large clients are now implementing AI-native learning systems and seeing 30-40% reduction in staff with vast improvements in workforce enablement.”
While I get the appeal (and ROI) from management’s and shareholders’ perspective, this represents a growing concern for employment and people’s ability to earn a living.
And while I highly respect Josh and his work through the years, I disagree that we’re over the problems with AI and how people are using it:
Two years ago the NYT was trying to frighten us with stories of AI acting as a romance partner. Well, those stories are over, and thanks to a trillion dollars (literally) of capital investment in infrastructure, engineering, and power plants, this stuff is reasonably safe.
Those stories are just beginning…they’re not close to being over.
So let’s imagine a world where there’s no separation between learning and assessment: it’s ongoing. There’s always assessment, always learning, and they’re tied together. Then we can ask: what is the role of the human in that world? What is it that AI can’t do?
…
Imagine something like that in higher ed. There could be tutoring or skill-based work happening outside of class, and then relationship-based work happening inside of class, whether online, in person, or some hybrid mix.
The aspects of learning that don’t require relational context could be handled by AI, while the human parts remain intact. For example, I teach strategy and strategic management. I teach people how to talk with one another about the operation and function of a business. I can help students learn to be open to new ideas, recognize when someone pushes back out of fear of losing power, or draw from my own experience in leading a business and making future-oriented decisions.
But the technical parts, such as frameworks like SWOT analysis and the mechanics of comparing alternative viewpoints in a boardroom, could be managed through simulations or reports that receive immediate feedback from AI. The relational aspects, the human mentoring, would still happen with me as their instructor.
My take is this: in all of the anxiety lies a crucial and long-overdue opportunity to deliver better learning experiences. Precisely because Atlas perceives the same context in the same moment as you, it can transform learning into a process aligned with core neuroscientific principles—including active retrieval, guided attention, adaptive feedback, and context-dependent memory formation.
Perhaps in Atlas we have a browser that for the first time isn’t just a portal to information, but one which can become a co-participant in active cognitive engagement—enabling iterative practice, reflective thinking, and real-time scaffolding as you move through challenges and ideas online.
With this in mind, I put together 10 use cases for Atlas for you to try for yourself.
…
6. Retrieval Practice
What: Pulling information from memory drives retention better than re-reading.
Why: Practice testing delivers medium-to-large effects (Adesope et al., 2017).
Try: Open a document with your previous notes. Ask Atlas for a mixed activity set: “Quiz me on the Krebs cycle—give me a near-miss, high-stretch MCQ, then a fill-in-the-blank, then ask me to explain it to a teen.” Atlas uses its browser memory to generate targeted questions from your actual study materials, supporting spaced, varied retrieval.
From DSC: A quick comment. I appreciate these ideas and approaches from Katarzyna and Rita. I do think that someone is going to want to be sure that the AI models/platforms/tools are given up-to-date information and updated instructions — i.e., any new procedures, steps to take, etc. Perhaps I’m missing the boat here, but an internal AI platform is going to need that kind of current information and instruction to be useful.
While over 80% of respondents in the 2025 AI in Education Report have already used AI for school, we believe there are significant opportunities to design AI that can better serve each of their needs and broaden access to the latest innovation.
That’s why today [10/15/25], we’re announcing AI-powered experiences built for teaching and learning at no additional cost, new integrations in Microsoft 365 apps and Learning Management Systems, and an academic offering for Microsoft 365 Copilot.
Introducing AI-powered teaching and learning
Empowering educators with Teach
We’re introducing Teach to help streamline class prep and adapt AI to support educators’ teaching expertise with intuitive and customizable features. In one place, educators can easily access AI-powered teaching tools to create lesson plans, draft materials like quizzes and rubrics, and quickly make modifications to language, reading level, length, difficulty, alignment to relevant standards, and more.
From 70/20/10 to 90/10 — from drphilippahardman.substack.com by Dr Philippa Hardman A new L&D operating system for the AI Era?
This week I want to share a hypothesis I’m increasingly convinced of: that we are entering an age of the 90/10 model of L&D.
90/10 is a model where roughly 90% of “training” is delivered by AI coaches as daily performance support, and 10% of training is dedicated to developing complex and critical skills via high-touch, human-led learning experiences.
Proponents of 90/10 argue that the model isn’t about learning less, but about learning smarter by defining all jobs to be done as one of the following:
Delegate (the dead skills): Tasks that can be offloaded to AI.
Co-Create (the 90%): Tasks that well-defined AI agents can augment, helping humans perform them optimally.
Facilitate (the 10%): Tasks which require high-touch, human-led learning to develop.
So if AI at work is now both real and material, the natural question for L&D is: how do we design for it? The short answer is to stop treating learning as an event and start treating it as a system.
My daughter’s generation expects to learn with AI, not pretend it doesn’t exist, because they know employers expect AI fluency and because AI will be ever-present in their adult lives.
Gamified AI Learning Tools personalize education by adapting the difficulty and content to each child’s pace, fostering confidence and mastery.
Engaging & Fun: Gamified elements like quests, badges, and stories keep children motivated and enthusiastic.
Safe & Inclusive: Attention to equity, privacy, and cultural context ensures responsible and accessible learning.
How to test GenAI’s impact on learning — from timeshighereducation.com by Thibault Schrepel Rather than speculate on GenAI’s promise or peril, Thibault Schrepel suggests simple teaching experiments to uncover its actual effects
Generative AI in higher education is a source of both fear and hype. Some predict the end of memory, others a revolution in personalised learning. My two-year classroom experiment points to a more modest reality: Artificial intelligence (AI) changes some skills, leaves others untouched and forces us to rethink the balance.
This indicates that the way forward is to test, not speculate. My results may not match yours, and that is precisely the point. Here are simple activities any teacher can use to see what AI really does in their own classroom.
4. Turn AI into a Socratic partner
Instead of being the sole interrogator, let AI play the role of tutor, client or judge. Have students use AI to question them, simulate cross-examination or push back on weak arguments. New “study modes” now built into several foundation models make this kind of tutoring easy to set up. Professors with more technical skills can go further: design their own GPTs or fine-tuned models trained on course content and let students interact directly with them. The point is the practice it creates. Students learn that questioning a machine is part of learning to think like a professional.
Assessment tasks that support human skills — from timeshighereducation.com by Amir Ghapanchi and Afrooz Purarjomandlangrudi Assignments that focus on exploration, analysis and authenticity offer a road map for university assessment that incorporates AI while retaining its rigour and human elements
Rethinking traditional formats
1. From essay to exploration
When ChatGPT can generate competent academic essays in seconds, the traditional format’s dominance as an assessment task looks less secure. The future lies in moving from essays as knowledge reproduction to assessments that emphasise exploration and curation. Instead of asking students to write about a topic, challenge them to use artificial intelligence to explore multiple perspectives, compare outputs and critically evaluate what emerges.
Example: A management student asks an AI tool to generate several risk plans, then critiques the AI’s assumptions and identifies missing risks.
What your students are thinking about artificial intelligence — from timeshighereducation.com by Florencia Moore and Agostina Arbia GenAI has been quickly adopted by students, but the consequences of using it as a shortcut could be grave. A study into how students think about and use GenAI offers insights into how teaching might adapt
However, when asked how AI negatively impacts their academic development, 29 per cent noted a “weakening or deterioration of intellectual abilities due to AI overuse”. The main concern cited was the loss of “mental exercise” and soft skills such as writing, creativity and reasoning.
The boundary between the human and the artificial does not seem so easy to draw, but as the poet Antonio Machado once said: “Traveller, there is no path; the path is made by walking.”
There is nothing new about students trying to get one over on their teachers — there are probably cuneiform tablets about it — but when students use AI to generate what Shannon Vallor, philosopher of technology at the University of Edinburgh, calls a “truth-shaped word collage,” they are not only gaslighting the people trying to teach them, they are gaslighting themselves. In the words of Tulane professor Stan Oklobdzija, asking a computer to write an essay for you is the equivalent of “going to the gym and having robots lift the weights for you.”
As part of the collaboration, Deloitte will establish a Claude Center of Excellence with trained specialists who will develop implementation frameworks, share leading practices across deployments, and provide ongoing technical support to create the systems needed to move AI pilots to production at scale. The collaboration represents Anthropic’s largest enterprise AI deployment to date, available to more than 470,000 Deloitte people.
Deloitte and Anthropic are co-creating a formal certification program to train and certify 15,000 of Deloitte’s professionals on Claude. These practitioners will help support Claude implementations across Deloitte’s network and Deloitte’s internal AI transformation efforts.
Everboarding flips this model. Rather than ending after orientation, everboarding provides ongoing, role-specific training and support throughout the employee journey. It adapts to evolving responsibilities, reinforces standards, and helps workers grow into new roles. For high-turnover, high-pressure environments like retail, it’s a practical solution to a persistent challenge.
AI agents will be instrumental in the success of everboarding initiatives; they can provide a much more tailored training and development process for each individual employee, keeping track of which training modules may need to be completed, or where staff members need or want to develop further. This personalisation helps staff feel more satisfied in their current role and guides them on the right path to progress in their individual careers.
Digital frontline apps are also ideal for everboarding. They offer bite-sized training that staff can complete anytime, whether during quiet moments on shift or in real time on the job, all accessible from their mobile devices.
As I and many others have pointed out in recent months, LLMs are great assistants but very ineffective teachers. Despite the rise of “educational LLMs” with specialised modes (e.g. Anthropic’s Learning Mode, OpenAI’s Study Mode, Google’s Guided Learning), AI typically eliminates the productive struggle, open exploration and natural dialogue that are fundamental to learning.
In this week’s blog post, I deep-dive into what the research found and share the six key findings — including reflections on how well TeachLM performs on instructional design.
AI as an assessment tool represents an existential threat to education because no matter how you try to establish guardrails or best practices around how it is employed, using the technology in place of an educator ultimately cedes human judgment to a machine-based process. It also devalues the entire enterprise of education and creates a situation where the only way universities can add value to education is by further eliminating costly human labor.
For me, the purpose of higher education is about human development, critical thinking, and the transformative experience of having your ideas taken seriously by another human being. That’s not something we should be in a rush to outsource to a machine.
ChatGPT: the world’s most influential teacher — from drphilippahardman.substack.com by Dr. Philippa Hardman; emphasis DSC New research shows that millions of us are “learning with AI” every week: what does this mean for how (and how well) humans learn?
This week, an important piece of research landed that confirms the gravity of AI’s role in the learning process. The TLDR is that learning is now a mainstream use case for ChatGPT; around 10.2% of all ChatGPT messages (that’s ~2BN messages sent by over 700 million users per week) are requests for help with learning.
The research shows that about 10.2% of all messages are tutoring/teaching, and within the “Practical Guidance” category, tutoring is 36%. “Asking” interactions are growing faster than “Doing” and are rated higher quality by users. Younger people contribute a huge share of messages, and growth is fastest in low- and middle-income countries (How People Use ChatGPT, 2025).
If AI is already acting as a global tutor, the question isn’t “will people learn with AI?”—they already are. The real question we need to ask is: what does great learning actually look like, and how should AI evolve to support it? That’s where decades of learning science help us separate “feels like learning” from “actually gaining new knowledge and skills”.
Welcome back to school! For most of us (myself included), the whirlwind of lesson prep, meetings, professional development—and of course, teaching—is here. Keep reading for my favorite back-to-school activities to engage students with retrieval practice during the first week of class.
…
It may (or may not) surprise you to know that my first day of class is full of retrieval practice. Even if you haven’t introduced content yet, use retrieval practice the first day or week of class. Here’s how, with quick activities you can adapt for K–12 students, higher ed courses, and all content areas:
Build a welcoming class culture with mnemonics to remember student names (spoiler alert: trying to memorize names with students’ LMS photos doesn’t work)
I’ll start, as we academics so love to do, with a little bit of theory — specifically, four core principles that can help shape your planning for the first day of your course.
Next, I’ll cover the logistics of a successful first day, including managing the space and technology as well as getting to know your students.
To show you how to put the principles and the logistics into practice, I will provide examples of what a good set of first-day activities might look like in four disciplines.
I’ll finish with some suggestions for how to support the good work you have done on the first day with some follow-up activities.
7 Pieces of Advice for New Teachers — from edutopia.org by Brienne May Focus on relationships with students and colleagues to make a good start to the year—and remember to ask for what you need.
Too often, teacher preparation programs are rich in theory but light on practical guidance. After working hard through my undergraduate classes, completing student teaching, and spending countless hours laminating and cutting, I still found myself on the first day of school, standing in front of a room full of expectant faces with eager eyes, and realized I had no idea what to do next. I didn’t know what to say to students in that moment, let alone how to survive the following 180 days. Twelve years later, I have collected a trove of advice I wish I could have shared with that fresh-faced teacher.
Each fall, one of the first routines I introduce is our classroom job board. It’s more than a list of tasks—it helps students feel that they belong and have real roles in our shared space. Over the years, I’ve expanded beyond classic jobs like Line Leader and Pencil Sharpener to include creative roles with quirky titles that engage and resonate with students.
Here are the jobs that have helped my students feel seen, trusted, and excited to contribute.
Guiding Students to Overcome Learned Helplessness— from edutopia.org by Michelle Singh New teachers can create an environment where students feel supported and understand that mistakes are part of the learning process.
Creating a Kid-Led Hall of Fame for Books — from edutopia.org by Eric Hall Allowing elementary students to nominate and vote for their favorite books of the year can create a culture of celebration in the classroom.
When I started teaching, I remembered that conversation with my elementary school librarian. I thought, “Why should adults have all the fun?” I wanted my students to experience the excitement of recognizing books they thought were the best. And just like that, the Hallbery Awards were born and continued twice a year for over 15 years. (Why Hallbery? Because my last name is Hall.)
Today, we’re taking a look at the three primary forms of assessments—diagnostic, formative, and summative—with the goal of not only differentiating between them but also better understanding the purpose and potential power of each.
At their core, each of the three primary assessment types serves a distinct purpose. Diagnostic assessments are used before instruction to help identify where students are in their comprehension of academic content. Formative assessments are used while content is being taught to understand what students are picking up, to guide their learning, and to help teachers determine what to focus on moving forward. Summative assessments are used after instruction to evaluate the outcomes of student learning: what, or how much, they ultimately learned.
The “New Indiana Diploma” — which was signed into law in April and goes into effect for all incoming first-year students this academic year — gives students the option to earn different “seals” in addition to a basic diploma, depending on whether they plan to attend college, go straight to work or serve in the military. Jenner describes it as an effort to tailor the diploma to students’ interests, expose students to careers and recognize different forms of student achievement.
Students in one Arizona district will take fewer standardized tests this school year, the result of an educator-led push to devote less time to testing.
The Tucson Education Association, backed by the school board and several parents, reached an agreement with the Tucson Unified school system in May to reduce the number of district-mandated standardized assessments students take annually starting in the 2025-26 academic year.
Just 25 percent of educators agreed that state-mandated tests provide useful information for the teachers in their school, according to a 2023 EdWeek Research Center survey of teachers, principals, and district leaders.
30 Ways to Bring Calm to a Noisy High School Classroom — from edutopia.org by Anne Noyes Saini From ‘finding the lull’ to the magic of a dramatic whisper, these teacher-tested strategies quickly get high school students focused and back on track.
Approaching Experiential Learning as a Continuum — from edutopia.org by Bill Manchester Teachers can consider 12 characteristics of experiential learning to make lessons more or less active for students.
AI and Higher Ed: An Impending Collapse — from insidehighered.com by Robert Niebuhr; via George Siemens; I also think George’s excerpt (see below) gets right to the point. Universities’ rush to embrace AI will lead to an untenable outcome, Robert Niebuhr writes.
Herein lies the trap. If students learn how to use AI to complete assignments and faculty use AI to design courses, assignments, and grade student work, then what is the value of higher education? How long until people dismiss the degree as an absurdly overpriced piece of paper? How long until that trickles down and influences our economic and cultural output? Simply put, can we afford a scenario where students pretend to learn and we pretend to teach them?
This next report doesn’t look too good for traditional institutions of higher education either:
For the first time in modern history, a bachelor’s degree is no longer a reliable path to professional employment. Recent graduates face rising unemployment and widespread underemployment as structural—not cyclical—forces reshape entry-level work. This new report identifies four interlocking drivers: an AI-powered “Expertise Upheaval” eliminating many junior tasks, a post-pandemic shift to lean staffing and risk-averse hiring, AI acting as an accelerant to these changes, and a growing graduate glut. As a result, young degree holders are uniquely seeing their prospects deteriorate – even as the rest of the economy remains robust. Read the full report to explore the data behind these trends.
The disconnection between legal education and the real world starkly contrasted with what he expected law school to be. “I thought rather naively…this would be a really interesting experience…linked to lawyers and what lawyers are doing in society…Far from it. It was solidly academic, so uninteresting, and I thought it’s got to be better than this.”
These frustrations inspired his work on simulation-based education, which seeks to produce “client-ready” lawyers and professionals who reflect deeply on their future roles. Maharg recently worked as a consultant with Osgoode Professional Development at Osgoode Hall Law School to design a platform that eschews many of the assumptions about legal education to deliver practical skills with real-world scenarios.
Osgoode’s SIMPLE platform – short for “simulated professional learning environment” – integrates case management systems and simulation engines to immerse students in practical scenarios.
“It’s actually to get them thinking hard about what they do when they act as lawyers and what they will do when they become lawyers…putting it into values and an ethical framework, as well as making it highly intensively practical,” Maharg says.
AI is rapidly transforming legal practice. Today, tools handle document review and legal research at a pace unimaginable just a few years ago. As recent Canadian Lawyer reporting shows, legal AI adoption is outpacing expectations, especially among in-house teams, and is fundamentally reshaping how legal services are delivered.
Crucially, though, AI should not replace associates. Instead, it should relieve them of repetitive tasks and allow them to focus on developing judgment, client management, and strategic thinking. As I’ve previously discussed regarding the risks of banning AI in court, the future of law depends on blending technological fluency with the human skills clients value most.
The term autonomous agents should raise some concern. I believe semi-autonomous agents is a better term. Do we really want fully autonomous agents that learn and interact independently to find ways to accomplish tasks?
We live in a world full of cybersecurity risks. Bad actors will think of ways to use agents. Even well-intentioned systems could mishandle a task without proper guardrails.
Legal professionals will want to thoughtfully equip their agent technology with controlled access to the right services. Agents must be supervised, and training must be required for those using or benefiting from agents. Legal professionals will also want to expand the scope of AI Governance to include the oversight of agents.
… Agentic AI will require supervision. Human review of Generative AI output is essential. Stating the obvious may be necessary, especially with agents. Controls, human review, and human monitoring must be part of the design and the requirements for any project. Leadership should not leave this to the IT department alone.
15 Quick (and Mighty) Retrieval Practices — from edutopia.org by Daniel Leonard From concept maps to flash cards to Pictionary, these activities help students reflect on—and remember—what they’ve learned.
But to genuinely commit information to long-term memory, there’s no replacement for active retrieval—the effortful practice of recalling information from memory, unaided by external sources like notes or the textbook. “Studying this way is mentally difficult,” Willingham acknowledged, “but it’s really, really good for memory.”
…
From low-stakes quizzes to review games to flash cards, there are a variety of effective retrieval practices that teachers can implement in class or recommend that students try at home. Drawing from a wide range of research, we compiled this list of 15 actionable retrieval practices.
When Zach Groshell zoomed in as a guest on a longstanding British education podcast last March, a co-host began the interview by telling listeners he was “very well-known over in the U.S.”
Groshell, a former Seattle-area fourth-grade teacher, had to laugh: “Nobody knows me here in the U.S.,” he said in an interview.
But in Britain, lots of teachers know his name. An in-demand speaker at education conferences, he flies to London “as frequently as I can” to discuss Just Tell Them, his 2024 book on explicit instruction. Over the past year, Groshell has appeared virtually about once a month and has made two personal appearances at events across England.
The reason? A discipline known as cognitive science. Born in the U.S., it draws on decades of research into how kids learn in order to guide teachers in the classroom, and is at the root of several effective reforms, including the Science of Reading.
Five Essential Skills Kids Need (More than Coding)
I’m not saying we shouldn’t teach kids to code. It’s a useful skill. But these are the five true foundations that will serve them regardless of how technology evolves.
Day of AI Australia hosted a panel discussion on 20 May, 2025. Hosted by Dr Sebastian Sequoiah-Grayson (Senior Lecturer in the School of Computer Science and Engineering, UNSW Sydney) with panel members Katie Ford (Industry Executive – Higher Education at Microsoft), Tamara Templeton (Primary School Teacher, Townsville), Sarina Wilson (Teaching and Learning Coordinator – Emerging Technology at NSW Department of Education) and Professor Didar Zowghi (Senior Principal Research Scientist at CSIRO’s Data61).
As many students face criticism and punishment for using artificial intelligence tools like ChatGPT for assignments, new reporting shows that many instructors are increasingly using those same programs.
Our next challenge is to self-analyze and develop meaningful benchmarks for AI use across contexts. This research exhibit aims to take the first major step in that direction.
With the right approach, a transcript becomes something else:
A window into student decision-making
A record of how understanding evolves
A conversation that can be interpreted and assessed
An opportunity to evaluate content understanding
This week, I’m excited to share something that brings that idea into practice.
Over time, I imagine a future where annotated transcripts are collected and curated. Schools and universities could draw from a shared library of real examples—not polished templates, but genuine conversations that show process, reflection, and revision. These transcripts would live not as static samples but as evolving benchmarks.
This Field Guide is the first move in that direction.
‘What I learned when students walked out of my AI class’ — from timeshighereducation.com by Chris Hogg Chris Hogg found the question of using AI to create art troubled his students deeply. Here’s how the moment led to deeper understanding for both student and educator
Teaching AI can be as thrilling as it is challenging. This became clear one day when three students walked out of my class, visibly upset. They later explained their frustration: after spending years learning their creative skills, they were disheartened to see AI effortlessly outperform them in the blink of an eye.
This moment stuck with me – not because it was unexpected, but because it encapsulates the paradoxical relationship we all seem to have with AI. As both an educator and a creative, I find myself asking: how do we engage with this powerful tool without losing ourselves in the process? This is the story of how I turned moments of resistance into opportunities for deeper understanding.
In the AI era, how do we battle cognitive laziness in students? — from timeshighereducation.com by Sean McMinn With the latest AI technology now able to handle complex problem-solving processes, will students risk losing their own cognitive engagement? Metacognitive scaffolding could be the answer, writes Sean McMinn
The concern about cognitive laziness seems to be backed by Anthropic’s report that students use AI tools like Claude primarily for creating (39.8 per cent) and analysing (30.2 per cent) tasks, both considered higher-order cognitive functions according to Bloom’s Taxonomy. While these tasks align well with advanced educational objectives, they also pose a risk: students may increasingly delegate critical thinking and complex cognitive processes directly to AI, risking a reduction in their own cognitive engagement and skill development.
Make Instructional Design Fun Again with AI Agents — from drphilippahardman.substack.com by Dr. Philippa Hardman A special edition practical guide to selecting & building AI agents for instructional design and L&D
Exactly how we do this has been less clear, but — fuelled by the rise of so-called “Agentic AI” — more and more instructional designers ask me: “What exactly can I delegate to AI agents, and how do I start?”
In this week’s post, I share my thoughts on exactly what instructional design tasks can be delegated to AI agents, and provide a step-by-step approach to building and testing your first AI agent.
After providing Claude with several prompts of context about my creative writing project, I requested feedback on one of my novel chapters. The AI provided thoughtful analysis with pros and cons, as expected. But then I noticed what wasn’t there: the customary offer to rewrite my chapter.
… Without Claude’s prompting, I found myself in an unexpected moment of metacognition. When faced with improvement suggestions but no offer to implement them, I had to consciously ask myself: “Do I actually want AI to rewrite this section?” The answer surprised me – no, I wanted to revise it myself, incorporating the insights while maintaining my voice and process.
The contrast was striking. With ChatGPT, accepting its offer to rewrite felt like a passive, almost innocent act – as if I were just saying “yes” to a helpful assistant. But with Claude, requesting a rewrite required deliberate action. Typing out the request felt like a more conscious surrender of creative agency.
Also re: metacognition and AI, see:
In the AI era, how do we battle cognitive laziness in students? — from timeshighereducation.com by Sean McMinn With the latest AI technology now able to handle complex problem-solving processes, will students risk losing their own cognitive engagement? Metacognitive scaffolding could be the answer, writes Sean McMinn
The concern about cognitive laziness seems to be backed by Anthropic’s report that students use AI tools like Claude primarily for creating (39.8 per cent) and analysing (30.2 per cent) tasks, both considered higher-order cognitive functions according to Bloom’s Taxonomy. While these tasks align well with advanced educational objectives, they also pose a risk: students may increasingly delegate critical thinking and complex cognitive processes directly to AI, risking a reduction in their own cognitive engagement and skill development.
By prompting students to articulate their cognitive processes, such tools reinforce the internalisation of self-regulated learning strategies essential for navigating AI-augmented environments.
EDUCAUSE Panel Highlights Practical Uses for AI in Higher Ed — from govtech.com by Abby Sourwine A webinar this week featuring panelists from the education, private and nonprofit sectors attested to how institutions are applying generative artificial intelligence to advising, admissions, research and IT.
Many higher education leaders have expressed hope about the potential of artificial intelligence but uncertainty about where to implement it safely and effectively. According to a webinar Tuesday hosted by EDUCAUSE, “Unlocking AI’s Potential in Higher Education,” their answer may be “almost everywhere.”
Panelists at the event, including Kaskaskia College CIO George Kriss, Canyon GBS founder and CEO Joe Licata and Austin Laird, a senior program officer at the Gates Foundation, said generative AI can help colleges and universities meet increasing demands for personalization, timely communication and human-to-human connections throughout an institution, from advising to research to IT support.
Here are the predictions, our votes, and some commentary:
“By 2028, at least half of large universities will embed an AI ‘copilot’ inside their LMS that can draft content, quizzes, and rubrics on demand.” The group leaned toward yes on this one, in part because it was easy to see LMS vendors building this feature in as a default.
“Discipline-specific ‘digital tutors’ (LLM chatbots trained on course materials) will handle at least 30% of routine student questions in gateway courses.” We leaned toward yes on this one, too, which is why some of us are exploring these tools today. We would like to be ready to use them well (or to avoid their use) when they are commonly available.
“Adaptive e-texts whose examples, difficulty, and media personalize in real time via AI will outsell static digital textbooks in the U.S. market.” We leaned toward no on this one, in part because the textbook market and what students want from textbooks have historically been slow to change. I remember offering my students a digital version of my statistics textbook maybe 6-7 years ago, and most students opted to print the whole thing out on paper like it was 1983.
“AI text detectors will be largely abandoned as unreliable, shifting assessment design toward oral, studio, or project-based ‘AI-resilient’ tasks.” We leaned toward yes on this. I have some concerns about oral assessments (they certainly privilege some students over others), but more authentic assignments seem like what higher ed needs in the face of AI. Ted Underwood recently suggested a version of this: “projects that attempt genuinely new things, which remain hard even with AI assistance.” See his post and the replies for some good discussion on this idea.
“AI will produce multimodal accessibility layers (live translation, alt-text, sign-language avatars) for most lecture videos without human editing.” We leaned toward yes on this one, too. This seems like another case where something will be provided by default, although my podcast transcripts are AI-generated and still need editing from me, so we’re not there quite yet.
Description: I honestly don’t know how I should be educating my kids. A.I. has raised a lot of questions for schools. Teachers have had to adapt to the most ingenious cheating technology ever devised. But for me, the deeper question is: What should schools be teaching at all? A.I. is going to make the future look very different. How do you prepare kids for a world you can’t predict?
And if we can offload more and more tasks to generative A.I., what’s left for the human mind to do?
Rebecca Winthrop is the director of the Center for Universal Education at the Brookings Institution. She is also an author, with Jenny Anderson, of “The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better.” We discuss how A.I. is transforming what it means to work and be educated, and how our use of A.I. could revive — or undermine — American schools.
Higher education is in a period of massive transformation and uncertainty. Not only are current events impacting how institutions operate, but technological advancements—particularly in AI and virtual reality—are reshaping how students engage with content, how cognition is understood, and how learning itself is documented and valued.
Our newly released 2025 EDUCAUSE Horizon Report | Teaching and Learning Edition captures the spirit of this transformation and how you can respond with confidence through the lens of emerging trends, key technologies and practices, and scenario-based foresight.
DC: THIS could unfortunately be the ROI companies will get from large investments in #AI — reduced headcount/employees/contract workers. https://t.co/zEWlqCSWzI
Duolingo will “gradually stop using contractors to do work that AI can handle,” according to an all-hands email sent by cofounder and CEO Luis von Ahn announcing that the company will be “AI-first.” The email was posted on Duolingo’s LinkedIn account.
According to von Ahn, being “AI-first” means the company will “need to rethink much of how we work” and that “making minor tweaks to systems designed for humans won’t get us there.” As part of the shift, the company will roll out “a few constructive constraints,” including the changes to how it works with contractors, looking for AI use in hiring and in performance reviews, and that “headcount will only be given if a team cannot automate more of their work.”
Something strange, and potentially alarming, is happening to the job market for young, educated workers.
According to the New York Federal Reserve, labor conditions for recent college graduates have “deteriorated noticeably” in the past few months, and the unemployment rate now stands at an unusually high 5.8 percent. Even newly minted M.B.A.s from elite programs are struggling to find work. Meanwhile, law-school applications are surging—an ominous echo of when young people used graduate school to bunker down during the great financial crisis.
What’s going on? I see three plausible explanations, and each might be a little bit true.
The new workplace trend is not employee friendly. Artificial intelligence and automation technologies are advancing at blazing speed. A growing number of companies are using AI to streamline operations, cut costs, and boost productivity. Consequently, human workers are facing layoffs, replaced by AI. Like it or not, companies need to make tough decisions, including layoffs, to remain competitive.
Corporations including Klarna, UPS, Duolingo, Intuit and Cisco are replacing laid-off workers with AI and automation. While these technologies enhance productivity, they raise serious concerns about future job security, leaving many workers wondering whether their own jobs will be affected.
Key takeaway: Career navigation has remained largely unchanged for decades, relying on personal networks and static job boards. The advent of AI is changing this, offering personalised career pathways, better job matching, democratised job application support, democratised access to career advice/coaching, and tailored skill development to help you get to where you need to be. With hundreds of millions of people starting new jobs every year, this transformation opens up a multi-billion dollar opportunity for innovation in the global career navigation market.
…
A.4 How will AI disrupt this segment? Personalised recommendations: AI can consume a vast amount of information (skills, education, career history, even YouTube history and X/Twitter feeds), standardise this data at scale, and then use data models to match candidate characteristics to relevant careers and jobs. In theory, solutions could then go layers deeper, helping you position yourself for those future roles. Currently based in Amsterdam, working in Strategy at Uber, and want to move into a Product role in the future? Here are X,Y,Z specific things YOU can do in your role today to align yourself perfectly. E.g. find opportunities to manage cross-functional projects in your current remit, reach out to Joe Bloggs also at Uber in Amsterdam who did Strategy and moved to Product, etc.
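The matching mechanics described above can be made concrete. The following is my own minimal sketch (not from the report): represent each role as a set of required skills, score a candidate's overlap, and surface the gap as the "specific things YOU can do" list. All role and skill names here are hypothetical illustrations.

```python
# Minimal sketch of skill-overlap career matching (illustrative only).
# Each role profile is a set of required skills; we rank roles by the
# fraction of required skills the candidate already has, and report
# the missing skills as a concrete development gap.

def match_roles(candidate_skills, role_profiles):
    """Rank roles by skill overlap; return (role, score, missing skills)."""
    results = []
    for role, required in role_profiles.items():
        overlap = candidate_skills & required
        score = round(len(overlap) / len(required), 2)
        gap = sorted(required - candidate_skills)
        results.append((role, score, gap))
    return sorted(results, key=lambda r: r[1], reverse=True)

# Hypothetical candidate and role profiles:
candidate = {"strategy", "sql", "stakeholder management"}
roles = {
    "Product Manager": {"strategy", "roadmapping",
                        "stakeholder management", "user research"},
    "Data Analyst": {"sql", "statistics", "dashboarding"},
}
ranked = match_roles(candidate, roles)
# The top match comes with its gap list -- the skills to go build next.
```

A production system would of course use learned embeddings over messy free-text profiles rather than exact set overlap, but the shape of the output (ranked roles plus a personalised gap list) is the same.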
No matter the school, no matter the location, when I deliver an AI workshop to a group of teachers, there are always at least a few colleagues thinking (and sometimes voicing), “Do I really need to use AI?”
Nearly three years after ChatGPT 3.5 landed in our lives and disrupted workflows in ways we’re still unpacking, most schools are swiftly catching up. Training sessions, like the ones I lead, are springing up everywhere, with principals and administrators trying to answer the same questions: Which tools should we use? How do we use them responsibly? How do we design learning in this new landscape?
But here’s what surprises me most: despite all the advances in AI technology, the questions and concerns from teachers remain strikingly consistent.
…
In this article, I want to pull back the curtain on those conversations. These concerns aren’t signs of reluctance – they reflect sincere feelings. And they deserve thoughtful, honest answers.
This week, in advance of major announcements from us and other vendors, I give you a good overview of the AI Agent market, and discuss the new role of AI governance platforms, AI agent development tools, AI agent vendors, and how AI agents will actually manifest and redefine what we call an “application.”
I discuss ServiceNow, Microsoft, SAP, Workday, Paradox, Maki People, and other vendors. My goal today is to “demystify” this space and explain the market, the trends, and why and how your IT department is going to be building a lot of the agents you need. And prepare for our announcements next week!
DeepSeek has quietly launched Prover V2, an open-source model built to solve math problems using the Lean 4 proof assistant, which ensures every step of a proof is rigorously verified.
What’s impressive about it?
Massive scale: Based on DeepSeek-V3 with 671B parameters using a mixture-of-experts (MoE) architecture, which activates only parts of the model at a time to reduce compute costs.
Theorem solving: Uses long context windows (32K+ tokens) to generate detailed, step-by-step formal proofs for a wide range of math problems — from basic algebra to advanced calculus theorems.
Research grade: Assists mathematicians in testing new theorems automatically and helps students understand formal logic by generating both Lean 4 code and readable explanations.
New benchmark: Introduces ProverBench, a new 325-question benchmark set featuring problems from recent AIME exams and curated academic sources to evaluate mathematical reasoning.
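To make "rigorously verified" concrete, here is a small hypothetical illustration (not drawn from ProverBench) of what a Lean 4-checked statement looks like; the checker rejects any proof step it cannot justify:

```lean
-- Hypothetical illustration: a statement Lean 4 verifies end-to-end.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Every step must type-check; an unjustified step fails compilation.
example (n : Nat) : n + 0 = n := Nat.add_zero n
```

Prover V2's output is proofs in this form, which is why each step can be machine-checked rather than merely plausible.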
The need for deep student engagement became clear at Dartmouth Geisel School of Medicine when a potential academic-integrity issue revealed gaps in its initial approach to artificial intelligence use in the classroom, leading to significant revisions to ensure equitable learning and assessment.
From George Siemens’ “SAIL: Transmutation, Assessment, Robots” e-newsletter on 5/2/25
All indications are that AI, even if it stops advancing, has the capacity to dramatically change knowledge work. Knowing things matters less than being able to navigate and make sense of complex environments. Put another way, sensemaking, meaningmaking, and wayfinding (with their yet to be defined subelements) will be the foundation for being knowledgeable going forward.
That will require being able to personalize learning to each individual learner so that who they are (not what our content is) forms the pedagogical entry point to learning. (DSC: And I would add WHAT THEY WANT to ACHIEVE.) LLMs are particularly good at transmutation. Want to explain AI to a farmer? A sentence or two in a system prompt achieves that. Know that a learner has ADHD? A few small prompt changes and it’s reflected in the way the LLM engages with learning. Talk like a pirate. Speak in the language of Shakespeare. Language changes. All a matter of a small meta comment sent to the LLM. I’m convinced that this capability to change, to transmute, information will become a central part of how LLMs and AI are adopted in education.
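The "small meta comment" mechanism Siemens describes is simple to sketch. This is my own illustration (not from the newsletter), using the common OpenAI-style chat message format; the lesson text and audience prompts are hypothetical:

```python
# Sketch of "transmutation": the lesson content never changes; only a
# cheap audience-specific system prompt does.

LESSON = "A neural network adjusts its weights to reduce prediction error."

AUDIENCE_PROMPTS = {
    "farmer": "Explain using farming analogies, in plain language.",
    "adhd_learner": "Use short sentences, bullet points, frequent recaps.",
    "pirate": "Talk like a pirate.",
}

def build_messages(audience, lesson=LESSON):
    """Pair a tiny audience-specific system prompt with unchanged content."""
    return [
        {"role": "system", "content": AUDIENCE_PROMPTS[audience]},
        {"role": "user", "content": f"Teach me this: {lesson}"},
    ]

msgs = build_messages("farmer")
# Same lesson, different framing -- the whole adaptation lives in one line.
```

The point of the sketch is proportion: the pedagogical content is fixed, and the per-learner adaptation costs a sentence or two of metadata.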
… Speaking of Duolingo: it took them 12 years to develop 100 courses. In the last year, they developed an additional 148. AI is an accelerant with an impact in education that is hard to overstate. “Instead of taking years to build a single course with humans the company now builds a base course and uses AI to quickly customize it for dozens of different languages.”
FutureHouse is launching our platform, bringing the first publicly available superintelligent scientific agents to scientists everywhere via a web interface and API. Try it out for free at https://platform.futurehouse.org.
Over the course of the next 10 years, AI-powered institutions will rise in the rankings. US News & World Report will factor a college’s AI capabilities into its calculations. Accrediting agencies will assess the degree of AI integration into pedagogy, research, and student life. Corporations will want to partner with universities that have demonstrated AI prowess. In short, we will see the emergence of the AI haves and have-nots.
…
What’s happening in higher education today has a name: creative destruction. The economist Joseph Schumpeter coined the term in 1942 to describe how innovation can transform industries. That typically happens when an industry has both a dysfunctional cost structure and a declining value proposition. Both are true of higher education.
…
Out of the gate, professors will work with technologists to get AI up to speed on specific disciplines and pedagogy. For example, AI could be “fed” course material on Greek history or finance; then, guided by human professors as they sort through the material, it could come to understand the structure of the discipline and go on to develop lectures, videos, supporting documentation, and assessments.
…
In the near future, if a student misses class, they will be able to watch a recording that an AI bot captured. Or the AI bot will find a similar lecture from another professor at another accredited university. If you need tutoring, an AI bot will be ready to help any time, day or night. Similarly, a student going on a trip who wishes to take an exam on the plane will be able to log on and complete the AI-designed and administered exam. Students will no longer be bound by a rigid class schedule. Instead, they will set the schedule that works for them.
Early and mid-career professors who hope to survive will need to adapt and learn how to work with AI. They will need to immerse themselves in research on AI and pedagogy and understand its effect on the classroom.
From DSC: I had a very difficult time deciding which excerpts to include. There were so many more excerpts for us to think about with this solid article. While I don’t agree with several things in it, EVERY professor, president, dean, and administrator working within higher education today needs to read this article and seriously consider what Scott Latham is saying.
Change is already here, but according to Scott, we haven’t seen anything yet. I agree with him and, as a futurist, one has to consider the potential scenarios that Scott lays out for AI’s creative destruction of what higher education may look like. Scott asserts that some significant and upcoming impacts will be experienced by faculty members, doctoral students, and graduate/teaching assistants (and Teaching & Learning Centers and IT Departments, I would add). But he doesn’t stop there. He brings in presidents, deans, and other members of the leadership teams out there.
There are a few places where Scott and I differ.
The foremost one is the importance of the human element — i.e., the human faculty member and students’ learning preferences. I think many (most?) students and lifelong learners will want to learn from a human being. IBM abandoned their 5-year, $100M ed push last year and one of the key conclusions was that people want to learn from — and with — other people:
To be sure, AI can do sophisticated things such as generating quizzes from a class reading and editing student writing. But the idea that a machine or a chatbot can actually teach as a human can, he said, represents “a profound misunderstanding of what AI is actually capable of.”
Nitta, who still holds deep respect for the Watson lab, admits, “We missed something important. At the heart of education, at the heart of any learning, is engagement. And that’s kind of the Holy Grail.”
— Satya Nitta, a longtime computer researcher at IBM’s Watson Research Center in Yorktown Heights, NY
By the way, it isn’t easy for me to write this, as I wanted AI and other related technologies to be able to do just what IBM was hoping they would do.
Also, I would use the term learning preferences where Scott uses the term learning styles.
Scott also mentions:
“In addition, faculty members will need to become technologists as much as scholars. They will need to train AI in how to help them build lectures, assessments, and fine-tune their classroom materials. Further training will be needed when AI first delivers a course.”
It has been my experience from working with faculty members for over 20 years that not all faculty members want to become technologists. They may not have the time, interest, and/or aptitude to become one (and vice versa for technologists who likely won’t become faculty members).
That all said, Scott relays many things that I have reflected upon and relayed for years now via this Learning Ecosystems blog and also via The Learning from the Living [AI-Based Class] Room vision — the use of AI to offer personalized and job-relevant learning, the rising costs of higher education, the development of new learning-related offerings and credentials at far less expensive prices, the need to provide new business models and emerging technologies that are devoted more to lifelong learning, plus several other things.
So this article is definitely worth your time to read, especially if you are working in higher education or are considering a career therein!
Google Public Sector and the University of Michigan’s Ross School of Business have launched an advanced Virtual Teaching Assistant pilot program aimed at improving personalized learning and enlightening educators on artificial intelligence in the classroom.
The AI technology, aided by Google’s Gemini chatbot, provides students with all-hours access to support and self-directed learning. The Virtual TA represents the next generation of educational chatbots, serving as a sophisticated AI learning assistant that instructors can use to modify their specific lessons and teaching styles.
The Virtual TA facilitates self-paced learning for students, provides on-demand explanations of complex course concepts, guides them through problem-solving, and acts as a practice partner. It’s designed to foster critical thinking by never giving away answers, ensuring students actively work toward solutions.
The urgent task facing those of us who teach and advise students, whether they be degree program or certificate seeking, is to ensure that they are prepared to enter (or re-enter) the workplace with skills and knowledge that are relevant to 2025 and beyond. One of the first skills to cultivate is an understanding of what kinds of services this emerging technology can provide to enhance the worker’s productivity and value to the institution or corporation.
…
Given that short period of time, coupled with the need to cover the scheduled information in the syllabus, I recommend that we consider merging AI use into authentic assignments and assessments, supplementary modules, and other resources to prepare for AI.
Learning Design in the Era of Agentic AI — from drphilippahardman.substack.com by Dr Philippa Hardman Aka, how to design online async learning experiences that learners can’t afford to delegate to AI agents
The point I put forward was that the problem is not AI’s ability to complete online async courses, but that online async courses deliver so little value to our learners that they delegate their completion to AI.
The harsh reality is that this is not an AI problem — it is a learning design problem.
However, this realisation presents us with an opportunity which we overall seem keen to embrace. Rather than seeking out ways to block AI agents, we seem largely to agree that we should use this as a moment to reimagine online async learning itself.
While fears of AI replacing educators swirl in the public consciousness, a cohort of pioneering institutions is demonstrating a far more nuanced reality. These eight universities and schools aren’t just experimenting with AI; they’re fundamentally reshaping their educational ecosystems. From personalized learning in K-12 to advanced research in higher education, these institutions are leveraging Google’s AI to empower students, enhance teaching, and streamline operations.
Essential AI tools for better work — from wondertools.substack.com by Jeremy Caplan My favorite tactics for making the most of AI — a podcast conversation
AI tools I consistently rely on (areas covered mentioned below)
Research and analysis
Communication efficiency
Multimedia creation
AI tactics that work surprisingly well
1. Reverse interviews. Instead of just querying AI, have it interview you: “Give it a little context about what you’re focusing on and what you’re interested in, and then ask it to interview you to elicit your own insights.”
This approach helps extract knowledge from yourself, not just from the AI. Sometimes we need that guide to pull ideas out of ourselves.
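A reverse-interview session is really just a framing instruction. Here is my own minimal sketch of such a prompt builder (not from Caplan's piece; the wording is a hypothetical illustration):

```python
# Sketch of a "reverse interview" prompt: instead of asking the model
# for answers, instruct it to ask YOU questions, one at a time.

def reverse_interview_prompt(topic, goal):
    """Build a prompt that flips the interviewer/interviewee roles."""
    return (
        f"I'm working on {topic}. My goal: {goal}. "
        "Instead of answering, interview me: ask one question at a time "
        "to draw out my own insights, and build on each of my answers."
    )

prompt = reverse_interview_prompt(
    "a newsletter about AI tools",   # hypothetical topic
    "finding my unique angle",       # hypothetical goal
)
```

Pasting the resulting prompt into any chat assistant turns it into the guide Caplan describes, one that pulls ideas out of you rather than supplying its own.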