What the Future of Learning Looks Like in the Era of AI — from the Center for Academic Innovation at the University of Michigan, by Sean Corp

AI & the Future of Learning Summit brings industry, education leaders together to discuss higher education’s opportunity to lead, what students need, and what partnerships are possible

As artificial intelligence rapidly reshapes the nature of work and learning, speakers at the University of Michigan’s AI & the Future of Learning Summit delivered a clear message: higher education must take a leading role in defining what comes next.

One CEO of a leading educational technology company put it like this: “The only bad thing would be universities standing still.”

Universities must embrace their roles as providers of continuous, lifelong learning that evolves alongside technological change. 


This shift is already affecting early-career pathways. Employers are placing greater emphasis on experience, while traditional entry-level roles are becoming less accessible. There is often a gap between what a credential represents and the expectations of employers.

That gap is particularly evident in access to internships. Chris Parrish, co-founder and president of Podium, noted that millions of students compete for a limited number of internships each year, making it increasingly difficult to gain the experience employers demand.

“If you miss out on an internship, you’re twice as likely to be unemployed,” Parrish said. 

 

The quest to build a better AI tutor — from hechingerreport.org by Jill Barshay
Researchers make progress with an older ed tech idea: personalized practice

One promising idea has less to do with how an AI tutor explains concepts and more with what it asks students to practice next.

A team at the University of Pennsylvania, which included some AI skeptics, recently tested this approach in a study of close to 800 Taiwanese high school students learning Python programming. All the students used the same AI tutor, which was designed not to give away answers.

But there was one key difference. Half the students were randomly assigned to a fixed sequence of practice problems, progressing from easy to hard. The other half received a personalized sequence with the AI tutor continuously adjusting the difficulty of each problem based on how the student was performing and interacting with the chatbot.

The idea is based on what educators call the “zone of proximal development.” When problems are too easy, students get bored. When they’re too hard, students get frustrated. The goal is to keep students in a sweet spot: challenged, but not overwhelmed.

The researchers found that students in the personalized group did better on a final exam than students in the fixed problem group. The difference was characterized as the equivalent of 6 to 9 months of additional schooling, an eye-catching claim for an after-school online course that lasted only five months.

To address this, Chung’s team combined a large language model with a separate machine-learning algorithm that analyzes how students interact with the online course platform — how they answer the practice questions, how many times they revise or edit their coding, and the quality of their conversations with the chatbot — and uses that information to decide which problem to serve up next.
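The difficulty-adjustment loop described above can be sketched in a few lines. To be clear, this is an illustrative toy, not the Penn team's actual algorithm: the thresholds, the 1–10 difficulty scale, and the use of recent correctness as the only signal are all my assumptions.

```python
def next_difficulty(current, recent_correct, low=0.6, high=0.85):
    """Choose the next problem's difficulty (1-10) so the learner stays
    in the 'zone of proximal development': challenged, not overwhelmed.

    current        -- current difficulty level (int, 1-10)
    recent_correct -- list of 1/0 outcomes on recent practice problems
    low, high      -- target band for the learner's success rate
    """
    success_rate = sum(recent_correct) / len(recent_correct)
    if success_rate > high:           # too easy: risk of boredom
        return min(current + 1, 10)
    if success_rate < low:            # too hard: risk of frustration
        return max(current - 1, 1)
    return current                    # sweet spot: hold difficulty steady
```

A production system like the one the article describes would fold in richer signals (code revisions, quality of chatbot conversations) through a learned model rather than fixed thresholds, but the goal is the same: keep each student in the band between boredom and frustration.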

 

From DSC:
I have been proposing that the AI-based learning platform of the future will be constantly doing this — every single day. It will know what the in-demand skills are — at any given moment in time. It will then be able to direct you to resources that will help you gain those skills. Though in my vision, the system is querying actual/open job descriptions, not analyzing learning data from enterprise learners. Perhaps I should add that to the vision.
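A minimal sketch of that job-description-querying idea, assuming you already have the posting text in hand. The skill list and the simple substring matching are placeholders for illustration; a real system would pull postings from a jobs API and use proper entity matching.

```python
from collections import Counter

def in_demand_skills(job_descriptions, skills):
    """Rank tracked skills by how many open job postings mention them."""
    counts = Counter()
    for text in job_descriptions:
        lowered = text.lower()
        for skill in skills:
            if skill in lowered:
                counts[skill] += 1
    return counts.most_common()   # [(skill, num_postings), ...]
```

The point is only where the signal comes from: live job openings rather than enterprise learning data.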


Coursera’s Job Skills Report 2026: Top skills for your students — from coursera.org

The Job Skills Report 2026 analyzes learning data from more than 6 million enterprise learners to identify the future job skills organizations need most. It’s designed for HR and L&D leaders; data, IT, and software & product development leaders; higher education administrators; and government agencies seeking actionable insights on workforce skills trends and AI-driven transformation.

Drawing on data from 6 million enterprise learners across nearly 7,000 organizations, the Job Skills Report 2026 guides you through the skills reshaping the global economy. This year’s analysis spans Data, IT, and Software & Product Development—and the Generative AI skills becoming essential for every role.

 



The AI ‘hivemind’: Why so many student essays sound alike — from hechingerreport.org by Jill Barshay
A study of more than 70 large language models found similar answers to brainstorming and creative writing prompts

The answers were frequently indistinguishable across different models by different companies that have different architectures and use different training data. The metaphors, imagery, word choices, sentence structures — even punctuation — often converged. Jiang’s team called this phenomenon “inter-model homogeneity” and quantified the overlaps and similarities. To drive the point home, Jiang titled her paper “Artificial Hivemind.” The study won a best paper award at the annual Conference on Neural Information Processing Systems in December 2025, one of the premier gatherings for AI research.


AI Has No Moral Compass. Do You? — from michelleweise.substack.com by Michelle Weise & Dana Walsh
Why the Age of AI Demands We Take Character Formation Seriously

Here’s something to chew on:

Anthropic, the company behind Claude — a chatbot with 30 million monthly users — has exactly one person (whom we know of) working on AI ethics. One. A young Scottish philosopher is doing the vital work of training a large language model to discern right from wrong.

I don’t say this to shame Anthropic. In fact, Anthropic appears to be the only company (that we know of) being explicit about the moral foundations and reasoning of its chatbot. Hundreds of millions of users worldwide are leveraging tools built on other LLMs that do not appear to have an explicit moral compass cultivated from within.

I raise this because this is yet another example of where we are: extraordinary technical power advancing without an equally strong moral infrastructure to support it.

Why do we keep producing people who are skilled but not wise?

 
 

Jim VandeHei’s note to his kids: Blunt AI talk — from axios.com by CEO Jim VandeHei
Axios CEO Jim VandeHei wrote this note to his wife, Autumn, and their three kids. She suggested sharing it more broadly since so many families are wrestling with how to think and talk about AI. So here it is …

Dear Family:
I want to put to words what I’m hearing, seeing, thinking and writing about AI.

  • Simply put, I’m now certain it will upend your work and life in ways more profound than the internet or possibly electricity. This will hit in months, not years.
  • The changes will be fast, wide, radical, disorienting and scary. No one will avoid its reach.

I’m not trying to frighten you. And I know your opinions range from wonderment to worry. That’s natural and OK. Our species isn’t wired for change of this speed or scale.

  • My conversations with the CEOs and builders of these LLMs, as well as my own deep experimentation with AI, have shaken and stirred me in ways I never imagined.

All of you must figure out how to master AI for any specific job or internship you hold or take. You’d be jeopardizing your future careers by not figuring out how to use AI to amplify and improve your work. You’d be wise to replace social media scrolling with LLM testing.

Be the very best at using AI for your gig.

more here.



 

Farewell to Traditional Universities | What AI Has in Store for Education

Premiered Jan 16, 2026

Description:

What if the biggest change in education isn’t a new app… but the end of the university monopoly on credibility?

Jensen Huang has framed AI as a platform shift—an industrial revolution that turns intelligence into infrastructure. And when intelligence becomes cheap, personal, and always available, education stops being a place you go… and becomes a system that follows you. The question isn’t whether universities will disappear. The question is whether the old model—high cost, slow updates, one-size-fits-all—can survive a world where every student can have a private tutor, a lab partner, and a curriculum designer on demand.

This video explores what AI has in store for education—and why traditional universities may need to reinvent themselves fast.

In this video you’ll discover:

  • How AI tutors could deliver personalized learning at scale
  • Why credentials may shift from “degrees” to proof-of-skill portfolios
  • What happens when the “middle” of studying becomes automated
  • How universities could evolve: research hubs, networks, and high-trust credentialing
  • The risks: cheating, dependency, bias, and widening inequality
  • The 3 skills that become priceless when information is everywhere: judgment, curiosity, and responsibility

From DSC:
There appears to be another, similar video with a different premiere date and length, so I’m including that recording here as well:


The End of Universities as We Know Them: What AI Is Bringing

Premiered Jan 27, 2026

What if universities don’t “disappear”… but lose their monopoly on learning, credentials, and opportunity?

AI is turning education into something radically different: personal, instant, adaptive, and always available. When every student can have a 24/7 tutor, a writing coach, a coding partner, and a study plan designed specifically for them, the old model—one professor, one curriculum, one pace for everyone—starts to look outdated. And the biggest disruption isn’t the classroom. It’s the credential. Because in an AI world, proof of skill can become more valuable than a piece of paper.

This video explores the end of universities as we know them: what AI is bringing, what will break, what will survive, and what replaces the traditional path.

In this video you’ll discover:

  • Why AI tutoring could outperform one-size-fits-all lectures
  • How “degrees” may shift into skill proof: portfolios, projects, and verified competency
  • What happens when the “middle” of studying becomes automated
  • How universities may evolve: research hubs, networks, high-trust credentialing
  • The dark side: cheating, dependency, inequality, and biased evaluation
  • The new advantage: judgment, creativity, and responsibility in a world of instant answers
 
 

How Your Learners *Actually* Learn with AI — from drphilippahardman.substack.com by Dr. Philippa Hardman
What 37.5 million AI chats show us about how learners use AI at the end of 2025 — and what this means for how we design & deliver learning experiences in 2026

Last week, Microsoft released a similar analysis of a whopping 37.5 million Copilot conversations. These conversations took place on the platform from January to September 2025, providing us with a window into whether and how AI use in general — and AI use among learners specifically — has evolved in 2025.

Microsoft’s mass behavioural data gives us a detailed, global glimpse into what learners are actually doing across devices, times of day and contexts. The picture that emerges is pretty clear and largely consistent with what OpenAI told us back in the summer:

AI isn’t functioning primarily as an “answers machine”: the majority of us use AI as a tool to personalise and differentiate generic learning experiences and – ultimately – to augment human learning.

Let’s dive in!

Learners don’t “decide” to use AI anymore. They assume it’s there, like search, like spellcheck, like calculators. The question has shifted from “should I use this?” to “how do I use this effectively?”


8 AI Agents Every HR Leader Needs To Know In 2026 — from forbes.com by Bernard Marr

So where do you start? There are many agentic tools and platforms for AI tasks on the market, and the most effective approach is to focus on practical, high-impact workflows. So here, I’ll look at some of the most compelling use cases, as well as provide an overview of the tools that can help you quickly deliver tangible wins.

Some of the strongest opportunities in HR include:

  • Workforce management, administering job satisfaction surveys, monitoring and tracking performance targets, scheduling interventions, and managing staff benefits, medical leave, and holiday entitlement.
  • Recruitment screening, automatically generating and posting job descriptions, filtering candidates, ranking applicants against defined criteria, identifying the strongest matches, and scheduling interviews.
  • Employee onboarding, issuing new hires with contracts and paperwork, guiding them to onboarding and training resources, tracking compliance and completion rates, answering routine enquiries, and escalating complex cases to human HR specialists.
  • Training and development, identifying skills gaps, providing self-service access to upskilling and reskilling opportunities, creating personalized learning pathways aligned with roles and career goals, and tracking progress toward completion.

 

 

People Watched 700 Million Hours of YouTube Podcasts on TV in October — from bloomberg.com (this article is behind a paywall)

  • That’s up from 400 million hours a year ago as podcasts become the new late-night TV.
  • YouTube Wins Over TV Audience With Video Podcasts.
  • YouTube is dominating in the living room.
 

AI working competency is now a graduation requirement at Purdue [Pacton] + other items re: AI in our learning ecosystems


AI Has Landed in Education: Now What? — from learningfuturesdigest.substack.com by Dr. Philippa Hardman

Here’s what’s shaped the AI-education landscape in the last month:

  • The AI Speed Trap is [still] here: AI adoption in L&D is basically won (87%)—but it’s being used to ship faster, not learn better (84% prioritising speed), scaling “more of the same” at pace.
  • AI tutors risk a “pedagogy of passivity”: emerging evidence suggests tutoring bots can reduce cognitive friction and pull learners down the ICAP spectrum—away from interactive/constructive learning toward efficient consumption.
  • Singapore + India are building what the West lacks: they’re treating AI as national learning infrastructure—for resilience (Singapore) and access + language inclusion (India)—while Western systems remain fragmented and reactive.
  • Agentic AI is the next pivot: early signs show a shift from AI as a content engine to AI as a learning partner—with UConn using agents to remove barriers so learners can participate more fully in shared learning.
  • Moodle’s AI stance sends two big signals: the traditional learning ecosystem is fragmenting, and the concept of “user sovereignty” over AI is emerging.

Four strategies for implementing custom AIs that help students learn, not outsource — from educational-innovation.sydney.edu.au by Kria Coleman, Matthew Clemson, Laura Crocco and Samantha Clarke; via Derek Bruff

For Cogniti to be taken seriously, it needs to be woven into the structure of your unit and its delivery, both in class and on Canvas, rather than left on the side. This article shares practical strategies for implementing Cogniti in your teaching so that students:

  • understand the context and purpose of the agent,
  • know how to interact with it effectively,
  • perceive its value as a learning tool over any other available AI chatbots, and
  • engage in reflection and feedback.

In this post, we share four strategies to help introduce and integrate Cogniti in your teaching so that students understand their context, interact effectively, and see their value as customised learning companions.


Collection: Teaching with Custom AI Chatbots — from teaching.virginia.edu; via Derek Bruff
The default behaviors of popular AI chatbots don’t always align with our teaching goals. This collection explores approaches to designing AI chatbots for particular pedagogical purposes.




 

Beyond Infographics: How to Use Nano Banana to *Actually* Support Learning — from drphilippahardman.substack.com by Dr Philippa Hardman
Six evidence-based use cases to try in Google’s latest image-generating AI tool

While it’s true that Nano Banana generates better infographics than other AI models, the conversation has so far massively under-sold what’s actually different and valuable about this tool for those of us who design learning experiences.

What this means for our workflow:

Instead of the traditional “commission → wait → tweak → approve → repeat” cycle, Nano Banana enables an iterative, rapid-cycle design process where you can:

  • Sketch an idea and see it refined in minutes.
  • Test multiple visual metaphors for the same concept without re-briefing a designer.
  • Build 10-image storyboards with perfect consistency by specifying the constraints once, not manually editing each frame.
  • Implement evidence-based strategies (contrasting cases, worked examples, observational learning) that are usually too labour-intensive to produce at scale.

This shift—from “image generation as decoration” to “image generation as instructional scaffolding”—is what makes Nano Banana uniquely useful for the 10 evidence-based strategies below.

 


 


 

4 Simple & Easy Ways to Use AI to Differentiate Instruction — from mindfulaiedu.substack.com (Mindful AI for Education) by Dani Kachorsky, PhD
Designing for All Learners with AI and Universal Design Learning

So this year, I’ve been exploring new ways that AI can help support students with disabilities—students on IEPs, learning plans, or 504s—and, honestly, it’s changing the way I think about differentiation in general.

As a quick note, a lot of what I’m finding applies just as well to English language learners or really to any students. One of the big ideas behind Universal Design for Learning (UDL) is that accommodations and strategies designed for students with disabilities are often just good teaching practices. When we plan instruction that’s accessible to the widest possible range of learners, everyone benefits. For example, UDL encourages explaining things in multiple modes—written, visual, auditory, kinesthetic—because people access information differently. I hear students say they’re “visual learners,” but I think everyone is a visual learner, and an auditory learner, and a kinesthetic learner. The more ways we present information, the more likely it is to stick.

So, with that in mind, here are four ways I’ve been using AI to differentiate instruction for students with disabilities (and, really, everyone else too):


The Periodic Table of AI Tools In Education To Try Today — from ictevangelist.com by Mark Anderson

What I’ve tried to do is bring together genuinely useful AI tools that I know are already making a difference.

For colleagues wanting to explore further, I’m sharing the list exactly as it appears in the table, including website links, grouped by category below. Please do check it out; along with links to all of the resources, I’ve also written a brief summary explaining what each tool does and how it can help.





Seven Hard-Won Lessons from Building AI Learning Tools — from linkedin.com by Louise Worgan

Last week, I wrapped up Dr Philippa Hardman’s intensive bootcamp on AI in learning design. Four conversations, countless iterations, and more than a few humbling moments later – here’s what I am left thinking about.


Finally Catching Up to the New Models — from michellekassorla.substack.com by Michelle Kassorla
There are some amazing things happening out there!

An aside: building on the success of NotebookLM, Google is working on a new vision for textbooks that can be easily differentiated. You can get on the waiting list for that tool by going to LearnYourWay.withgoogle.com.

Nano Banana Pro
Sticking with the Google tools for now, Nano Banana Pro (which you can use for free in Google’s AI Studio) is doing something that everyone has been waiting a long time for: it adds correct text to images.


Introducing AI assistants with memory — from perplexity.ai

The simple act of remembering is the crux of how we navigate the world: it shapes our experiences, informs our decisions, and helps us anticipate what comes next. For AI agents like Comet Assistant, that continuity leads to a more powerful, personalized experience.

Today we are announcing new personalization features to remember your preferences, interests, and conversations. Perplexity now synthesizes them automatically like memory, for valuable context on relevant tasks. Answers are smarter, faster, and more personalized, no matter how you work.

From DSC:
This should be important as we look at learning-related applications for AI.


For the last three days, my Substack has been in the top “Rising in Education” list. I realize this is based on a hugely flawed metric, but it still feels good.

– Michael G Wagner



I’m a Professor. A.I. Has Changed My Classroom, but Not for the Worse. — from nytimes.com by Carlo Rotella [this should be a gifted article]
My students’ easy access to chatbots forced me to make humanities instruction even more human.


 

 


Three Years from GPT-3 to Gemini 3 — from oneusefulthing.org by Ethan Mollick
From chatbots to agents

Three years ago, we were impressed that a machine could write a poem about otters. Less than 1,000 days later, I am debating statistical methodology with an agent that built its own research environment. The era of the chatbot is turning into the era of the digital coworker. To be very clear, Gemini 3 isn’t perfect, and it still needs a manager who can guide and check it. But it suggests that “human in the loop” is evolving from “human who fixes AI mistakes” to “human who directs AI work.” And that may be the biggest change since the release of ChatGPT.




Results May Vary — from aiedusimplified.substack.com by Lance Eaton, PhD
On Custom Instructions with GenAI Tools….

I’m sharing today about custom instructions and my use of them across several AI tools (paid versions of ChatGPT, Gemini, and Claude). I want to highlight what I’m doing, how it’s going, and solicit from readers to share in the comments some of their custom instructions that they find helpful.

I’ve been in a few conversations lately that remind me that not everyone knows about custom instructions, even some seasoned GenAI folks, or how you might set them up to better support your work. And, of course, like all things GenAI, they are highly imperfect!

I’ll include and discuss each one below, but if you want to keep abreast of my custom instructions, I’ll be placing them here as I adjust and update them so folks can see the changes over time.

 


Gen AI Is Going Mainstream: Here’s What’s Coming Next — from joshbersin.com by Josh Bersin

I just completed nearly 60,000 miles of travel across Europe, Asia, and the Middle East, meeting with hundreds of companies to discuss their AI strategies. While every company’s maturity is different, one thing is clear: AI as a business tool has arrived. It’s real, and the use cases are growing.

A new survey by Wharton shows that 46% of business leaders use Gen AI daily and 80% use it weekly. And among these users, 72% are measuring ROI and 74% report a positive return. HR, by the way, is the #3 department in use cases, only slightly behind IT and Finance.

What are companies getting out of all this? Productivity. The #1 use case, by far, is what we call “stage 1” usage – individual productivity. 



From DSC:
Josh writes: “Many of our large clients are now implementing AI-native learning systems and seeing 30-40% reduction in staff with vast improvements in workforce enablement.”

While I get the appeal (and ROI) from management’s and shareholders’ perspective, this represents a growing concern for employment and people’s ability to earn a living. 

And while I highly respect Josh and his work through the years, I disagree that we’re over the problems with AI and how people are using it: 

Two years ago the NYT was trying to frighten us with stories of AI acting as a romance partner. Well those stories are over, and thanks to a $Trillion (literally) of capital investment in infrastructure, engineering, and power plants, this stuff is reasonably safe.

Those stories are just beginning…they’re not close to being over. 


“… imagine a world where there’s no separation between learning and assessment…” — from aiedusimplified.substack.com by Lance Eaton, Ph.D. and Tawnya Means
An interview with Tawnya Means

So let’s imagine a world where there’s no separation between learning and assessment: it’s ongoing. There’s always assessment, always learning, and they’re tied together. Then we can ask: what is the role of the human in that world? What is it that AI can’t do?

Imagine something like that in higher ed. There could be tutoring or skill-based work happening outside of class, and then relationship-based work happening inside of class, whether online, in person, or some hybrid mix.

The aspects of learning that don’t require relational context could be handled by AI, while the human parts remain intact. For example, I teach strategy and strategic management. I teach people how to talk with one another about the operation and function of a business. I can help students learn to be open to new ideas, recognize when someone pushes back out of fear of losing power, or draw from my own experience in leading a business and making future-oriented decisions.

But the technical parts, such as frameworks like SWOT analysis and the mechanics of comparing alternative viewpoints in a boardroom, could be managed through simulations or reports that receive immediate feedback from AI. The relational aspects, the human mentoring, would still happen with me as their instructor.

Part 2 of their interview is here:


 
© 2025 | Daniel Christian