Planning Your L&D Hiring for Next Year? Start With Skills, Salary Ranges, and Realistic Expectations — from teamedforlearning.com

Salary transparency laws across many states now require organizations to publish compensation ranges. While this can feel like a burden, the truth is: transparency can dramatically speed up hiring. Candidates self-select, mismatches decrease, and teams save time.

But transparency only works when the salary range itself is grounded in reality. And that’s where many organizations struggle.

Posting a salary range is the easy part.
Determining a fair, defensible range is where the work happens.

Also from Teamed for Learning, see:

Hiring Trends For 2026 
The learning industry shifts fast, and this year is no exception. Here’s what’s shaping the hiring landscape right now:

  • AI is now a core skill, not a bonus
  • Project management is showing up in every job description
  • Generalists with business awareness are beating tool-heavy candidates
  • Universities and edtech companies are speeding up content refresh cycles
  • Hiring budgets are tight – but expectations aren’t easing up

AI Is Quietly Rewiring the ADDIE Model (In a Good Way) — from drphilippahardman.substack.com by Dr. Philippa Hardman
The traditional ADDIE workflow isn’t dead, but it is evolving

The real story isn’t what AI can produce — it’s how it changes the decisions we make at every stage of instructional design.

After working with thousands of instructional designers on my bootcamp, I’ve learned something counterintuitive: the best teams aren’t the ones with the fanciest AI tools — they’re the ones who know when to use which mode—and when to use none at all.

Once you recognise that, you start to see instructional design differently — not as a linear process, but as a series of decision loops where AI plays distinct roles.

In this post, I show you the 3 modes of AI that actually matter in instructional design — and map them across every phase of ADDIE so you know exactly when to let AI run, and when to slow down and think.


Also see:

Generative AI for Course Design: Writing Effective Prompts for Multiple Choice Question Development — from onlineteaching.umich.edu by Hedieh Najafi

In higher education, developing strong multiple-choice questions can be a time-intensive part of the course design process. Developing such items requires subject-matter expertise and assessment literacy, and for faculty and designers who are creating and producing online courses, it can be difficult to find the capacity to craft quality multiple-choice questions.

At the University of Michigan Center for Academic Innovation, learning experience designers are using generative artificial intelligence to streamline the multiple-choice question development process and help ameliorate this issue. In this article, I summarize one of our projects that explored effective prompting strategies to develop multiple-choice questions with ChatGPT for our open course portfolio. We examined how structured prompting can improve the quality of AI-generated assessments, producing relevant comprehension and recall items and options that include plausible distractors.

Achieving this goal enables us to develop several ungraded practice opportunities, preparing learners for their graded assessments while also freeing up more time for course instructors and designers.
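The article is about prompt design rather than code, but a small sketch may help make "structured prompting" concrete. The sketch below is my own illustration, not the Michigan team's actual prompt or workflow: it assumes the OpenAI Python SDK and a made-up template that spells out the learning objective, cognitive level, and distractor requirements.

```python
# A minimal sketch of structured prompting for multiple-choice question drafts.
# Assumes the OpenAI Python SDK; the template, field names, and model choice are
# illustrative only and are not the prompts described in the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = """You are an assessment writer for an online course.
Learning objective: {objective}
Cognitive level: {level} (recall or comprehension)

Write {n} multiple-choice questions aligned to the objective.
For each question provide:
- the stem
- one correct answer
- three plausible distractors based on common misconceptions
- a one-sentence rationale for the correct answer
Return the result as JSON."""

def draft_mcqs(objective: str, level: str = "comprehension", n: int = 3) -> str:
    """Ask the model for draft items; the output still needs expert review."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model would do here
        messages=[{
            "role": "user",
            "content": PROMPT_TEMPLATE.format(objective=objective, level=level, n=n),
        }],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_mcqs("Explain how spaced practice improves long-term retention"))
```

Whatever the exact template, the article's framing still applies: the generated items are drafts for ungraded practice, and they still need review by someone with subject-matter expertise and assessment literacy.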

 

Which AI Video Tool Is Most Powerful for L&D Teams? — by Dr. Philippa Hardman
Evaluating four popular AI video generation platforms through a learning-science lens

Happy new year! One of the biggest L&D stories of 2025 was the rise of AI video generator tools among L&D teams. As we head into 2026, platforms like Colossyan, Synthesia, HeyGen, and NotebookLM’s video creation feature are firmly embedded in most L&D tech stacks. These tools promise rapid production and multi-language output at significantly reduced costs — and they deliver on a lot of that.

But something has been playing on my mind: we rarely evaluate these tools on what matters most for learning design — whether they help us build instructional content that actually enables learning.

So, I spent some time over the holiday digging into this question: do the AI video tools we use most in L&D create content that supports substantive learning?

To answer it, I took two decades of learning science research and translated it into a scoring rubric. Then I scored the four most popular AI video generation platforms among L&D professionals against the rubric.
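Hardman doesn't share the rubric itself in this excerpt, so the following is only a sketch of what "scoring platforms against a rubric" can look like in practice: a weighted-criteria score in which the criteria names, weights, and ratings are hypothetical, not her rubric or her results.

```python
# A purely illustrative sketch of scoring a tool against a weighted rubric.
# Criteria, weights, and ratings are hypothetical and carry no real evaluation data.
from typing import Dict

# Hypothetical learning-science criteria and relative weights (sum to 1.0).
RUBRIC_WEIGHTS: Dict[str, float] = {
    "supports_retrieval_practice": 0.3,
    "manages_cognitive_load": 0.3,
    "enables_feedback": 0.2,
    "supports_segmenting_and_pacing": 0.2,
}

def rubric_score(ratings: Dict[str, int]) -> float:
    """Combine 0-5 ratings per criterion into one weighted score out of 5."""
    return sum(RUBRIC_WEIGHTS[c] * ratings[c] for c in RUBRIC_WEIGHTS)

# Hypothetical ratings for one video platform.
example_tool = {
    "supports_retrieval_practice": 2,
    "manages_cognitive_load": 4,
    "enables_feedback": 1,
    "supports_segmenting_and_pacing": 3,
}

print(f"Weighted score: {rubric_score(example_tool):.2f} / 5")
```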

For an AI-based tool or two — as they relate to higher ed — see:

5 new tools worth trying — from wondertools.substack.com by Jeremy Kaplan

YouTube to NotebookLM: Import a Whole Playlist or Channel in One Click
YouTube to NotebookLM is a remarkably useful new Chrome extension that lets you bulk-add any YouTube playlists, channels, or search results into NotebookLM for AI-powered analysis.

What to try

  • Find or create YouTube playlists on topics of interest. Then use this extension to ingest those playlists into NotebookLM. The videos are automatically indexed, and within minutes you can create reports, slides, and infographics to enhance your learning.
  • Summarize a playlist or channel with an audio or video overview. Or create quizzes, flash cards, data tables, or mind maps to explore a batch of YouTube videos. Or have a chat in NotebookLM with your favorite video channel. Check my recent post for some YouTube channels to try.
 

Corporate Training Solutions That Actually Improve Performance — from blog.upsidelearning.com by Unnati Umare

Designing Learning Around Performance in the Flow of Work
Once it becomes clear that completion does not reliably translate into changed behavior, the next question tends to surface on its own. If training is not failing outright, then what it should be designed around becomes harder to ignore.

In most organizations, the answer remains content. Content is easier to define, easier to build, and easier to track, even when it explains very little about how work actually gets done.

Performance-aligned learning design shifts that starting point by paying closer attention to how work unfolds in practice. Instead of organizing learning around topics or courses, design decisions begin with what a role requires people to notice, decide, and act on during real situations.  

 
 

10 Elearning Interaction Ideas You May Not Have Thought Of Yet — from theelearningcoach.com by Connie Malamed

Is click-to-reveal always bad for learning?
Not necessarily. Click-to-reveal interactions can be useful when you want to manage cognitive load, reveal information gradually, or work within limited screen space. In those cases, clicking supports the presentation of information.

However, from an instructional perspective, clicking alone does not make an interaction meaningful. An interaction adds value when it asks learners to think, not just trigger more content.

The interaction ideas below require learners to analyze, judge, predict, or diagnose. These are the types of mental actions learners perform in real work settings. Each one includes a short, real-world question to show how the idea might be used in practice.

 

Community colleges are training the next generation of manufacturing workers — from manufacturingdive.com by Michelle No
Rutgers University explored how community colleges are responding to regional workforce training demands. Clark State College and Columbus State Community College are among those leading the way.

One underrated option may hold the most promise for workforce growth: the local community college.

That’s according to a series of reports by The Rutgers Education and Employment Research Center released in October, which examines the “hidden innovative structure” of America’s community colleges.

Community colleges excel in ways conducive to a successful manufacturing career, said Shalin Jyotishi, founder of the Future of Work & Innovation Economy Initiative at think tank New America.

 

AI working competency is now a graduation requirement at Purdue [Pacton] + other items re: AI in our learning ecosystems


AI Has Landed in Education: Now What? — from learningfuturesdigest.substack.com by Dr. Philippa Hardman

Here’s what’s shaped the AI-education landscape in the last month:

  • The AI Speed Trap is [still] here: AI adoption in L&D is basically won (87%)—but it’s being used to ship faster, not learn better (84% prioritising speed), scaling “more of the same” at pace.
  • AI tutors risk a “pedagogy of passivity”: emerging evidence suggests tutoring bots can reduce cognitive friction and pull learners down the ICAP spectrum—away from interactive/constructive learning toward efficient consumption.
  • Singapore + India are building what the West lacks: they’re treating AI as national learning infrastructure—for resilience (Singapore) and access + language inclusion (India)—while Western systems remain fragmented and reactive.
  • Agentic AI is the next pivot: early signs show a shift from AI as a content engine to AI as a learning partner—with UConn using agents to remove barriers so learners can participate more fully in shared learning.
  • Moodle’s AI stance sends two big signals: the traditional learning ecosystem is fragmenting, and the concept of “user sovereignty” over AI is emerging.

Four strategies for implementing custom AIs that help students learn, not outsource — from educational-innovation.sydney.edu.au by Kria Coleman, Matthew Clemson, Laura Crocco and Samantha Clarke; via Derek Bruff

For Cogniti to be taken seriously, it needs to be woven into the structure of your unit and its delivery, both in class and on Canvas, rather than left on the side. This article shares practical strategies for implementing Cogniti in your teaching so that students:

  • understand the context and purpose of the agent,
  • know how to interact with it effectively,
  • perceive its value as a learning tool over any other available AI chatbots, and
  • engage in reflection and feedback.

In this post, we share four strategies to help introduce and integrate Cogniti in your teaching so that students understand their context, interact effectively, and see their value as customised learning companions.


Collection: Teaching with Custom AI Chatbots — from teaching.virginia.edu; via Derek Bruff
The default behaviors of popular AI chatbots don’t always align with our teaching goals. This collection explores approaches to designing AI chatbots for particular pedagogical purposes.


Beyond Infographics: How to Use Nano Banana to *Actually* Support Learning — from drphilippahardman.substack.com by Dr Philippa Hardman
Six evidence-based use cases to try in Google’s latest image-generating AI tool

While it’s true that Nano Banana generates better infographics than other AI models, the conversation has so far massively under-sold what’s actually different and valuable about this tool for those of us who design learning experiences.

What this means for our workflow:

Instead of the traditional “commission → wait → tweak → approve → repeat” cycle, Nano Banana enables an iterative, rapid-cycle design process where you can:

  • Sketch an idea and see it refined in minutes.
  • Test multiple visual metaphors for the same concept without re-briefing a designer.
  • Build 10-image storyboards with perfect consistency by specifying the constraints once, not manually editing each frame.
  • Implement evidence-based strategies (contrasting cases, worked examples, observational learning) that are usually too labour-intensive to produce at scale.

This shift—from “image generation as decoration” to “image generation as instructional scaffolding”—is what makes Nano Banana uniquely useful for the 10 evidence-based strategies below.
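To make that sketch-and-refine loop concrete, here is a minimal sketch using Google's google-genai Python SDK. The model name, prompts, and two-step flow are my own assumptions for illustration; they are not taken from the article, and current model identifiers should be checked against Google's documentation.

```python
# Minimal sketch of an iterative image-refinement loop with the google-genai SDK.
# Model name and prompts are assumptions; check Google's docs for current identifiers.
from google import genai
from google.genai import types

client = genai.Client()  # reads the Gemini API key from the environment

MODEL = "gemini-2.5-flash-image"  # assumption: an image-capable Gemini model

def first_image_bytes(response) -> bytes:
    """Pull the first inline image out of a generate_content response."""
    for part in response.candidates[0].content.parts:
        if part.inline_data is not None:
            return part.inline_data.data
    raise ValueError("No image returned")

# 1. Draft: generate a first pass from a rough description.
draft = client.models.generate_content(
    model=MODEL,
    contents=["A simple labelled diagram contrasting worked examples with unguided practice."],
)
draft_png = first_image_bytes(draft)

# 2. Refine: feed the draft back with a revision instruction instead of re-briefing from scratch.
revision = client.models.generate_content(
    model=MODEL,
    contents=[
        types.Part.from_bytes(data=draft_png, mime_type="image/png"),
        "Keep the layout and style, but enlarge the labels and add a short caption under each panel.",
    ],
)
with open("refined_diagram.png", "wb") as f:
    f.write(first_image_bytes(revision))
```

The second call passes the draft image back with a revision instruction, which is the step that replaces "re-brief the designer and wait" in the traditional cycle described above.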


4 Simple & Easy Ways to Use AI to Differentiate Instruction — from mindfulaiedu.substack.com (Mindful AI for Education) by Dani Kachorsky, PhD
Designing for All Learners with AI and Universal Design for Learning

So this year, I’ve been exploring new ways that AI can help support students with disabilities—students on IEPs, learning plans, or 504s—and, honestly, it’s changing the way I think about differentiation in general.

As a quick note, a lot of what I’m finding applies just as well to English language learners or really to any students. One of the big ideas behind Universal Design for Learning (UDL) is that accommodations and strategies designed for students with disabilities are often just good teaching practices. When we plan instruction that’s accessible to the widest possible range of learners, everyone benefits. For example, UDL encourages explaining things in multiple modes—written, visual, auditory, kinesthetic—because people access information differently. I hear students say they’re “visual learners,” but I think everyone is a visual learner, and an auditory learner, and a kinesthetic learner. The more ways we present information, the more likely it is to stick.

So, with that in mind, here are four ways I’ve been using AI to differentiate instruction for students with disabilities (and, really, everyone else too):


The Periodic Table of AI Tools In Education To Try Today — from ictevangelist.com by Mark Anderson

What I’ve tried to do is bring together genuinely useful AI tools that I know are already making a difference.

For colleagues wanting to explore further, I’m sharing the list exactly as it appears in the table, including website links, grouped by category below. Please do check it out: along with links to all of the resources, I’ve also written a brief summary explaining what each of the different tools does and how it can help.





Seven Hard-Won Lessons from Building AI Learning Tools — from linkedin.com by Louise Worgan

Last week, I wrapped up Dr Philippa Hardman’s intensive bootcamp on AI in learning design. Four conversations, countless iterations, and more than a few humbling moments later – here’s what I am left thinking about.


Finally Catching Up to the New Models — from michellekassorla.substack.com by Michelle Kassorla
There are some amazing things happening out there!

An aside: Google is working on a new vision for textbooks that can be easily differentiated, building on the beautiful success of NotebookLM. You can get on the waiting list for that tool by going to LearnYourWay.withgoogle.com.

Nano Banana Pro
Sticking with the Google tools for now, Nano Banana Pro (which you can use for free on Google’s AI Studio) is doing something that everyone has been waiting a long time for: it adds correct text to images.


Introducing AI assistants with memory — from perplexity.ai

The simple act of remembering is the crux of how we navigate the world: it shapes our experiences, informs our decisions, and helps us anticipate what comes next. For AI agents like Comet Assistant, that continuity leads to a more powerful, personalized experience.

Today we are announcing new personalization features to remember your preferences, interests, and conversations. Perplexity now synthesizes them automatically like memory, for valuable context on relevant tasks. Answers are smarter, faster, and more personalized, no matter how you work.

From DSC:
This should be important as we look at learning-related applications for AI.


For the last three days, my Substack has been in the top “Rising in Education” list. I realize this is based on a hugely flawed metric, but it still feels good.

– Michael G Wagner



I’m a Professor. A.I. Has Changed My Classroom, but Not for the Worse. — from nytimes.com by Carlo Rotella [this should be a gifted article]
My students’ easy access to chatbots forced me to make humanities instruction even more human.


 

 

AI’s Role in Online Learning > Take It or Leave It with Michelle Beavers, Leo Lo, and Sara McClellan — from intentionalteaching.buzzsprout.com by Derek Bruff

You’ll hear me briefly describe five recent op-eds on teaching and learning in higher ed. For each op-ed, I’ll ask each of our panelists if they “take it,” that is, generally agree with the main thesis of the essay, or “leave it.” This is an artificial binary that I’ve found to generate rich discussion of the issues at hand.




 


Three Years from GPT-3 to Gemini 3 — from oneusefulthing.org by Ethan Mollick
From chatbots to agents

Three years ago, we were impressed that a machine could write a poem about otters. Less than 1,000 days later, I am debating statistical methodology with an agent that built its own research environment. The era of the chatbot is turning into the era of the digital coworker. To be very clear, Gemini 3 isn’t perfect, and it still needs a manager who can guide and check it. But it suggests that “human in the loop” is evolving from “human who fixes AI mistakes” to “human who directs AI work.” And that may be the biggest change since the release of ChatGPT.




Results May Vary — from aiedusimplified.substack.com by Lance Eaton, PhD
On Custom Instructions with GenAI Tools….

I’m sharing today about custom instructions and my use of them across several AI tools (paid versions of ChatGPT, Gemini, and Claude). I want to highlight what I’m doing, how it’s going, and invite readers to share in the comments some of the custom instructions they find helpful.

I’ve been in a few conversations lately that remind me that not everyone, even some of the seasoned folks around GenAI, knows about custom instructions or how you might set them up to better support your work. And, of course, they are, like all things GenAI, highly imperfect!

I’ll include and discuss each one below, but if you want to keep abreast of my custom instructions, I’ll be placing them here as I adjust and update them so folks can see the changes over time.

 


Gen AI Is Going Mainstream: Here’s What’s Coming Next — from joshbersin.com by Josh Bersin

I just completed nearly 60,000 miles of travel across Europe, Asia, and the Middle East meeting with hundreds of companies to discuss their AI strategies. While every company’s maturity is different, one thing is clear: AI as a business tool has arrived; it’s real and the use-cases are growing.

A new survey by Wharton shows that 46% of business leaders use Gen AI daily and 80% use it weekly. And among these users, 72% are measuring ROI and 74% report a positive return. HR, by the way, is the #3 department in use cases, only slightly behind IT and Finance.

What are companies getting out of all this? Productivity. The #1 use case, by far, is what we call “stage 1” usage – individual productivity. 



From DSC:
Josh writes: “Many of our large clients are now implementing AI-native learning systems and seeing 30-40% reduction in staff with vast improvements in workforce enablement.”

While I get the appeal (and ROI) from management’s and shareholders’ perspective, this represents a growing concern for employment and people’s ability to earn a living. 

And while I highly respect Josh and his work through the years, I disagree that we’re over the problems with AI and how people are using it: 

“Two years ago the NYT was trying to frighten us with stories of AI acting as a romance partner. Well those stories are over, and thanks to a $Trillion (literally) of capital investment in infrastructure, engineering, and power plants, this stuff is reasonably safe.”

Those stories are just beginning…they’re not close to being over. 


“… imagine a world where there’s no separation between learning and assessment…” — from aiedusimplified.substack.com by Lance Eaton, Ph.D. and Tawnya Means
An interview with Tawnya Means

So let’s imagine a world where there’s no separation between learning and assessment: it’s ongoing. There’s always assessment, always learning, and they’re tied together. Then we can ask: what is the role of the human in that world? What is it that AI can’t do?

Imagine something like that in higher ed. There could be tutoring or skill-based work happening outside of class, and then relationship-based work happening inside of class, whether online, in person, or some hybrid mix.

The aspects of learning that don’t require relational context could be handled by AI, while the human parts remain intact. For example, I teach strategy and strategic management. I teach people how to talk with one another about the operation and function of a business. I can help students learn to be open to new ideas, recognize when someone pushes back out of fear of losing power, or draw from my own experience in leading a business and making future-oriented decisions.

But the technical parts, such as frameworks like SWOT analysis or the mechanics of comparing alternative viewpoints in a boardroom, could be managed through simulations or reports that receive immediate feedback from AI. The relational aspects, the human mentoring, would still happen with me as their instructor.

Part 2 of their interview is here:


 
© 2025 | Daniel Christian