Amid AI and Labor Market Changes, Companies Look to Grow Their Own Skilled Workers — from workshift.org by Colleen Connolly

The explosion of artificial intelligence, combined with slowing growth in the labor force, has many companies reconsidering how they hire and develop workers. Where they once relied on colleges and universities for training, a growing number of companies are now looking in-house.

Investment in developing employees and would-be hires is becoming a key differentiator for companies, according to a new report from the Learning Society, a collaborative effort led out of the Stanford Center on Longevity. And that’s true even as AI adoption grows.

The Big Idea: The report authors interviewed 15 human resources executives from major firms, which ranged in size from Hubbell, an electric and utility product manufacturer with about 17K employees, to Walmart with more than 2M employees. The authors asked about four topics: the impact of AI and technology on work, skill building and talent development, supporting workers over longer working lives, and new partnerships between businesses and higher education.

 

FutureFit AI — helping build reskilling, demand-driven, employment, sector-based, and future-fit pathways, powered by AI


The above item was from Paul Fain’s recent posting, which includes the following excerpt:

The platform is powered by FutureFit AI, which is contributing the skills-matching infrastructure and navigation layer. Jobseekers get personalized recommendations for best-fit job roles as well as education and training options—including internships—that can help them break into specific careers. The project also includes a focus on providing support students need to complete their training, including scholarships and help with childcare and transportation.

 

The Learning and Employment Records (LER) Report for 2026: Building the infrastructure between learning and work — from smartresume.com; with thanks to Paul Fain for this resource

Executive Summary (excerpt)

This report documents a clear transition now underway: LERs are moving from small experiments to systems people and organizations expect to rely on. Adoption remains early and uneven, but the forces reshaping the ecosystem are no longer speculative. Federal policy signals, state planning cycles, standards maturation, and employer behavior are aligning in ways that suggest 2026 will mark a shift from exploration to execution.

Across interviews with federal leaders, state CIOs, standards bodies, and ecosystem builders, a consistent theme emerged: the traditional model—where institutions control learning and employment records—no longer fits how people move through education and work. In its place, a new model is being actively designed—one in which individuals hold portable, verifiable records that systems can trust without centralizing control.

Most states are not yet operating this way. But planning timelines, RFP language, and federal signals indicate that many will begin building toward this model in early 2026.

As the ecosystem matures, another insight becomes unavoidable: records alone are not enough. Value emerges only when trusted records can be interpreted through shared skill languages, reused across contexts, and embedded into the systems and marketplaces where decisions are made.

Learning and Employment Records are not a product category. They are a data layer—one that reshapes how learning, work, and opportunity connect over time.

This report is written for anyone seeking to understand how LERs are beginning to move from concept to practice. Whether readers are new to the space or actively exploring implementation, the report focuses on observable signals, emerging patterns, and the practical conditions required to move from experimentation toward durable infrastructure.

 

“The building blocks for a global, interoperable skills ecosystem are already in place. As education and workforce alignment accelerates, the path toward trusted, machine-readable credentials is clear. The next phase depends on credentials that carry value across institutions, industries, states, and borders; credentials that move with learners wherever their education and careers take them. The question now isn’t whether to act, but how quickly we move.”

– Curtiss Barnes, Chief Executive Officer, 1EdTech

 


The above item was from Paul Fain’s recent posting, which includes the following excerpt:

SmartResume just published a guide for making sense of this rapidly expanding landscape. The LER Ecosystem Report was produced in partnership with AACRAO, Credential Engine, 1EdTech, HR Open Standards, and the U.S. Chamber of Commerce Foundation. It was based on interviews and feedback gathered over three years from 100+ leaders across education, workforce, government, standards bodies, and tech providers.

The tools are available now to create the sort of interoperable ecosystem that can make talent marketplaces a reality, the report argues. Meanwhile, federal policy moves and bipartisan attention to LERs are accelerating action at the state level.

“For state leaders, this creates a practical inflection point,” says the report. “LERs are shifting from an innovation discussion to an infrastructure planning conversation.”

 
 
 

Global list of over 100 L&D conferences in 2026 — from donaldhtaylor.co.uk by Don Taylor

I’m a firm believer in conferences. This isn’t just because I have chaired the Learning Technologies Conference in London since 2000. It’s because they are invaluable in sustaining our community. So many in Learning and Development work alone or in small teams that building and maintaining personal contacts is crucial. For a number of years, I have kept a personal list of the Learning and Development conferences running internationally. This year, I thought it would be helpful to share it.

 

 

Planning Your L&D Hiring for Next Year? Start With Skills, Salary Ranges, and Realistic Expectations — from teamedforlearning.com

Salary transparency laws across many states now require organizations to publish compensation ranges. While this can feel like a burden, the truth is: transparency can dramatically speed up hiring. Candidates self-select, mismatches decrease, and teams save time.

But transparency only works when the salary range itself is grounded in reality. And that’s where many organizations struggle.

Posting a salary range is the easy part.
Determining a fair, defensible range is where the work happens.

Also from Teamed for Learning, see:

Hiring Trends For 2026 
The learning industry shifts fast, and this year is no exception. Here’s what’s shaping the hiring landscape right now:

  • AI is now a core skill, not a bonus
  • Project management is showing up in every job description
  • Generalists with business awareness are beating tool-heavy candidates
  • Universities and edtech companies are speeding up content refresh cycles
  • Hiring budgets are tight – but expectations aren’t easing up
 
 
 

AI Is Quietly Rewiring the ADDIE Model (In a Good Way) — from drphilippahardman.substack.com by Dr. Philippa Hardman
The traditional ADDIE workflow isn’t dead, but it is evolving

The real story isn’t what AI can produce — it’s how it changes the decisions we make at every stage of instructional design.

After working with thousands of instructional designers on my bootcamp, I’ve learned something counterintuitive: the best teams aren’t the ones with the fanciest AI tools — they’re the ones who know when to use which mode — and when to use none at all.

Once you recognise that, you start to see instructional design differently — not as a linear process, but as a series of decision loops where AI plays distinct roles.

In this post, I show you the 3 modes of AI that actually matter in instructional design — and map them across every phase of ADDIE so you know exactly when to let AI run, and when to slow down and think.


Also see:

Generative AI for Course Design: Writing Effective Prompts for Multiple Choice Question Development — from onlineteaching.umich.edu by Hedieh Najafi

In higher education, developing strong multiple-choice questions can be a time-intensive part of the course design process. Developing such items requires subject-matter expertise and assessment literacy, and for faculty and designers who are creating and producing online courses, it can be difficult to find the capacity to craft quality multiple-choice questions.

At the University of Michigan Center for Academic Innovation, learning experience designers are using generative artificial intelligence to streamline the multiple-choice question development process and help ameliorate this issue. In this article, I summarize one of our projects that explored effective prompting strategies to develop multiple-choice questions with ChatGPT for our open course portfolio. We examined how structured prompting can improve the quality of AI-generated assessments, producing relevant comprehension and recall items and options that include plausible distractors.

Achieving this goal enables us to develop several ungraded practice opportunities, preparing learners for their graded assessments while also freeing up more time for course instructors and designers.
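The article describes structured prompting for MCQ development but doesn’t publish its templates. As a minimal sketch of what a structured prompt builder might look like — the function name, wording, and requirements list are illustrative assumptions, not taken from the Michigan project — the core idea is to constrain role, format, cognitive level, and distractor quality explicitly rather than asking open-endedly:

```python
def build_mcq_prompt(topic, learning_objective, n_questions=3, n_options=4):
    """Assemble a structured prompt for AI-generated multiple-choice questions.

    Hypothetical template: it states the model's role, constrains the output
    format, targets comprehension/recall, and requires plausible distractors,
    mirroring the structured-prompting approach described in the article.
    """
    return (
        f"You are an assessment designer for an online course on {topic}.\n"
        f"Write {n_questions} multiple-choice questions assessing this "
        f"learning objective: {learning_objective}\n"
        "Requirements:\n"
        f"- Each question has exactly {n_options} options, one correct.\n"
        "- Distractors must reflect plausible misconceptions, not obvious fillers.\n"
        "- Target comprehension and recall, not trick wording.\n"
        "- Label the correct answer and give a one-sentence rationale."
    )

# Example: a prompt for ungraded practice items in a hypothetical course
prompt = build_mcq_prompt("Python data structures",
                          "Distinguish lists from tuples")
```

The resulting string would then be sent to a chat model; keeping the template in code makes it easy to regenerate practice-item prompts per learning objective.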

 

Which AI Video Tool Is Most Powerful for L&D Teams? — by Dr. Philippa Hardman
Evaluating four popular AI video generation platforms through a learning-science lens

Happy new year! One of the biggest L&D stories of 2025 was the rise of AI video generator tools among L&D teams. As we head into 2026, platforms like Colossyan, Synthesia, HeyGen, and NotebookLM’s video creation feature are firmly embedded in most L&D tech stacks. These tools promise rapid production and multi-language output at significantly reduced costs — and they deliver on a lot of that.

But something has been playing on my mind: we rarely evaluate these tools on what matters most for learning design—whether they enable us to build instructional content that actually enables learning.

So, I spent some time over the holiday digging into this question: do the AI video tools we use most in L&D create content that supports substantive learning?

To answer it, I took two decades of learning science research and translated it into a scoring rubric. Then I scored the four most popular AI video generation platforms among L&D professionals against the rubric.

 


For an AI-based tool or two — as they relate to higher ed — see:

5 new tools worth trying — from wondertools.substack.com by Jeremy Kaplan

YouTube to NotebookLM: Import a Whole Playlist or Channel in One Click
YouTube to NotebookLM is a remarkably useful new Chrome extension that lets you bulk-add any YouTube playlists, channels, or search results into NotebookLM for AI-powered analysis.

What to try

  • Find or create YouTube playlists on topics of interest. Then use this extension to ingest those playlists into NotebookLM. The videos are automatically indexed, and within minutes you can create reports, slides, and infographics to enhance your learning.
  • Summarize a playlist or channel with an audio or video overview. Or create quizzes, flash cards, data tables, or mind maps to explore a batch of YouTube videos. Or have a chat in NotebookLM with your favorite video channel. Check my recent post for some YouTube channels to try.
 

Corporate Training Solutions That Actually Improve Performance — from blog.upsidelearning.com by Unnati Umare

Designing Learning Around Performance in the Flow of Work
Once it becomes clear that completion does not reliably translate into changed behavior, the next question tends to surface on its own. If training is not failing outright, then what it should be designed around becomes harder to ignore.

In most organizations, the answer remains content. Content is easier to define, easier to build, and easier to track, even when it explains very little about how work actually gets done.

Performance-aligned learning design shifts that starting point by paying closer attention to how work unfolds in practice. Instead of organizing learning around topics or courses, design decisions begin with what a role requires people to notice, decide, and act on during real situations.  

 
 

How Your Learners *Actually* Learn with AI — from drphilippahardman.substack.com by Dr. Philippa Hardman
What 37.5 million AI chats show us about how learners use AI at the end of 2025 — and what this means for how we design & deliver learning experiences in 2026

Last week, Microsoft released a similar analysis of a whopping 37.5 million Copilot conversations. These conversations took place on the platform from January to September 2025, providing us with a window into if and how AI use in general — and AI use among learners specifically — evolved in 2025.

Microsoft’s mass behavioural data gives us a detailed, global glimpse into what learners are actually doing across devices, times of day, and contexts. The picture that emerges is pretty clear and largely consistent with what OpenAI told us back in the summer:

AI isn’t functioning primarily as an “answers machine”: the majority of us use AI as a tool to personalise and differentiate generic learning experiences and – ultimately – to augment human learning.

Let’s dive in!

Learners don’t “decide” to use AI anymore. They assume it’s there, like search, like spellcheck, like calculators. The question has shifted from “should I use this?” to “how do I use this effectively?”


8 AI Agents Every HR Leader Needs To Know In 2026 — from forbes.com by Bernard Marr

So where do you start? There are many agentic tools and platforms for AI tasks on the market, and the most effective approach is to focus on practical, high-impact workflows. So here, I’ll look at some of the most compelling use cases, as well as provide an overview of the tools that can help you quickly deliver tangible wins.

Some of the strongest opportunities in HR include:

  • Workforce management, administering job satisfaction surveys, monitoring and tracking performance targets, scheduling interventions, and managing staff benefits, medical leave, and holiday entitlement.
  • Recruitment screening, automatically generating and posting job descriptions, filtering candidates, ranking applicants against defined criteria, identifying the strongest matches, and scheduling interviews.
  • Employee onboarding, issuing new hires with contracts and paperwork, guiding them to onboarding and training resources, tracking compliance and completion rates, answering routine enquiries, and escalating complex cases to human HR specialists.
  • Training and development, identifying skills gaps, providing self-service access to upskilling and reskilling opportunities, creating personalized learning pathways aligned with roles and career goals, and tracking progress toward completion.

 

 

AI working competency is now a graduation requirement at Purdue [Pacton] + other items re: AI in our learning ecosystems


AI Has Landed in Education: Now What? — from learningfuturesdigest.substack.com by Dr. Philippa Hardman

Here’s what’s shaped the AI-education landscape in the last month:

  • The AI Speed Trap is [still] here: AI adoption in L&D is basically won (87%)—but it’s being used to ship faster, not learn better (84% prioritising speed), scaling “more of the same” at pace.
  • AI tutors risk a “pedagogy of passivity”: emerging evidence suggests tutoring bots can reduce cognitive friction and pull learners down the ICAP spectrum—away from interactive/constructive learning toward efficient consumption.
  • Singapore + India are building what the West lacks: they’re treating AI as national learning infrastructure—for resilience (Singapore) and access + language inclusion (India)—while Western systems remain fragmented and reactive.
  • Agentic AI is the next pivot: early signs show a shift from AI as a content engine to AI as a learning partner—with UConn using agents to remove barriers so learners can participate more fully in shared learning.
  • Moodle’s AI stance sends two big signals: the traditional learning ecosystem is fragmenting, and the concept of “user sovereignty” over AI is emerging.

Four strategies for implementing custom AIs that help students learn, not outsource — from educational-innovation.sydney.edu.au by Kria Coleman, Matthew Clemson, Laura Crocco and Samantha Clarke; via Derek Bruff

For Cogniti to be taken seriously, it needs to be woven into the structure of your unit and its delivery, both in class and on Canvas, rather than left on the side. This article shares practical strategies for implementing Cogniti in your teaching so that students:

  • understand the context and purpose of the agent,
  • know how to interact with it effectively,
  • perceive its value as a learning tool over any other available AI chatbots, and
  • engage in reflection and feedback.


In this post, we share four strategies to help introduce and integrate Cogniti in your teaching so that students understand their context, interact effectively, and see their value as customised learning companions.


Collection: Teaching with Custom AI Chatbots — from teaching.virginia.edu; via Derek Bruff
The default behaviors of popular AI chatbots don’t always align with our teaching goals. This collection explores approaches to designing AI chatbots for particular pedagogical purposes.


 
© 2025 | Daniel Christian