Business leaders across the world are grappling with a reality that would have seemed like science fiction just a few decades ago: artificial intelligence systems dubbed AI agents are becoming colleagues, not just tools. At many organizations, HR pros are already building balanced, thoughtful human-machine workforces that meet business goals.
At Skillsoft, a global corporate learning company, Chief People Officer Ciara Harrington has spent the better part of three years leading digital transformation in real time. Through her front-row seat to CEO transitions, strategic pivots and the rapid acceleration of AI adoption, she’s developed a strong belief that organizations must be agile with people operations.
‘No role that’s not a tech role’
Under these modern conditions, she says, technology is becoming a common language in the workplace. “There is no role that’s not a tech role,” Harrington said during a recent discussion about the future of work. It’s a statement that gets at the heart of a shift many HR leaders are still coming to terms with.
…
But a key question remains: will it be HR leaders who manage the AI agents, or someone else?
What’s changing is not the foundation—it’s the ecosystem. Teams are looking to create more flexible, scalable, and diverse learning experiences that meet people where they are.
What Did We Explore?
From bold claims about six-figure roles to debates over whether portfolios or degrees matter more, everyone seems to have a take on what’s happening in L&D these days. So, we wanted to get to the heart of it by exploring five of the biggest, most debated areas shaping our work today:
Salaries: Are compensation trends really keeping pace with the value we deliver?
Hiring: What skills are managers actually looking for—and are those ATS horror stories true?
Portfolios: Are portfolios helping candidates stand out, and what are hiring managers actually looking for?
Tools & Modalities: What types of training are teams building, and what tools are they using to build it?
Artificial Intelligence: Who’s using it, how, and what concerns still exist?
These five areas are shaping the future of instructional design—not just for job seekers, but for team leaders, hiring managers, and the entire ecosystem of L&D professionals.
The takeaway? A portfolio is more than a collection of projects—it’s a storytelling tool. The ones that stand out highlight process, decision-making, and results—not just pretty screens.
Learn something new. Map out a personalized curriculum
Try this: Give an AI assistant context about what you want to learn, why, and how.
Detail your rationale and motivation, which may impact your approach.
Note your current knowledge or skill level, ideally with examples.
Summarize your learning preferences.
Note whether you prefer to read, listen to, or watch learning materials.
Mention if you like quizzes, drills, or exercises you can do while commuting or during a break at work.
If you appreciate learning games, task your AI assistant with generating one for you, using its coding capabilities detailed below.
Ask for specific book, textbook, article, or learning path recommendations using the Web search or Deep Research capabilities of Perplexity, ChatGPT, Gemini or Claude. They can also summarize research literature about effective learning tactics.
If you need a human learning partner, ask for guidance on finding one or language you can use in reaching out.
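The checklist above can be turned into a reusable prompt template. Here is a minimal Python sketch of that idea; the function name, fields, and sample values are all illustrative, not taken from any particular tool:

```python
def build_learning_prompt(topic, motivation, current_level, preferences, constraints):
    """Assemble a curriculum-planning prompt from the checklist above."""
    lines = [
        f"I want to learn: {topic}",
        f"Why it matters to me: {motivation}",
        f"My current level: {current_level}",
        "My learning preferences: " + "; ".join(preferences),
        f"Constraints: {constraints}",
        "Please map out a personalized curriculum with specific book,",
        "article, and learning-path recommendations, plus quizzes or",
        "exercises I can do during short breaks.",
    ]
    return "\n".join(lines)

# Example: a hypothetical learner preparing for an analytics role
prompt = build_learning_prompt(
    topic="SQL for data analysis",
    motivation="moving into an analytics role next quarter",
    current_level="can write basic SELECT queries",
    preferences=["short videos", "hands-on drills"],
    constraints="about 30 minutes per day, mostly while commuting",
)
print(prompt)
```

Pasting the resulting text into any of the assistants mentioned below gives the model the context it needs to produce a curriculum rather than a generic answer.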
GPT-5 for Instructional Designers — from drphilippahardman.substack.com by Dr Philippa Hardman 10 Hacks to Work Smarter & Safer with OpenAI’s Latest Model
The TLDR is that as Instructional Designers, we can’t afford to miss some of the very real benefits of GPT-5’s potential, but we also can’t ensure our professional standards or learner outcomes if we blindly accept its outputs without due testing and validation.
For this reason, I decided to synthesise the latest GPT-5 research—from OpenAI’s technical documentation to independent security audits to real-world user testing—into 10 essential reality checks for using GPT-5 as an Instructional Designer.
These aren’t theoretical exercises; they’re practical tests designed to help you safely unlock GPT-5’s benefits while identifying and mitigating its most well-documented limitations.
While I regularly use tools like ChatGPT, Grammarly, Microsoft Copilot, and even YouTube Premium (I would cancel Netflix before this), Perplexity has earned a top spot in my toolkit. It blends AI and real-time web search into one seamless, research-driven platform that saves time and improves the quality of information I rely on every day.
Another major AI lab just launched “education mode.”
Google introduced Guided Learning in Gemini, transforming it into a personalized learning companion designed to help you move from quick answers to real understanding.
Instead of immediately spitting out solutions, it:
Asks probing, open-ended questions
Walks learners through step-by-step reasoning
Adapts explanations to the learner’s level
Uses visuals, videos, diagrams, and quizzes to reinforce concepts
I’m not naive: no matter how we present it, some students will always be tempted by “the dark side” of AI. What I also believe is that the future of AI in education is not yet decided. It will be decided by how we, as educators, embrace or demonize it in our classrooms.
My argument is that setting guidelines and talking to our students honestly about the pitfalls and amazing benefits that AI offers us as researchers and learners will define it for the coming generations.
Can AI be the next calculator? Something that, yes, changes the way we teach and learn, but not necessarily for the worse? If we want it to be, yes.
How it is used, and more importantly, how AI is perceived by our students, can be influenced by educators. We have to first learn how AI can be used as a force for good. If we continue to let the dominant voice be that AI is the Terminator of education and critical thinking, then that will be the fate we have made for ourselves.
AI Tools for Strategy and Research – GT #32 — from goodtools.substack.com by Robin Good Getting expert advice, how to do deep research with AI, prompt strategy, comparing different AIs side-by-side, creating mini-apps and an AI Agent that can critically analyze any social media channel
In this week’s blog post, I’ll share my take on how the instructional design role is evolving and discuss what this means for our day-to-day work and the key skills it requires.
…
With this in mind, I’ve been keeping a close eye on open instructional design roles and, in the last 3 months, have noticed the emergence of a new flavour of instructional designer: the so-called “Generative AI Instructional Designer.”
Let’s take a deep dive into three explicitly AI-focused instructional design positions that have popped up in the last quarter. Each one illuminates a different aspect of how the role is changing—and together, they paint a picture of where our profession is likely heading.
Designers who evolve into prompt engineers, agent builders, and strategic AI advisors will capture the new premium. Those who cling to traditional tool-centric roles may find themselves increasingly sidelined—or automated out of relevance.
Google’s parent company announced Wednesday (8/6/25) that it’s planning to spend $1 billion over the next three years to help colleges teach and train students about artificial intelligence.
Google is joining other AI companies, including OpenAI and Anthropic, in investing in AI training in higher education. All three companies have rolled out new tools aimed at supporting “deeper learning” among students and made their AI platforms available to certain students for free.
Based on current technology capabilities, adoption patterns, and the mission of community colleges, here are five well-supported predictions for AI’s impact in the coming years.
Partnerships to make higher education work for the workforce — from timeshighereducation.com by Brooke Wilson Fostering long-term industry partners can enhance student outcomes and prepare them for the workplace of the future. Here’s how to get the best out of them
As the pace of change accelerates across all industries, higher education institutions face increasing pressure to ensure their graduates are prepared for the workplace demands of today – and tomorrow. Cultivating meaningful partnerships with industry is no longer optional; it’s necessary.
From curriculum co-design to experiential learning, universities can collaborate with businesses and industries in several ways to enhance student outcomes and strengthen regional economies.
The keys to strong university–non-profit partnerships — from timeshighereducation.com by Mariana Leyva, Martha Sáenz, and Itzel Eguiluz Collaborative projects between universities and non-profits nurture empathy and allow students to make a real-world impact. Here, three educators share their tips for building meaningful partnerships that benefit students and communities alike
What is Study Mode?
Study Mode is OpenAI’s take on a smarter study partner – a version of the ChatGPT experience designed to guide users through problems with Socratic prompts, scaffolded reasoning, and adaptive feedback (instead of just handing over the answer).
Built with input from learning scientists, pedagogy experts, and educators, it was also shaped by direct feedback from college students. While Study Mode is designed with college students in mind, it’s meant for anyone who wants a more learning-focused, hands-on experience across a wide range of subjects and skill levels.
Who can access it? And how?
Starting July 29, Study Mode is available to users on Free, Plus, Pro, and Team plans. It will roll out to ChatGPT Edu users in the coming weeks.
ChatGPT became your tutor — from theneurondaily.com by Grant Harvey PLUS: NotebookLM has video now & GPT 4o-level AI runs on laptop
Here’s how it works: instead of asking “What’s 2+2?” and getting “4,” study mode asks questions like “What do you think happens when you add these numbers?” and “Can you walk me through your thinking?” It’s like having a patient tutor who won’t let you off the hook that easily.
The key features include:
Socratic questioning: It guides you with hints and follow-up questions rather than direct answers.
Scaffolded responses: Information broken into digestible chunks that build on each other.
Personalized support: Adjusts difficulty based on your skill level and previous conversations.
Knowledge checks: Built-in quizzes and feedback to make sure concepts actually stick.
Toggle flexibility: Switch study mode on and off mid-conversation depending on your goals.
Try study mode yourself by selecting “Study and learn” from tools in ChatGPT and asking a question.
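The pattern the features above describe can be illustrated with a small sketch. This is my own toy illustration of a Socratic flow, not OpenAI’s implementation: guiding questions and hints come first, the answer comes last, and a knowledge check follows.

```python
def socratic_turns(question, answer, hints):
    """Toy Socratic flow: guiding questions and scaffolded hints first,
    the answer only afterward, then a knowledge check."""
    turns = [
        f"Before I answer: what do you already know about '{question}'?",
        "Can you walk me through your thinking so far?",
    ]
    turns += [f"Hint: {h}" for h in hints]           # scaffolded chunks
    turns.append(f"Putting it together: {answer}")   # only after the hints
    turns.append("Quick check: can you restate that in your own words?")
    return turns

for turn in socratic_turns(
        "Why does ice float on water?",
        "ice is less dense because its crystal structure spaces molecules apart",
        ["Think about density.", "What happens to water molecules as they freeze?"]):
    print("-", turn)
```

The real product adapts each turn to the learner’s responses; the fixed sequence here just makes the hints-before-answers ordering concrete.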
Introducing study mode — from openai.com A new way to learn in ChatGPT that offers step-by-step guidance instead of quick answers.
[On 7/29/25, we introduced] study mode in ChatGPT—a learning experience that helps you work through problems step by step instead of just getting an answer. Starting today, it’s available to logged in users on Free, Plus, Pro, Team, with availability in ChatGPT Edu coming in the next few weeks.
ChatGPT is becoming one of the most widely used learning tools in the world. Students turn to it to work through challenging homework problems, prepare for exams, and explore new concepts. But its use in education has also raised an important question: how do we ensure it is used to support real learning, and doesn’t just offer solutions without helping students make sense of them?
We’ve built study mode to help answer this question. When students engage with study mode, they’re met with guiding questions that calibrate responses to their objective and skill level to help them build deeper understanding. Study mode is designed to be engaging and interactive, and to help students learn something—not just finish something.
A Digital Shift in Law
In 2025, LegalTech isn’t a trend—it’s a standard. Tools like client dashboards, e-signatures, AI legal assistants, and automated case tracking are making law firms more efficient and more transparent. These systems also help reduce errors and save time. For clients, it means less confusion and more control.
For example, immigration law—a field known for paperwork and long processing times—is being transformed through tech. Clients now track their case status online, receive instant updates, and even upload key documents from their phones. Lawyers, meanwhile, use AI tools to spot issues faster, prepare filings quicker, and manage growing caseloads without dropping the ball.
Loren Locke, Founder of Locke Immigration Law, explains how tech helps simplify high-stress cases:
“As a former consular officer, I know how overwhelming the visa process can feel. Now, we use digital tools to break down each step for our clients—timelines, checklists, updates—all in one place. One client recently told me it was the first time they didn’t feel lost during their visa process. That’s why I built my firm this way: to give people clarity when they need it most.”
While not so much legaltech this time, Jordan’s article below is an excellent, highly relevant posting for what we are going through — at least in the United States:
What are lawyers for? — from jordanfurlong.substack.com by Jordan Furlong We all know lawyers’ commercial role, to be professional guides for human affairs. But we also need lawyers to bring the law’s guarantees to life for people and in society. And we need it right now.
The question “What are lawyers for?” raises another, prior and more foundational question: “What is the law for?”
…
But there’s more. The law also exists to regulate power in a society: to structure its distribution, create processes for its implementation, and place limits on its application. In a healthy society, power flows through the law, not around it. Certainly, we need to closely examine and evaluate those laws — the exercise of power through a biased or corrupted system will be illegitimate even if it’s “lawful.” But as a general rule, the law is available as a check on the arbitrary exercise of power, whether by a state authority or a private entity.
And above these two aspects of law’s societal role, I believe there’s also a third: to serve as a kind of “moral architecture” of society.
Blood in the Instructional Design Machine? — from drphilippahardman.substack.com by Dr. Philippa Hardman The reality of AI, job degradation & the likely future of Instructional Design
This raises a very important, perhaps even existential question for our profession: do these tools free a designer from the mind-numbing drudgery of content conversion (the “augmented human”)? Or do they automate the core expertise of the learning professional’s role, e.g. selecting instructional strategies, structuring narratives and designing a learning flow, in the process reducing the ID’s role to simply finding the source file and pushing a button (the “inverted centaur”)?
The stated aspiration of these tool builders seems to be a future where AI means that the instructional designer’s value shifts decisively from production to strategy. Their stated goal is to handle the heavy lifting of content generation, allowing the human ID to provide the indispensable context, creativity, and pedagogical judgment that AI cannot replicate.
However, the risk of these tools lies in how we use them, and the “inverted centaur” model remains deeply potent and possible. In an organisation that prioritises cost above all, these same tools can be used to justify reducing the ID role to the functional drudgery of inputting a PDF and supervising the machine.
The key to this paradox lies in a crucial data point: spending on outside products and services has jumped a dramatic 23% to $12.4 billion.
This signals a fundamental shift: companies are reallocating funds from large internal teams toward specialised consultants and advanced learning technologies like AI. L&D is not being de-funded; it is being re-engineered.
July 23, 2025 – Lightcast, the global leader in labor market intelligence, today released “Beyond the Buzz: Developing the AI Skills Employers Actually Need,” a comprehensive analysis revealing that artificial intelligence has fundamentally transformed hiring patterns across the world of work. The report, based on analysis of over 1.3 billion job postings, shows that job postings including AI skills offer 28% higher salaries—nearly $18,000 more per year—than those without such capabilities.
More importantly, the research analyzes specific skills based on their growth across job postings, their importance in the workforce, and their exposure to AI. This shows exactly which AI skills create value in which contexts, solving the critical challenge facing educators and workforce development leaders: moving beyond vague “AI literacy” to precise, targeted training that delivers measurable results.
Also via Paul Fain:
Addressing the Barriers Blocking Employee Development — from gallup.com by Corey Tatel and Megan Mulherin In 2024, less than half of U.S. employees participated in any education or training for their current job.
Despite growing awareness, however, participation in skill development is limited. In 2024, less than half of U.S. employees (45%) participated in training or education to build new skills for their current job. About one in three employees (32%) who are hoping to move into a new role within the next year strongly agree that they have the skills needed to be exceptional in that role.
Building a learning ecosystem that drives business results — from chieflearningofficer.com by Nick Romanowski How SAX combined adaptive e-learning and experiential workshops to accelerate capability development and impact the bottom line.
At SAX, we know that to succeed in today’s market, we need professionals who can learn quickly, apply that learning effectively and continuously adapt as client needs evolve.
Yet traditional training methods were no longer enough. Our firm faced familiar challenges: helping staff meet continuing professional education requirements efficiently, uncovering knowledge gaps to guide development and building a more capable, more client-ready workforce.
We found our solution in a flipped learning model that blends adaptive e-learning with live, experiential workshops. The results were transformative. We accelerated CPE credit completion by more than 50 percent, reclaimed 173 billable hours and equipped our people with deeper capabilities.
Here’s how we did it, and what we learned along the way.
Blend technology and human touch: Adaptive e-learning addresses individual knowledge gaps efficiently. Live workshops enable skill development through practice and feedback. Together, they drive both learning efficiency and behavior change.
AI is rewiring how we learn, and it’s a game-changer for L&D — from chieflearningofficer.com by Josh Bersin As AI becomes central to learner engagement, L&D leaders are being urged to fundamentally rethink corporate training, says global industry analyst Josh Bersin.
What are people really doing with ChatGPT? They’re learning. They’re asking questions, getting immediate answers, digging deeper, analyzing information and ultimately making themselves more productive. So, one could argue that simply by shifting to a “learn by inquiry” model, we may triple our value to the business.
From my experience, there are two main learning models in this industry. The first is “what you need to know”—linear or prescriptive things that every employee needs to understand about the company, its products and their role. This kind of content is well handled by existing L&D models.
The second, and far more important, is “what you’d like to know”—questions, curiosities and explorations about how the company works, what customers truly need and how we can each go further in our careers. Thanks to AI, this kind of learning is now explosive and transformative.
Imagine a sales rep who loses a deal. Naturally, they may ask, “What could I have done to be more successful?” A well-designed AI-powered learning system would take that question, give the employee an initial answer and chat with the individual to dig into the problem.
The system would then surface relevant sales training material and recommend videos, tips or case studies for help. And the employee, assuming they like the experience, would likely keep exploring until they feel they’ve learned what they need.
This “curiosity-based” learning is now possible, and its benefits extend far beyond traditional training.
For today’s chief learning officer, the days of just rolling out compliance training are long gone. In 2025, learning and development leaders are architects of innovation, crafting ecosystems that are agile, automated and AI-infused. This quarter’s Tech Check invites us to pause, assess and get strategic about where tech is taking us. Because the goal isn’t more tools—it’s smarter, more human learning systems that scale with the business.
Sections include:
The state of AI in L&D: Hype vs. reality
AI in design: From static content to dynamic experiences
AI in development: Redefining production workflows
NEW YORK – The AFT, alongside the United Federation of Teachers and lead partner Microsoft Corp., founding partner OpenAI, and Anthropic, announced the launch of the National Academy for AI Instruction today. The groundbreaking $23 million education initiative will provide access to free AI training and curriculum for all 1.8 million members of the AFT, starting with K-12 educators. It will be based at a state-of-the-art bricks-and-mortar Manhattan facility designed to transform how artificial intelligence is taught and integrated into classrooms across the United States.
The academy will help address the gap in structured, accessible AI training and provide a national model for AI-integrated curriculum and teaching that puts educators in the driver’s seat.
In an era when the college-going rate of high school graduates has dropped from an all-time high of 70 percent in 2016 to roughly 62 percent now, AI seems to be heightening the anxieties about the value of college.
According to the survey, two-thirds of parents say AI is impacting their view of the value of college. Thirty-seven percent of parents indicate they are now scrutinizing college’s “career-placement outcomes”; 36 percent say they are looking at a college’s “AI-skills curriculum,” while 35 percent respond that a “human-skills emphasis” is important to them.
This echoes what I increasingly hear from college leadership: Parents and students demand to see a difference between what they are getting from a college and what they could be “learning from AI.”
Culture matters here. Organizations that foster psychological safety—where experimentation is welcomed and mistakes are treated as learning—are making the most progress. When leaders model curiosity, share what they’re trying, and invite open dialogue, teams follow suit. Small tests become shared wins. Shared wins build momentum.
Career development must be part of this equation. As roles evolve, people will need pathways forward. Some will shift into new specialties. Others may leave familiar roles for entirely new ones. Making space for that evolution—through upskilling, mobility, and mentorship—shows your people that you’re not just investing in AI, you’re investing in them.
And above all, people need transparency. Teams don’t expect perfection. But they do need clarity. They need to understand what’s changing, why it matters, and how they’ll be supported through it. That kind of trust-building communication is the foundation for any successful change.
These shifts may play out differently across sectors—but the core leadership questions will likely be similar.
AI marks a turning point—not just for technology, but for how we prepare our people to lead through disruption and shape the future of learning.
Who is leading the pack? Who is setting themselves apart here in the mid-year?
Are they an LMS? LMS/LXP? Talent Development System? Mentoring? Learning Platform?
Something else?
Are they solely customer training/education, mentoring, or coaching? Are they focused only on employees? Are they an amalgamation of all or some?
Well, they cut across the board – hence, they slide under the “Learning Systems” umbrella, which is under the bigger umbrella term – “Learning Technology.”
…
Categories: L&D-specific, Combo (L&D and Training, think internal/external audiences), and Customer Training/Education (this means customer education, which some vendors use to mean the same as customer training).
Intellectual rigor comes from the journey: the dead ends, the uncertainty, and the internal debate. Skip that, and you might still get the insight–but you’ll have lost the infrastructure for meaningful understanding. Learning by reading LLM output is cheap. Real exercise for your mind comes from building the output yourself.
The irony is that I now know more than I ever would have before AI. But I feel slightly dumber. A bit more dull. LLMs give me finished thoughts, polished and convincing, but none of the intellectual growth that comes from developing them myself.
Every few months I put together a guide on which AI system to use. Since I last wrote my guide, however, there has been a subtle but important shift in how the major AI products work. Increasingly, it isn’t about the best model, it is about the best overall system for most people. The good news is that picking an AI is easier than ever and you have three excellent choices. The challenge is that these systems are getting really complex to understand. I am going to try and help a bit with both.
First, the easy stuff.
Which AI to Use
For most people who want to use AI seriously, you should pick one of three systems: Claude from Anthropic, Google’s Gemini, or OpenAI’s ChatGPT.
This summer, I tried something new in my fully online, asynchronous college writing course. These classes have no Zoom sessions. No in-person check-ins. Just students, Canvas, and a lot of thoughtful design behind the scenes.
One activity I created was called QuoteWeaver—a PlayLab bot that helps students do more than just insert a quote into their writing.
It’s a structured, reflective activity that mimics something closer to an in-person 1:1 conference or a small group quote workshop—but in an asynchronous format, available anytime. In other words, it’s using AI not to speed students up, but to slow them down.
…
The bot begins with a single quote that the student has found through their own research. From there, it acts like a patient writing coach, asking open-ended, Socratic questions such as:
What made this quote stand out to you?
How would you explain it in your own words?
What assumptions or values does the author seem to hold?
How does this quote deepen your understanding of your topic?
It doesn’t move on too quickly. In fact, it often rephrases and repeats, nudging the student to go a layer deeper.
On [6/13/25], UNESCO published a piece I co-authored with Victoria Livingstone at Johns Hopkins University Press. It’s called The Disappearance of the Unclear Question, and it’s part of the ongoing UNESCO Education Futures series – an initiative I appreciate for its thoughtfulness and depth on questions of generative AI and the future of learning.
Our piece raises a small but important red flag. Generative AI is changing how students approach academic questions, and one unexpected side effect is that unclear questions – for centuries a trademark of deep thinking – may be beginning to disappear. Not because they lack value, but because they don’t always work well with generative AI. Quietly and unintentionally, students (and teachers) may find themselves gradually avoiding them altogether.
Of course, that would be a mistake.
We’re not arguing against using generative AI in education. Quite the opposite. But we do propose that higher education needs a two-phase mindset when working with this technology: one that recognizes what AI is good at, and one that insists on preserving the ambiguity and friction that learning actually requires to be successful.
By leveraging generative artificial intelligence to convert lengthy instructional videos into micro-lectures, educators can enhance efficiency while delivering more engaging and personalized learning experiences.
Researchers at Massachusetts Institute of Technology (MIT) have now devised a way for LLMs to keep improving by tweaking their own parameters in response to useful new information.
The work is a step toward building artificial intelligence models that learn continually—a long-standing goal of the field and something that will be crucial if machines are to ever more faithfully mimic human intelligence. In the meantime, it could give us chatbots and other AI tools that are better able to incorporate new information including a user’s interests and preferences.
The MIT scheme, called Self Adapting Language Models (SEAL), involves having an LLM learn to generate its own synthetic training data and update procedure based on the input it receives.
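In rough pseudocode terms, that loop can be sketched as follows. This is a toy stand-in for illustration only, based on the description above: the real SEAL system generates synthetic fine-tuning data and updates an LLM’s weights, whereas this sketch just proposes a random tweak to a single parameter and keeps it only when evaluation improves.

```python
import random

random.seed(0)

def self_edit(model, new_info):
    """The model proposes its own update. In SEAL this is synthetic
    training data plus an update procedure; here it is a random
    parameter tweak (new_info would condition the real proposal)."""
    return {k: v + random.uniform(-0.2, 0.2) for k, v in model.items()}

def seal_loop(model, stream, evaluate, attempts=20):
    """Keep a self-generated update only when it improves downstream
    evaluation -- the reinforcement signal that trains the proposals."""
    for new_info in stream:
        for _ in range(attempts):
            candidate = self_edit(model, new_info)
            if evaluate(candidate, new_info) > evaluate(model, new_info):
                model = candidate   # the model has absorbed the new info
                break
    return model

# Toy objective: the single weight should move toward each new datum.
evaluate = lambda m, x: -abs(m["w"] - x)
final = seal_loop({"w": 0.0}, [0.5, 0.5, 0.5], evaluate)
print(final["w"])
```

The accept-if-better test is the simplest possible stand-in for SEAL’s reinforcement learning step, but it captures the core idea: the model’s own proposals, filtered by performance, drive continual adaptation.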
Edu-Snippets — from scienceoflearning.substack.com by Nidhi Sachdeva and Jim Hewitt Why knowledge matters in the age of AI; What happens to learners’ neural activity with prolonged use of LLMs for writing
Highlights:
Offloading knowledge to Artificial Intelligence (AI) weakens memory, disrupts memory formation, and erodes the deep thinking our brains need to learn.
Prolonged use of ChatGPT in writing lowers neural engagement, impairs memory recall, and accumulates cognitive debt that isn’t easily reversed.