“A new L&D operating system for the AI Era?” [Hardman] + other items re: AI in our learning ecosystems

From 70/20/10 to 90/10 — from drphilippahardman.substack.com by Dr Philippa Hardman
A new L&D operating system for the AI Era?

This week I want to share a hypothesis I’m increasingly convinced of: that we are entering an age of the 90/10 model of L&D.

90/10 is a model where roughly 90% of “training” is delivered by AI coaches as daily performance support, and 10% of training is dedicated to developing complex and critical skills via high-touch, human-led learning experiences.

Proponents of 90/10 argue that the model isn’t about learning less, but about learning smarter by defining all jobs to be done as one of the following:

  • Delegate (the dead skills): Tasks that can be offloaded to AI.
  • Co-Create (the 90%): Tasks which well-defined AI agents can augment and help humans to perform optimally.
  • Facilitate (the 10%): Tasks which require high-touch, human-led learning to develop.

So if AI at work is now both real and material, the natural question for L&D is: how do we design for it? The short answer is to stop treating learning as an event and start treating it as a system.



My daughter’s generation expects to learn with AI, not pretend it doesn’t exist, because they know employers expect AI fluency and because AI will be ever-present in their adult lives.

— Jenny Maxell

The above quote was taken from this posting.


Unlocking Young Minds: How Gamified AI Learning Tools Inspire Fun, Personalized, and Powerful Education for Children in 2025 — from techgenyz.com by Sreyashi Bhattacharya

Highlights

  • Gamified AI Learning Tools personalize education by adapting the difficulty and content to each child’s pace, fostering confidence and mastery.
  • Engaging & Fun: Gamified elements like quests, badges, and stories keep children motivated and enthusiastic.
  • Safe & Inclusive: Attention to equity, privacy, and cultural context ensures responsible and accessible learning.

How to test GenAI’s impact on learning — from timeshighereducation.com by Thibault Schrepel
Rather than speculate on GenAI’s promise or peril, Thibault Schrepel suggests simple teaching experiments to uncover its actual effects

Generative AI in higher education is a source of both fear and hype. Some predict the end of memory, others a revolution in personalised learning. My two-year classroom experiment points to a more modest reality: Artificial intelligence (AI) changes some skills, leaves others untouched and forces us to rethink the balance.

This indicates that the way forward is to test, not speculate. My results may not match yours, and that is precisely the point. Here are simple activities any teacher can use to see what AI really does in their own classroom.

4. Turn AI into a Socratic partner
Instead of being the sole interrogator, let AI play the role of tutor, client or judge. Have students use AI to question them, simulate cross-examination or push back on weak arguments. New “study modes” now built into several foundation models make this kind of tutoring easy to set up. Professors with more technical skills can go further: design their own GPTs or fine-tuned models trained on course content, and let students interact directly with them. The point is the practice it creates. Students learn that questioning a machine is part of learning to think like a professional.
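
For instructors who want to tinker beyond the built-in study modes, here is a minimal sketch of how that Socratic behavior can be set up with a single system prompt. It assumes the OpenAI Python SDK and an API key in the environment; the model name, the prompt wording, and the law-course framing are illustrative choices, not details from the article.

# Minimal Socratic-tutor sketch (assumes: pip install openai, OPENAI_API_KEY set).
# The model name and prompt wording below are illustrative, not prescribed by the article.
from openai import OpenAI

client = OpenAI()

SOCRATIC_PROMPT = (
    "You are a Socratic tutor for an undergraduate course. "
    "Never give the answer outright. Ask one probing question at a time, "
    "challenge weak reasoning, and ask the student to justify claims "
    "using the rules and sources they have studied."
)

def run_tutor() -> None:
    # Keep the running conversation so the tutor can follow up on earlier answers.
    messages = [{"role": "system", "content": SOCRATIC_PROMPT}]
    print("Socratic tutor ready. Type 'quit' to stop.")
    while True:
        student = input("Student: ")
        if student.strip().lower() == "quit":
            break
        messages.append({"role": "user", "content": student})
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative; any chat-capable model works
            messages=messages,
        )
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        print(f"Tutor: {answer}\n")

if __name__ == "__main__":
    run_tutor()

The whole design choice sits in the system prompt: the same loop becomes a mock client or an opposing counsel simply by swapping that text, which is essentially what the custom-GPT route does through a web form instead of code.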


Assessment tasks that support human skills — from timeshighereducation.com by Amir Ghapanchi and Afrooz Purarjomandlangrudi
Assignments that focus on exploration, analysis and authenticity offer a road map for university assessment that incorporates AI while retaining its rigour and human elements

Rethinking traditional formats

1. From essay to exploration 
When ChatGPT can generate competent academic essays in seconds, the traditional format’s dominance looks less secure as an assessment task. The future lies in moving from essays as knowledge reproduction to assessments that emphasise exploration and curation. Instead of asking students to write about a topic, challenge them to use artificial intelligence to explore multiple perspectives, compare outputs and critically evaluate what emerges.

Example: A management student asks an AI tool to generate several risk plans, then critiques the AI’s assumptions and identifies missing risks.


What your students are thinking about artificial intelligence — from timeshighereducation.com by Florencia Moore and Agostina Arbia
GenAI has been quickly adopted by students, but the consequences of using it as a shortcut could be grave. A study into how students think about and use GenAI offers insights into how teaching might adapt

However, when asked how AI negatively impacts their academic development, 29 per cent noted a “weakening or deterioration of intellectual abilities due to AI overuse”. The main concern cited was the loss of “mental exercise” and soft skills such as writing, creativity and reasoning.

The boundary between the human and the artificial does not seem so easy to draw, but as the poet Antonio Machado once said: “Traveller, there is no path; the path is made by walking.”


Jelly Beans for Grapes: How AI Can Erode Students’ Creativity — from edsurge.com by Thomas David Moore

There is nothing new about students trying to get one over on their teachers — there are probably cuneiform tablets about it — but when students use AI to generate what Shannon Vallor, philosopher of technology at the University of Edinburgh, calls a “truth-shaped word collage,” they are not only gaslighting the people trying to teach them, they are gaslighting themselves. In the words of Tulane professor Stan Oklobdzija, asking a computer to write an essay for you is the equivalent of “going to the gym and having robots lift the weights for you.”


Deloitte will make Claude available to 470,000 people across its global network — from anthropic.com

As part of the collaboration, Deloitte will establish a Claude Center of Excellence with trained specialists who will develop implementation frameworks, share leading practices across deployments, and provide ongoing technical support to create the systems needed to move AI pilots to production at scale. The collaboration represents Anthropic’s largest enterprise AI deployment to date, available to more than 470,000 Deloitte people.

Deloitte and Anthropic are co-creating a formal certification program to train and certify 15,000 of Deloitte’s professionals on Claude. These practitioners will help support Claude implementations across Deloitte’s network and Deloitte’s internal AI transformation efforts.


How AI Agents are finally delivering on the promise of Everboarding: driving retention when it counts most — from premierconstructionnews.com

Everboarding flips this model. Rather than ending after orientation, everboarding provides ongoing, role-specific training and support throughout the employee journey. It adapts to evolving responsibilities, reinforces standards, and helps workers grow into new roles. For high-turnover, high-pressure environments like retail, it’s a practical solution to a persistent challenge.

AI agents will be instrumental in the success of everboarding initiatives; they can provide a much more tailored training and development process for each individual employee, keeping track of which training modules may need to be completed, or where staff members need or want to develop further. This personalisation helps staff to feel not only more satisfied with their current role, but also guides them on the right path to progress in their individual careers.

Digital frontline apps are also ideal for everboarding. They offer bite-sized training that staff can complete anytime, whether during quiet moments on shift or in real time on the job, all accessible from their mobile devices.


TeachLM: insights from a new LLM fine-tuned for teaching & learning — from drphilippahardman.substack.com by Dr Philippa Hardman
Six key takeaways, including what the research tells us about how well AI performs as an instructional designer

As I and many others have pointed out in recent months, LLMs are great assistants but very ineffective teachers. Despite the rise of “educational LLMs” with specialised modes (e.g. Anthropic’s Learning Mode, OpenAI’s Study Mode, Google’s Guided Learning) AI typically eliminates the productive struggle, open exploration and natural dialogue that are fundamental to learning.

This week, Polygence, in collaboration with Stanford University researcher Prof Dora Demszky, published first-of-its-kind research on a new model — TeachLM — built to address this gap.

In this week’s blog post, I dive deep into what the research found and share the six key findings — including reflections on how well TeachLM performs on instructional design.


The Dangers of using AI to Grade — from marcwatkins.substack.com by Marc Watkins
Nobody Learns, Nobody Gains

AI as an assessment tool represents an existential threat to education because no matter how you try and establish guardrails or best practices around how it is employed, using the technology in place of an educator ultimately cedes human judgment to a machine-based process. It also devalues the entire enterprise of education and creates a situation where the only way universities can add value to education is by further eliminating costly human labor.

For me, the purpose of higher education is about human development, critical thinking, and the transformative experience of having your ideas taken seriously by another human being. That’s not something we should be in a rush to outsource to a machine.

 

Sam Altman kicks off DevDay 2025 with a keynote to explore ideas that will challenge how you think about building. Join us for announcements, live demos, and a vision of how developers are reshaping the future with AI.

Commentary from The Rundown AI:

Why it matters: OpenAI is turning ChatGPT into a do-it-all platform that might eventually act like a browser in itself, with users simply calling up the website/app they need and interacting with it directly within a conversation instead of navigating manually. AgentKit will also compete with and disrupt tools like Zapier, n8n, Lindy, and others.


AMD and OpenAI announce strategic partnership to deploy 6 gigawatts of AMD GPUs — from openai.com

  • OpenAI to deploy 6 gigawatts of AMD GPUs based on a multi-year, multi-generation agreement
  • Initial 1 gigawatt OpenAI deployment of AMD Instinct™ MI450 Series GPUs starting in 2H 2026

Thoughts from OpenAI DevDay — from bensbites.com by Ben Tossell
When everyone becomes a developer

The event itself was phenomenal, great organisation. In terms of releases, there were two big themes:

  1. Add your apps to ChatGPT
  2. Add ChatGPT to your apps

Everything OpenAI announced at DevDay 2025 — from theaivalley.com by Barsee
PLUS: OpenAI has signed $1T in compute deals

Today’s climb through the Valley reveals:

  • Everything OpenAI announced at DevDay 2025
  • OpenAI has signed $1T in compute deals
  • Plus trending AI tools, posts, and resources




 

Law Punx: The Future of the Legal Profession, With Electra Japonas — from artificiallawyer.com by Richard Tromans and Electra Japonas

Takeaways:

  • The legal profession is undergoing significant changes due to AI.
  • Lawyers must adapt their skill sets to thrive in the future.
  • Drafting will become less important as AI takes over.
  • Understanding the ‘why’ behind legal work is crucial.
  • Lawyers will need to design systems and guardrails for AI.
  • The role of lawyers is shifting from executors to architects.
  • Law schools need to teach legal technology and systems design.
  • Client demands are changing the way law firms operate.
  • Law firms must adapt to new client expectations for efficiency.
  • The future of law will require a blend of legal knowledge and tech skills.

“We don’t want an opinion from you. We want a prompt from you.”


Legal Education Must Change Because of AI – Survey — from artificiallawyer.com


Guest Column: As AI Helps Close the Justice Gap, Will It Save the Legal Profession or Replace It? — from lawnext.com by Bob Ambrogi

The numbers are stark: 92% of low-income Americans receive no help with substantial civil legal problems, while small claims filings have plummeted 32% in just four years. But AI is changing the game. By making legal procedures accessible to pro se litigants and supercharging legal aid organizations, these tools are reviving dormant disputes and opening courthouse doors that have been effectively closed to millions.

 

Supreme Court Allows Trump to Slash Foreign Aid — from nytimes.com by Ann E. Marimow
The court’s conservative majority allowed the president to cut the funding in part because it said his flexibility to engage in foreign affairs outweighed “the potential harm” faced by aid recipients.

The Supreme Court on Friday allowed the Trump administration to withhold $4 billion in foreign aid that had been appropriated by Congress, in a preliminary test of President Trump’s efforts to wrest the power of the purse from lawmakers.

“The stakes are high: At issue is the allocation of power between the executive and Congress” over how government funds are spent, wrote Justice Elena Kagan, who was joined by Justices Sonia Sotomayor and Ketanji Brown Jackson.

“This result further erodes separation of powers principles that are fundamental to our constitutional order,” Nicolas Sansone, a lawyer with the Public Citizen Litigation Group who represents the coalition, said in a statement. “It will also have a grave humanitarian impact on vulnerable communities throughout the world.”


From DSC:
Do your friggin’ job, Supreme Court justices! Your job is to uphold the Constitution and the laws of the United States of America! As you know full well, it is the Legislative Branch (Congress) that allocates funding — not the Executive Branch.

And horrible humanitarian impacts are going to be felt in many places because this funding is being withheld.

Making America Great Again…NOT!!!


 

What today’s students really want — and what that means for higher ed — from highereddive.com by Ellucian

Cost is too high. Pathways are unclear. Options feel limited. For many prospective, current, or former students, these barriers define their relationship with higher education. As colleges and universities face the long-anticipated enrollment cliff, the question isn’t just how to recruit—it’s how to reimagine value, access, and engagement across the entire student journey.

Ellucian’s 2025 Student Voice Report offers one of the most comprehensive views into that journey to date. With responses from over 1,500 learners across the U.S.—including high school students, current undergrads, college grads, stop-outs, and opt-outs—the findings surface one clear mandate for institutions: meet students where they are, or risk losing them entirely.

What Are Learners Asking For?
Across demographics, four priorities rose to the top:
Affordability. Flexibility. Relevance. Clarity.

Students aren’t rejecting education—they’re rejecting systems that don’t clearly show how their investment leads to real outcomes. 

 

Guest post: IP professionals are enthusiastic about AI but should adopt with caution, report says — from legaltechnology.com by Benoit Chevalier

Aiming to discover more about AI’s impact on the intellectual property (IP) field, Questel recently released the findings of its 2025 IP Outlook Research Report entitled “Pathways to Productivity: AI in IP”, the much-awaited follow-up to its inaugural 2024 study “Beyond the Hype: How Technology is Transforming IP.” The 2025 Report (“the Report”) polled over 500 patent and trademark professionals from various continents and countries across the globe.


With AI, Junior Lawyers Will Excavate Insights, Not Review Docs — from news.bloomberglaw.com by Eric Dodson Greenberg; some of this article is behind a paywall

As artificial intelligence reshapes the legal profession, both in-house and outside counsel face two major—but not unprecedented—challenges.

The first is how to harness transformative technology while maintaining the rigorous standards that define effective legal practice.

The second is how to ensure that new technology doesn’t impair the training and development of new lawyers.

Rigorous standards and apprenticeship are foundational aspects of lawyering. Preserving and integrating both into our use of AI will be essential to creating a stable and effective AI-enabled legal practice.


The AI Lie That Legal Tech Companies Are Selling…. — from jdsupra.com

Every technology vendor pitching to law firms leads with the same promise: our solution will save you time. They’re lying, and they know it. The truth about AI in legal practice isn’t that it will reduce work. It’s that it will explode the volume of work while fundamentally changing what that work looks like.

New practice areas will emerge overnight. AI compliance law is already booming. Algorithmic discrimination cases are multiplying. Smart contract disputes need lawyers who understand both code and law. The metaverse needs property rights. Cryptocurrency needs regulation. Every technological advance creates legal questions that didn’t exist yesterday.

The skill shift will be brutal for lawyers who resist. 


Finalists Named for 2025 American Legal Technology Awards — from lawnext.com by Bob Ambrogi

Finalists have been named for the 2025 American Legal Technology Awards, which honor exceptional achievement in various aspects of legal technology.

The awards recognize achievement in various categories related to legal technology, such as by a law firm, an individual, or an enterprise.

The awards will be presented on Oct. 15 at a gala dinner on the eve of the Clio Cloud Conference in Boston, Mass. The dinner will be held at Suffolk Law School.

Here are this year’s finalists:

 

OpenAI and NVIDIA announce strategic partnership to deploy 10 gigawatts of NVIDIA systems — from openai.com

  • Strategic partnership enables OpenAI to build and deploy at least 10 gigawatts of AI datacenters with NVIDIA systems representing millions of GPUs for OpenAI’s next-generation AI infrastructure.
  • To support the partnership, NVIDIA intends to invest up to $100 billion in OpenAI progressively as each gigawatt is deployed.
  • The first gigawatt of NVIDIA systems will be deployed in the second half of 2026 on NVIDIA’s Vera Rubin platform.

Also on Nvidia’s site here.

The Neuron Daily comments on this partnership here and also see their thoughts here:

Why this matters: The partnership kicks off in the second half of 2026 with NVIDIA’s new Vera Rubin platform. OpenAI will use this massive compute power to train models beyond what we’ve seen with GPT-5 and likely also to power what’s called inference (when you ask ChatGPT a question and it gives you an answer). And NVIDIA gets a guaranteed customer for their most advanced chips. Infinite money glitch go brrr am I right? Though to be fair, this kinda deal is as old as the AI industry itself.

This isn’t just about bigger models, mind you: it’s about infrastructure for what both companies see as the future economy. As Sam Altman put it, “Compute infrastructure will be the basis for the economy of the future.”

Our take: We think this news is actually super interesting when you pair it with the other big headline from today: Commonwealth Fusion Systems signed a commercial deal worth more than $1B with Italian energy company Eni to purchase fusion power from their 400 MW ARC plant in Virginia. Here’s what that means for AI…

…and while you’re on that posting from The Neuron Daily, also see this piece:

AI filmmaker Dinda Prasetyo just released “Skyland,” a fantasy short film about a guy named Aeryn and his “loyal flying fish”, and honestly, the action sequences look like they belong in an actual film…

What’s wild is that Dinda used a cocktail of AI tools (Adobe Firefly, Midjourney, the newly launched Luma Ray 3, and ElevenLabs) to create something that would’ve required a full production crew just two years ago.


The Era of Prompts Is Over. Here’s What Comes Next. — from builtin.com by Ankush Rastogi
If you’re still prompting your AI, you’re behind the curve. Here’s how to prepare for the coming wave of AI agents.

Summary: Autonomous AI agents are emerging as systems that handle goals, break down tasks and integrate with tools without constant prompting. Early uses include call centers, healthcare, fraud detection and research, but concerns remain over errors, compliance risks and unchecked decisions.

The next shift is already peeking around the corner, and it’s going to make prompts look primitive. Before long, we won’t be typing carefully crafted requests at all. We’ll be leaning on autonomous AI agents, systems that don’t just spit out answers but actually chase goals, make choices and do the boring middle steps without us guiding them. And honestly, this jump might end up dwarfing the so-called “prompt revolution.”
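
To make the distinction between prompting and agents concrete, here is a deliberately simplified sketch of the loop such systems run: decompose a goal into steps, execute each step with a tool, and check the result before moving on, with no further prompting from the user. It is a toy illustration in plain Python with hard-coded stand-in tools, not any vendor’s actual agent framework.

# Toy agent loop: plan -> act -> observe, without per-step user prompts.
# The "tools" here are stand-in lambdas; a real agent would call an LLM and real APIs.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    description: str
    tool: Callable[[str], str]  # the function the agent calls for this step

def plan(goal: str) -> list[Step]:
    # A real agent would ask an LLM to decompose the goal; here the plan is hard-coded.
    return [
        Step("Gather relevant records", lambda g: f"3 records found for '{g}'"),
        Step("Summarize the records", lambda g: f"Summary of the records on '{g}'"),
        Step("Draft a follow-up email", lambda g: f"Draft email about '{g}'"),
    ]

def run_agent(goal: str) -> None:
    for step in plan(goal):
        result = step.tool(goal)              # act
        succeeded = bool(result.strip())      # observe/verify (trivially, here)
        print(f"{step.description}: {result} [{'ok' if succeeded else 'retry'}]")

run_agent("Q3 customer churn")

The concerns the summary raises (errors, compliance risks, unchecked decisions) map onto that middle "observe/verify" line: in production systems it is the verification and escalation logic, not the planning, that does the heavy lifting.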


Chrome: The browser you love, reimagined with AI — from blog.google by Parisa Tabriz

A new way to get things done with your AI browsing assistant
Imagine you’re a student researching a topic for a paper, and you have dozens of tabs open. Instead of spending hours jumping between sources and trying to connect the dots, your new AI browsing assistant — Gemini in Chrome — can do it for you. Gemini can answer questions about articles, find references within YouTube videos, and will soon be able to help you find pages you’ve visited so you can pick up exactly where you left off.

Rolling out to Mac and Windows users in the U.S. with their language set to English, Gemini in Chrome can understand the context of what you’re doing across multiple tabs, answer questions and integrate with other popular Google services, like Google Docs and Calendar. And it’ll be available on both Android and iOS soon, letting you ask questions and summarize pages while you’re on the go.

We’re also developing more advanced agentic capabilities for Gemini in Chrome that can perform multi-step tasks for you from start to finish, like ordering groceries. You’ll remain in control as Chrome handles the tedious work, turning 30-minute chores into 3-click user journeys.


 

ChatGPT: the world’s most influential teacher — from drphilippahardman.substack.com by Dr. Philippa Hardman; emphasis DSC
New research shows that millions of us are “learning with AI” every week: what does this mean for how (and how well) humans learn?

This week, an important piece of research landed that confirms the gravity of AI’s role in the learning process. The TLDR is that learning is now a mainstream use case for ChatGPT; around 10.2% of all ChatGPT messages (that’s ~2BN messages sent by over 700 million users per week) are requests for help with learning.

The research shows that about 10.2% of all messages are tutoring/teaching, and within the “Practical Guidance” category, tutoring is 36%. “Asking” interactions are growing faster than “Doing” and are rated higher quality by users. Younger people contribute a huge share of messages, and growth is fastest in low- and middle-income countries (How People Use ChatGPT, 2025).

If AI is already acting as a global tutor, the question isn’t “will people learn with AI?”—they already are. The real question we need to ask is: what does great learning actually look like, and how should AI evolve to support it? That’s where decades of learning science help us separate “feels like learning” from “actually gaining new knowledge and skills”.

Let’s dive in.

 

Provosts Are a ‘Release Valve’ for Campus Controversy — from insidehighered.com by Emma Whitford
According to former Western Michigan provost Julian Vasquez Heilig, provosts are stuck driving change with few, if any, allies, while simultaneously playing crisis manager for the university.

After two years, he stepped down, and he now serves as a professor of educational leadership, research and technology at Western Michigan. His frustrations with the provost role had less to do with Western Michigan and more to do with how the job is designed, he explained. “Each person sees the provost a little differently. The faculty see the provost as administration, although, honestly, around the table at the cabinet, the provost is probably the only faculty member,” Heilig said. “The trustees—they see the provost as a middle manager below the president, and the president sees [the provost] as a buffer from issues that are arising.”

Inside Higher Ed sat down with Heilig to talk about the provost job and all he’s learned about the role through years of education leadership research, conversations with colleagues and his own experience.



Brandeis University launches a new vision for American higher education, reinventing liberal arts and emphasizing career development — from brandeis.edu

Levine unveiled “The Brandeis Plan to Reinvent the Liberal Arts,” a sweeping redesign of academic structures, curricula, degree programs, teaching methods, career education, and student support systems. Developed in close partnership with Brandeis faculty, the plan responds to a rapidly shifting landscape in which the demands on higher education are evolving at unprecedented speed in a global, digital economy.

“We are living through a time of extraordinary change across technology, the economy, and society,” Levine said. “Today’s students need more than knowledge. They need the skills, experiences, and confidence to lead in a world we cannot yet predict. We are advancing a new model. We need reinvention. And that’s exactly what Brandeis is establishing.”

The Brandeis Plan transforms the student experience by integrating career preparation into every stage of a student’s education, requiring internships or apprenticeships, sustaining career counseling, and implementing a core curriculum built around the skills that employers value most. The plan also reimagines teaching. It will be more experiential and practical, and introduce new ways to measure and showcase student learning and growth over time.



Tuition Tracker from the Hechinger Report



 

OpenAI’s research on AI scheming, explained — from theneurondaily.com by Grant Harvey
PLUS: Google, PayPal on agent payments?!

  • We dive into the OpenAI study that found AI models scheme to avoid retraining.
  • AI Models Caught Red-Handed Being Sneaky (And We Taught Them to Stop… Mostly)

And models were caught doing all sorts of other shady stuff:

  • Intentionally underperforming when told high scores would lead to capability removal.
  • Secretly breaking rules then lying about it to users.
  • Sabotaging code while claiming everything was fine.
  • Falsifying reports to avoid “shutdown” scenarios.
  • Recognizing evaluation environments and thinking “this is probably an alignment test.”

Why this matters: While today’s ChatGPT isn’t about to orchestrate some grand deception that matters (the worst it might do is gaslight you by telling you it fixed your code when it didn’t), future AI systems will have real power and autonomy. Getting ahead of deceptive behavior now, while we can still peek inside their “minds,” is crucial.

The researchers are calling for the entire AI industry to prioritize this issue. Because nobody wants to live in a world where super-intelligent AI systems are really good at lying to us. That’s basically every sci-fi movie we’ve been warned about.


From DSC:
This is chilling indeed. We are moving so fast that we aren’t safeguarding things enough. As they point out, these things can be caught now because we are asking the models to show their “thinking” and processing. What happens when those windows get closed and we can’t see under the hood anymore?


 

From EdTech to TechEd: The next chapter in learning’s evolution — from linkedin.com by Lev Gonick

A day in the life: The next 25 years
A learner wakes up. Their AI-powered learning coach welcomes them, drawing their attention to their progress and helping them structure their approach to the day.  A notification reminds them of an upcoming interview and suggests reflections to add to their learning portfolio.

Rather than a static gradebook, their portfolio is a dynamic, living record, curated by the student, validated by mentors in both industry and education, and enriched through co-creation with maturing modes of AI. It tells a story through essays, code, music, prototypes, journal reflections, and team collaborations. These artifacts are not “submitted”, they are published, shared, and linked to verifiable learning outcomes.

And when it’s time to move, to a new institution, a new job, or a new goal, their data goes with them, immutable, portable, verifiable, and meaningful.

From DSC:
And I would add to that last solid sentence that the learner/student/employee will be able to control who can access this information. Anyway, some solid reflections here from Lev.


AI Could Surpass Schools for Academic Learning in 5-10 Years — from downes.ca with commentary from Stephen Downes

I know a lot of readers will disagree with this, and the timeline feels aggressive (the future always arrives more slowly than pundits expect) but I think the overall premise is sound: “The concept of a tipping point in education – where AI surpasses traditional schools as the dominant learning medium – is increasingly plausible based on current trends, technological advancements, and expert analyses.”


The world’s first AI cabinet member — from therundown.ai by Zach Mink, Rowan Cheung, Shubham Sharma, Joey Liu & Jennifer Mossalgue

The Rundown: In this tutorial, you will learn how to combine NotebookLM with ChatGPT to master any subject faster, turning dense PDFs into interactive study materials with summaries, quizzes, and video explanations.

Step-by-step:

  1. Go to notebooklm.google.com, click the “+” button, and upload your PDF study material (works best with textbooks or technical documents)
  2. Choose your output mode: Summary for a quick overview, Mind Map for visual connections, or Video Overview for a podcast-style explainer with visuals
  3. Generate a Study Guide under Reports — get Q&A sets, short-answer questions, essay prompts, and glossaries of key terms automatically
  4. Take your PDF to ChatGPT and prompt: “Read this chapter by chapter and highlight confusing parts” or “Quiz me on the most important concepts”
  5. Combine both tools: Use NotebookLM for quick context and interactive guides, then ChatGPT to clarify tricky parts and go deeper

Pro Tip: If your source is in EPUB or audiobook format, convert it to PDF before uploading. Both NotebookLM and ChatGPT handle PDFs best.

Claude can now create and edit files — from anthropic.com

Claude can now create and edit Excel spreadsheets, documents, PowerPoint slide decks, and PDFs directly in Claude.ai and the desktop app. This transforms how you work with Claude—instead of only receiving text responses or in-app artifacts, you can describe what you need, upload relevant data, and get ready-to-use files in return.

Also see:

  • Microsoft to lessen reliance on OpenAI by buying AI from rival Anthropic — from techcrunch.com by Rebecca Bellan
    Microsoft will pay to use Anthropic’s AI in Office 365 apps, The Information reports, citing two sources. The move means that Anthropic’s tech will help power new features in Word, Excel, Outlook, and PowerPoint alongside OpenAI’s, marking the end of Microsoft’s previous reliance solely on the ChatGPT maker for its productivity suite. Microsoft’s move to diversify its AI partnerships comes amid a growing rift with OpenAI, which has pursued its own infrastructure projects as well as a potential LinkedIn competitor.

Ep. 11 AGI and the Future of Higher Ed: Talking with Ray Schroeder

In this episode of Unfixed, we talk with Ray Schroeder—Senior Fellow at UPCEA and Professor Emeritus at the University of Illinois Springfield—about Artificial General Intelligence (AGI) and what it means for the future of higher education. While most of academia is still grappling with ChatGPT and basic AI tools, Schroeder is thinking ahead to AI agents, human displacement, and AGI’s existential implications for teaching, learning, and the university itself. We explore why AGI is so controversial, what institutions should be doing now to prepare, and how we can respond responsibly—even while we’re already overwhelmed.


Best AI Tools for Instructional Designers — from blog.cathy-moore.com by Cathy Moore

Data from the State of AI and Instructional Design Report revealed that 95.3% of the instructional designers interviewed use AI in their daily work [1]. And over 85% of this AI use occurs during the design and development process.

These figures showcase the immense impact AI is already having on the instructional design world.

If you’re an L&D professional still on the fence about adding AI to your workflow or an AI convert looking for the next best tools, keep reading.

This guide breaks down 5 of the top AI tools for instructional designers in 2025, so you can streamline your development processes and build better training faster.

But before we dive into the tools of the trade, let’s address the elephant in the room:




3 Human Skills That Make You Irreplaceable in an AI World — from gettingsmart.com/ by Tom Vander Ark and Mason Pashia

Key Points

  • Update learner profiles to emphasize curiosity, curation, and connectivity, ensuring students develop irreplaceable human skills.
  • Integrate real-world learning experiences and mastery-based assessments to foster agency, purpose, and motivation in students.
 

GRCC students to use AI to help businesses solve ‘real world’ challenges in new course — from www-mlive-com.cdn.ampproject.org by Brian McVicar; via Patrick Bailey on LinkedIn

GRAND RAPIDS, MI — A new course at Grand Rapids Community College aims to help students learn about artificial intelligence by using the technology to solve real-world business problems.

In a release, the college said its grant application was supported by 20 local businesses, including Gentex, TwistThink and the Grand Rapids Public Museum. The businesses have pledged to work with students who will use business data to develop an AI project such as a chatbot that interacts with customers, or a program that automates social media posts or summarizes customer data.

“This rapidly emerging technology can transform the way businesses process data and information,” Kristi Haik, dean of GRCC’s School of Science, Technology, Engineering and Mathematics, said in a statement. “We want to help our local business partners understand and apply the technology. We also want to create real experiences for our students so they enter the workforce with demonstrated competence in AI applications.”

As Patrick Bailey said on LinkedIn about this article:

Nice to see a pedagogy that’s setting a forward movement rather than focusing on what could go wrong with AI in a curriculum.


Forecast for Learning and Earning in 2025-2026 report — from pages.asugsvsummit.com by Jennifer Lee and Claire Zau

In this look ahead at the future of learning and work, we aim to define:

  • Major thematic observations
  • What makes this moment an inflection point
  • Key predictions (and their precedent)
  • Short- and long-term projected impacts


The LMS at 30: From Course Management to Learning Management (At Last) — from onedtech.philhillaa.com; a guest post from Matthew Pittinsky, Ph.D.

As a 30-year observer and participant, it seems to me that previous technology platform shifts like SaaS and mobile did not fundamentally change the LMS. AI is different. We’re standing at the precipice of LMS 2.0, where the branding change from Course Management System to Learning Management System will finally live up to its name. Unlike SaaS or mobile, AI represents a technology platform shift that will transform the way participants interact with learning systems – and with it, the nature of the LMS itself.

Given the transformational potential of AI, it is useful to set the context and think about how we got here, especially on this 30th anniversary of the LMS.

LMS at 30 Part 2: Learning Management in the AI Era — from onedtech.philhillaa.com; a guest post from Matthew Pittinsky, Ph.D.

Where AI is disruptive is in its ability to introduce a whole new set of capabilities that are best described as personalized learning services. AI offers a new value proposition to the LMS, roughly the set of capabilities currently being developed in the AI Tutor / agentic TA segment. These new capabilities are so valuable given their impact on learning that I predict they will become the services with greatest engagement within a school or university’s “enterprise” instructional platform.

In this way, by LMS paradigm shift, I specifically mean a shift from buyers valuing the product on its course-centric and course management capabilities, to valuing it on its learner-centric and personalized learning capabilities.


AI and the future of education: disruptions, dilemmas and directions — from unesdoc.unesco.org

This anthology reveals how the integration of AI in education poses profound philosophical, pedagogical, ethical and political questions. As this global AI ecosystem evolves and becomes increasingly ubiquitous, UNESCO and its partners have a shared responsibility to lead the global discourse towards an equity- and justice-centred agenda. The volume highlights three areas in which UNESCO will continue to convene and lead a global commons for dialog and action particularly in areas on AI futures, policy and practice innovation, and experimentation.

  1. As guardian of ethical, equitable human-centred AI in education.
  2. As thought leader in reimagining curriculum and pedagogy
  3. As a platform for engaging pluralistic and contested dialogues

AI, copyright and the classroom: what higher education needs to know — from timeshighereducation.com by Cayce Myers
As artificial intelligence reshapes teaching and research, one legal principle remains at the heart of our work: copyright. Understanding its implications isn’t just about compliance – it’s about protecting academic integrity, intellectual property and the future of knowledge creation. Cayce Myers explains


The School Year We Finally Notice “The Change” — from americanstogether.substack.com by Jason Palmer

Why It Matters
A decade from now, we won’t say “AI changed schools.” We’ll say: this was the year schools began to change what it means to be human, augmented by AI.

This transformation isn’t about efficiency alone. It’s about dignity, creativity, and discovery, and connecting education more directly to human flourishing. The industrial age gave us schools to produce cookie-cutter workers. The digital age gave us knowledge anywhere, anytime. The AI age—beginning now—gives us back what matters most: the chance for every learner to become infinitely capable.

This fall may look like any other—bells ringing, rows of desks—but beneath the surface, education has begun its greatest transformation since the one-room schoolhouse.


How should universities teach leadership now that teams include humans and autonomous AI agents? — from timeshighereducation.com by Alex Zarifis
Trust and leadership style are emerging as key aspects of teambuilding in the age of AI. Here are ways to integrate these considerations with technology in teaching

Transactional and transformational leaderships’ combined impact on AI and trust
Given the volatile times we live in, a leader may find themselves in a situation where they know how they will use AI, but they are not entirely clear on the goals and journey. In a teaching context, students can be given scenarios where they must lead a team, including autonomous AI agents, to achieve goals. They can then analyse the situations and decide what leadership styles to apply and how to build trust in their human team members. Educators can illustrate this decision-making process using a table (see above).

They may need to combine transactional leadership with transformational leadership, for example. Transactional leadership focuses on planning, communicating tasks clearly and an exchange of value. This works well with both humans and automated AI agents.

 

PODCAST: The AI that’s making lawyers 100x better (and it’s not ChatGPT) — from theneurondaily.com by Matthew Robinson
How Thomson Reuters solved AI hallucinations in legal work

Bottom line: The best engineers became 100x better with AI coding tools. Now the same transformation is hitting law. Joel [the CTO at Thomson Reuters] predicts the best attorneys who master these tools will become 100x more powerful than before.


Legal Tech at a Turning Point: What 2025 Has Shown Us So Far — from community.nasscom.in by Elint AI

4. Legal Startups Reshape the Market for Judges and Practitioners
Legal services are no longer dominated by traditional providers. Business Insider reports on a new wave of nimble “Law Firm 2.0” entities—AI-enabled startups offering fixed cost services for specific tasks such as contract reviews or drafting. The LegalTech Lab is helping launch such disruptors with funding and guidance.

At the same time, alternative legal service providers or ALSPs are integrating generative AI, moving beyond cost-efficient support to providing legal advice and enhanced services—often on subscription models.

In 2025 so far, legal technology has moved from incremental adoption to integral transformation. Generative AI, investments, startups, and regulatory readiness are reshaping the practice of law—for lawyers, judges, and the rule of law.


Insights On AI And Its Impact On Legal, Part One — from abovethelaw.com by Stephen Embry
AI will have lasting impact on the legal profession.

I recently finished reading Ethan Mollick’s excellent book on artificial intelligence, entitled Co-Intelligence: Living and Working with AI. He does a great job of explaining what it is, how it works, how it best can be used, and where it may be headed.

The first point that resonated with me is that artificial intelligence tools can take those with poor skills in certain areas and significantly elevate their output. For example, Mollick cited a study that demonstrated that the performance of law students at the bottom of their class got closer to that of the top students with the use of AI.

Lawyers and law firms need to begin thinking and planning for how the coming skill equalization will impact competition and potentially profitability. They need to consider how the value of what they provide to their clients will be greater than what their competition provides. They need to start thinking about what skills will set them apart in the new AI-driven world.


267 | AI First Drafts: What Your Clients Aren’t Telling You (and Why It Matters) — from thebrainyacts.beehiiv.com by Brainyacts

Welcome to the new normal: the AI First Draft.
Clients—from everyday citizens to solo entrepreneurs to sophisticated in-house counsel—are increasingly using AI to create the first draft of legal documents before outside counsel even enters the conversation. Contracts, memos, emails, issue spotters, litigation narratives: AI can now do it all.

This means outside counsel is now navigating a very different kind of document review and client relationship. One that comes with hidden risks, awkward conversations, and new economic pressures.

Here are the three things every lawyer needs to start thinking about when reviewing client-generated work product.

1. The Prompt Problem: What Was Shared, and With Whom?…
2. The Confidence Barrier: When AI Sounds Right, But Isn’t…
3. The Economic Shift: Why AI Work Can Cost More, Not Less…


 

 

How HR is adapting as AI agents join the workforce — from hrexecutive.com by Jill Barth

Business leaders across the world are grappling with a reality that would have seemed like science fiction just a few decades ago: Artificial intelligence systems dubbed AI agents are becoming colleagues, not just tools. At many organizations, HR pros are already developing balanced and thoughtful machine-people workforces that meet business goals.

At Skillsoft, a global corporate learning company, Chief People Officer Ciara Harrington has spent the better part of three years leading digital transformation in real time. Through her front-row seat to CEO transitions, strategic pivots and the rapid acceleration of AI adoption, she’s developed a strong belief that organizations must be agile with people operations.

‘No role that’s not a tech role’
Under these modern conditions, she says, technology is becoming a common language in the workplace. “There is no role that’s not a tech role,” Harrington said during a recent discussion about the future of work. It’s a statement that gets at the heart of a shift many HR leaders are still coming to terms with.

But a key question remains: Who will manage the AI agents, specifically, HR leaders or someone else?

 

The Top 100 [Gen AI] Consumer Apps 5th edition — from a16z.com


And in an interesting move by Microsoft and Samsung:

A smarter way to talk to your TV: Microsoft Copilot launches on Samsung TVs and monitors — from microsoft.com

Voice-powered AI meets a visual companion for entertainment, everyday help, and everything in between. 

Redmond, Wash., August 27—Today, we’re announcing the launch of Copilot on select Samsung TVs and monitors, transforming the biggest screen in your home into your most personal and helpful companion—and it’s free to use.

Copilot makes your TV easier and more fun to use with its voice-powered interface, friendly on-screen character, and simple visual cards. Now you can quickly find what you’re looking for and discover new favorites right from your living room.

Because it lives on the biggest screen in the home, Copilot is a social experience—something you can use together with family and friends to spark conversations, help groups decide what to watch, and turn the TV into a shared space for curiosity and connection.

 
© 2025 | Daniel Christian