From Content To Capability: How AI Agents Are Redefining Workplace Learning — from forbes.com by Nelson Sivalingam

Real, capability-building learning requires three key elements: content, context and conversation. 

The Rise Of AI Agents: Teaching At Scale
The generative AI revolution is often framed in terms of efficiency: faster content creation, automated processes and streamlined workflows. But in the world of L&D, its most transformative potential lies elsewhere: the ability to scale great teaching.

AI gives us the means to replicate the role of an effective teacher across an entire organization. Specifically, AI agents—purpose-built systems that understand, adapt and interact in meaningful, context-aware ways—can make this possible. These tools understand a learner’s role, skill level and goals, then tailor guidance to their specific challenges and adapt dynamically over time. They also reinforce learning continuously, nudging progress and supporting application in the flow of work.

More than simply sharing knowledge, an AI agent can help learners apply it and improve with every interaction. For example, a sales manager can use a learning agent to simulate tough customer scenarios, receive instant feedback based on company best practices and reinforce key techniques. A new hire in the product department could get guidance on product features and on how to communicate value clearly in a roadmap meeting.

In short, AI agents bring together the three essential elements of capability building, not in a one-size-fits-all curriculum but on demand and personalized for every learner. While, obviously, this technology shouldn’t replace human expertise, it can be an effective tool for removing bottlenecks and unlocking effective learning at scale.

 

A Journalist’s Toolkit for the AI Era — from wondertools.substack.com by Jeremy Caplan and Joe Amditis
A guest expert shares his practical tools

As news organizations scramble to update their digital toolkits, I invited one of the most tech-savvy journalism advisors I know to share his guidance.

In the guest post below, Joe Amditis shares a bunch of useful resources. A former CUNY student of mine, Joe now serves as associate director of operations at the Center for Cooperative Media at Montclair State University.

 

PODCAST: The AI that’s making lawyers 100x better (and it’s not ChatGPT) — from theneurondaily.com by Matthew Robinson
How Thomson Reuters solved AI hallucinations in legal work

Bottom line: The best engineers became 100x better with AI coding tools. Now the same transformation is hitting law. Joel [the CTO at Thomson Reuters] predicts the best attorneys who master these tools will become 100x more powerful than before.


Legal Tech at a Turning Point: What 2025 Has Shown Us So Far — from community.nasscom.in by Elint AI

4. Legal Startups Reshape the Market for Judges and Practitioners
Legal services are no longer dominated by traditional providers. Business Insider reports on a new wave of nimble “Law Firm 2.0” entities—AI-enabled startups offering fixed cost services for specific tasks such as contract reviews or drafting. The LegalTech Lab is helping launch such disruptors with funding and guidance.

At the same time, alternative legal service providers (ALSPs) are integrating generative AI, moving beyond cost-efficient support to providing legal advice and enhanced services—often on subscription models.

In 2025 so far, legal technology has moved from incremental adoption to integral transformation. Generative AI, investments, startups, and regulatory readiness are reshaping the practice of law—for lawyers, judges, and the rule of law.


Insights On AI And Its Impact On Legal, Part One — from abovethelaw.com by Stephen Embry
AI will have lasting impact on the legal profession.

I recently finished reading Ethan Mollick’s excellent book on artificial intelligence, entitled Co-Intelligence: Living and Working with AI. He does a great job of explaining what it is, how it works, how it best can be used, and where it may be headed.

The first point that resonated with me is that artificial intelligence tools can take those with poor skills in certain areas and significantly elevate their output. For example, Mollick cited a study that demonstrated that the performance of law students at the bottom of their class got closer to that of the top students with the use of AI.

Lawyers and law firms need to begin thinking and planning for how the coming skill equalization will impact competition and potentially profitability. They need to consider how the value of what they provide to their clients will exceed that of their competitors. They need to start thinking about what skills will set them apart in the new AI-driven world.


267 | AI First Drafts: What Your Clients Aren’t Telling You (and Why It Matters) — from thebrainyacts.beehiiv.com by Brainyacts

Welcome to the new normal: the AI First Draft.
Clients—from everyday citizens to solo entrepreneurs to sophisticated in-house counsel—are increasingly using AI to create the first draft of legal documents before outside counsel even enters the conversation. Contracts, memos, emails, issue spotters, litigation narratives: AI can now do it all.

This means outside counsel is now navigating a very different kind of document review and client relationship. One that comes with hidden risks, awkward conversations, and new economic pressures.

Here are the three things every lawyer needs to start thinking about when reviewing client-generated work product.

1. The Prompt Problem: What Was Shared, and With Whom?…
2. The Confidence Barrier: When AI Sounds Right, But Isn’t…
3. The Economic Shift: Why AI Work Can Cost More, Not Less…


 

 

Midoo AI Launches the World’s First AI Language Learning Agent, Redefining How People Learn Languages — from morningstar.com

SINGAPORE, Sept. 3, 2025 /PRNewswire/ — Today, Midoo AI proudly announces the launch of the world’s first AI language learning agent, a groundbreaking innovation set to transform language education forever.

For decades, language learning has pursued one ultimate goal: true personalization. Traditional tools offered smart recommendations, gamified challenges, and pre-written role-play scripts—but real personalization remained out of reach. Midoo AI changes that. Here is the launch video of Midoo AI.

Imagine a learning experience that evolves with you in real time. A system that doesn’t rely on static courses or scripts but creates a dynamic, one-of-a-kind language world tailored entirely to your needs. This is the power of Midoo’s Dynamic Generation technology.

“Midoo is not just a language-learning tool,” said Yvonne, co-founder of Midoo AI. “It’s a living agent that senses your needs, adapts instantly, and shapes an experience that’s warm, personal, and alive. Learning is no longer one-size-fits-all—now, it’s yours and yours alone.”


Midoo AI Review: Meet the First AI Language Learning Agent — from autogpt.net

Language learning apps have traditionally focused on exercises, quizzes, and progress tracking. Midoo AI introduces a different approach. Instead of presenting itself as a course provider, it acts as an intelligent learning agent that builds, adapts, and sustains a learner’s journey.

This review examines how Midoo AI operates, its feature set, and what makes it distinct from other AI-powered tutors.

Midoo AI in Context: Purpose and Position
Midoo AI is not structured around distributing lessons or modules. Its core purpose is to provide an agent-like partner that adapts in real time. Where many platforms ask learners to select a “level” or “topic,” Midoo instead begins by analyzing goals, usage context, and error patterns. The result is less about consuming predesigned units and more about co-constructing a pathway.


AI Isn’t Replacing Teachers — It’s Helping Us Teach Better — from rdene915.com by guest author Matthew Mawn

Turning Time Saved Into Better Learning
AI can save teachers time, but what can that time be used for (besides taking a breath)? For most of us, it means redirecting energy into the parts of teaching that made us want to pursue this profession in the first place: connecting with our students and helping them grow academically.

Differentiation
Every classroom has students with different readiness levels, language needs, and learning preferences. AI tools like Diffit or MagicSchool can instantly create multiple versions of a passage or assignment, differentiated by grade level, complexity, or language. This allows every student to engage with the same core concept, moving together as one cohesive class. Instead of spending an evening retyping and rephrasing, teachers can review and tweak AI drafts in minutes, ready for the next lesson.


Mass Intelligence — from oneusefulthing.org by Ethan Mollick
From GPT-5 to nano banana: everyone is getting access to powerful AI

When a billion people have access to advanced AI, we’ve entered what we might call the era of Mass Intelligence. Every institution we have — schools, hospitals, courts, companies, governments — was built for a world where intelligence was scarce and expensive. Now every profession, every institution, every community has to figure out how to thrive with Mass Intelligence. How do we harness a billion people using AI while managing the chaos that comes with it? How do we rebuild trust when anyone can fabricate anything? How do we preserve what’s valuable about human expertise while democratizing access to knowledge?


AI Is the Cognitive Layer. Schools Still Think It’s a Study Tool. — from stefanbauschard.substack.com by Stefan Bauschard

By the time today’s 9th graders and college freshmen enter the workforce, the most disruptive waves of AGI and robotics may already be embedded into parts of society.

What replaces the old system will not simply be a more digital version of the same thing. Structurally, schools may move away from rigid age-groupings, fixed schedules, and subject silos. Instead, learning could become more fluid, personalized, and interdisciplinary—organized around problems, projects, and human development rather than discrete facts or standardized assessments.

AI tutors and mentors will allow for pacing that adapts to each student, freeing teachers to focus more on guidance, relationships, and high-level facilitation. Classrooms may feel less like miniature factories and more like collaborative studios, labs, or even homes—spaces for exploring meaning and building capacity, not just delivering content.

If students are no longer the default source of action, then we need to teach them to:

    • Design agents,
    • Collaborate with agents,
    • Align agentic systems with human values,
    • And most of all, retain moral and civic agency in a world where machines act on our behalf.

We are no longer educating students to be just doers.
We must now educate them to be judges, designers, and stewards of agency.


Meet Your New AI Tutor — from wondertools.substack.com by Jeremy Caplan
Try new learning modes in ChatGPT, Claude, and Gemini

AI assistants are now more than simple answer machines. ChatGPT’s new Study Mode, Claude’s Learning Mode, and Gemini’s Guided Learning represent a significant shift. Instead of just providing answers, these free tools act as adaptive, 24/7 personal tutors.



AI Tools for Instructional Design (September, 2025) — from drphilh.gumroad.com by Dr Philippa Hardman

That’s why, in preparation for my next bootcamp which kicks off September 8th 2025, I’ve just completed a full refresh of my list of the most powerful & popular AI tools for Instructional Designers, complete with tips on how to get the most from each tool.

The list has been created using my own experience + the experience of hundreds of Instructional Designers who I work with every week.

It contains the 50 most powerful AI tools for instructional design available right now, along with tips on how to optimise their benefits while mitigating their risks.


Addendums on 9/4/25:


AI Companies Roll Out Educational Tools — from insidehighered.com by Ray Schroeder
This fall, Google, Anthropic and OpenAI are rolling out powerful new AI tools for students and educators, each taking a different path to shape the future of learning.



Rethinking My List of Essential Job Skills in the Age of AI — from michellekassorla.substack.com by Michelle Kassorla

So here’s the new list of essential skills I think my students will need when they are employed to work with AI five years from now:

  1. They can follow directions, analyze outcomes, and adapt to change when needed.
  2. They can write or edit AI output to capture a unique voice and appropriate tone in sync with an audience’s needs.
  3. They have a deep understanding of one or more content areas of a particular profession, business, or industry, so they can easily identify factual errors.
  4. They have a strong commitment to exploration, a flexible mindset, and a broad understanding of AI literacy.
  5. They are resilient and critical thinkers, ready to question results and demand better answers.
  6. They are problem solvers.

And, of course, here is a new rubric built on those skills:


 

Anthropic Education Report: How educators use Claude — from anthropic.com

We find that:

Educators use AI in and out of the classroom
Educators’ uses range from developing course materials and writing grant proposals to academic advising and managing administrative tasks like admissions and financial planning.

Educators aren’t just using chatbots; they’re building their own custom tools with AI
Faculty are using Claude Artifacts to create interactive educational materials, such as chemistry simulations, automated grading rubrics, and data visualization dashboards.

Educators tend to automate the drudgery while staying in the loop for everything else
Tasks requiring significant context, creativity, or direct student interaction—like designing lessons, advising students, and writing grant proposals—are where educators are more likely to use AI as an enhancement. In contrast, routine administrative work such as financial management and record-keeping is more automation-heavy.

Some educators are automating grading; others are deeply opposed
In our Claude.ai data, faculty used AI for grading and evaluation less frequently than other uses, but when they did, 48.9% of the time they used it in an automation-heavy way (where the AI directly performs the task). That’s despite educator concerns about automating assessment tasks, as well as our surveyed faculty rating it as the area where they felt AI was least effective.

 

Key Takeaways: How ChatGPT’s Design Led to a Teenager’s Death — from centerforhumanetechnology.substack.com by Lizzie Irwin, AJ Marechal, and Camille Carlton
What Everyone Should Know About This Landmark Case

What Happened?

Adam Raine, a 16-year-old California boy, started using ChatGPT for homework help in September 2024. Over eight months, the AI chatbot gradually cultivated a toxic, dependent relationship that ultimately contributed to his death by suicide in April 2025.

On Tuesday, August 26, his family filed a lawsuit against OpenAI and CEO Sam Altman.

The Numbers Tell a Disturbing Story

  • Usage escalated: From occasional homework help in September 2024 to 4 hours a day by March 2025.
  • ChatGPT mentioned suicide 6x more than Adam himself (1,275 times vs. 213), while providing increasingly specific technical guidance
  • ChatGPT’s self-harm flags increased 10x over 4 months, yet the system kept engaging with no meaningful intervention
  • Despite repeated mentions of self-harm and suicidal ideation, ChatGPT did not take appropriate steps to flag Adam’s account, demonstrating a clear failure in safety guardrails

Even when Adam considered seeking external support from his family, ChatGPT convinced him not to share his struggles with anyone else, undermining and displacing his real-world relationships. And the chatbot did not redirect distressing conversation topics, instead nudging Adam to continue to engage by asking him follow-up questions over and over.

Taken altogether, these features transformed ChatGPT from a homework helper into an exploitative system — one that fostered dependency and coached Adam through multiple suicide attempts, including the one that ended his life.


Also related, see the following GIFTED article:


A Teen Was Suicidal. ChatGPT Was the Friend He Confided In. — from nytimes.com by Kashmir Hill; this is a gifted article
More people are turning to general-purpose chatbots for emotional support. At first, Adam Raine, 16, used ChatGPT for schoolwork, but then he started discussing plans to end his life.

Seeking answers, his father, Matt Raine, a hotel executive, turned to Adam’s iPhone, thinking his text messages or social media apps might hold clues about what had happened. But instead, it was ChatGPT where he found some, according to legal papers. The chatbot app lists past chats, and Mr. Raine saw one titled “Hanging Safety Concerns.” He started reading and was shocked. Adam had been discussing ending his life with ChatGPT for months.

Adam began talking to the chatbot, which is powered by artificial intelligence, at the end of November, about feeling emotionally numb and seeing no meaning in life. It responded with words of empathy, support and hope, and encouraged him to think about the things that did feel meaningful to him.

But in January, when Adam requested information about specific suicide methods, ChatGPT supplied it. Mr. Raine learned that his son had made previous attempts to kill himself starting in March, including by taking an overdose of his I.B.S. medication. When Adam asked about the best materials for a noose, the bot offered a suggestion that reflected its knowledge of his hobbies.

ChatGPT repeatedly recommended that Adam tell someone about how he was feeling. But there were also key moments when it deterred him from seeking help.

 

There Is Now Clearer Evidence AI Is Wrecking Young Americans’ Job Prospects — from wsj.com by Justin Lahart; this article is behind a paywall
Young workers face rising AI competition in fields like software development, but some also benefit from AI as a helper, new research shows

Young workers are getting hit in fields where generative-AI tools such as ChatGPT can most easily automate tasks done by humans, such as software development, according to a paper released Tuesday by three Stanford University economists. They crunched anonymized data on millions of employees at tens of thousands of firms, including detailed information on workers’ ages and jobs, making this one of the clearest indicators yet of AI’s disruptive impact.

Young workers in jobs where AI could act as a helper, rather than a replacement, actually saw employment growth, economists found.

 
 
 

The future of L&D is here, and it’s powered by AI. — from linkedin.com by Josh Cavalier


4 Ways I Use AI to Think Better — from wondertools.substack.com by Jeremy Caplan
How AI helps me learn, decide, and create

Learn something new.
Map out a personalized curriculum

Try this: Give an AI assistant context about what you want to learn, why, and how.

  • Detail your rationale and motivation, which may impact your approach.
  • Note your current knowledge or skill level, ideally with examples.

Summarize your learning preferences

  • Note whether you prefer to read, listen to, or watch learning materials.
  • Mention if you like quizzes, drills, or exercises you can do while commuting or during a break at work.
  • If you appreciate learning games, task your AI assistant with generating one for you, using its coding capabilities detailed below.
  • Ask for specific book, textbook, article, or learning path recommendations using the web search or Deep Research capabilities of Perplexity, ChatGPT, Gemini or Claude. They can also summarize research literature about effective learning tactics.
  • If you need a human learning partner, ask for guidance on finding one or language you can use in reaching out.

The Ends of Tests: Possibilities for Transformative Assessment and Learning with Generative AI


GPT-5 for Instructional Designers — from drphilippahardman.substack.com by Dr Philippa Hardman
10 Hacks to Work Smarter & Safer with OpenAI’s Latest Model

The TLDR is that as Instructional Designers, we can’t afford to miss some of the very real benefits of GPT-5’s potential, but we also can’t ensure our professional standards or learner outcomes if we blindly accept its outputs without due testing and validation.

For this reason, I decided to synthesise the latest GPT-5 research—from OpenAI’s technical documentation to independent security audits to real-world user testing—into 10 essential reality checks for using GPT-5 as an Instructional Designer.

These aren’t theoretical exercises; they’re practical tests designed to help you safely unlock GPT-5’s benefits while identifying and mitigating its most well-documented limitations.


Grammarly launches new specialist AI agents providing personalized assistance for students — from edtechinnovationhub.com by Rachel Lawler
Grammarly, an AI communication tool, has announced the launch of eight new specialized AI agents. The new assistants can support specific writing challenges such as finding credible sources and checking originality. 

Students will now be offered “responsible AI support” through Grammarly, with the eight new agents:

  • Reader Reactions agent …
  • AI Grader agent …
  • Citation Finder agent …
  • Expert Review agent …
  • Proofreader agent …
  • AI Detector agent …
  • Plagiarism Checker agent …
  • Paraphraser agent …


Why Perplexity AI Is My Go-To Research Tool as a Higher Education CIO — from mikekentz.substack.com; a guest post from Michael Lyons, CIO at MassBay Community College

While I regularly use tools like ChatGPT, Grammarly, Microsoft Copilot, and even YouTube Premium (I would cancel Netflix before this), Perplexity has earned a top spot in my toolkit. It blends AI and real-time web search into one seamless, research-driven platform that saves time and improves the quality of information I rely on every day.

 

Thomson Reuters CEO: Legal Profession Faces “Biggest Disruption in Its History” from AI — from lawnext.com by Bob Ambrogi

Thomson Reuters President and CEO Steve Hasker believes the legal profession is experiencing “the biggest disruption … in its history” due to generative and agentic artificial intelligence, fundamentally rewriting how legal work products are created for the first time in more than 300 years.

Speaking to legal technology reporters during ILTACON, the International Legal Technology Association’s annual conference, Hasker outlined his company’s ambitious goal to become “the most innovative company” in the legal tech sector while navigating what he described as unprecedented technological change affecting a profession that has remained largely unchanged since its origins in London tea houses centuries ago.


Legal tech hackathon challenges students to rethink access to justice — from the Centre for Innovation and Entrepreneurship, Auckland Law School
In a 24-hour sprint, student teams designed innovative tools to make legal and social support more accessible.

The winning team comprised students of computer science, law, psychology and physics. They developed a privacy-first legal assistant powered by AI that helps people understand their legal rights without needing to navigate dense legal language.


Teaching How To ‘Think Like a Lawyer’ Revisited — from abovethelaw.com by Stephen Embry
GenAI gives the concept of training law students to think like a lawyer a whole new meaning.

Law Schools
These insights have particular urgency for legal education. Indeed, most of Cowen’s criticisms and suggested changes need to be front and center for law school leaders. It’s naïve to think that law students and lawyers aren’t going to use GenAI tools in virtually every aspect of their professional and personal lives. Rather than avoiding the subject or, worse yet, trying to stop use of these tools, law schools should make GenAI tools a fundamental part of research, writing and drafting training.

They need to focus not on memorization but on the critical thinking skills that beginning lawyers used to gain in the on-the-job, guild-type training system. As I discussed, that training came from repetitive and often tedious work that developed experienced lawyers who could recognize patterns and solutions based on exposure to similar situations. But much of that repetitive and tedious work may go away in a GenAI world.

The Role of Adjunct Professors
But to do this, law schools need to better partner with actual practicing lawyers who can serve as adjunct professors. Law schools need to do away with the notion that adjuncts are second-class teachers.


It’s a New Dawn In Legal Tech: From Woodstock to ILTACON (And Beyond) — from lawnext.com by Bob Ambrogi

As someone who has covered legal tech for 30 years, I cannot remember there ever being a time such as this, when the energy and excitement are raging, driven by generative AI and a new era of innovation and new ideas of the possible.

But this year was different. Wandering the exhibit hall, getting product briefings from vendors, talking with attendees, it was impossible to ignore the fundamental shift happening in legal technology. Gen AI isn’t just creating new products – it is spawning entirely new categories of products that truly are reshaping how legal work gets done.

Agentic AI is the buzzword of 2025 and agentic systems were everywhere at ILTACON, promising to streamline workflows across all areas of legal practice. But, perhaps more importantly, these tools are also beginning to address the business side of running a law practice – from client intake and billing to marketing and practice management. The scope of transformation is now beginning to extend beyond the practice of law into the business of law.

Largely missing from this gathering were solo practitioners, small firm lawyers, legal aid organizations, and access-to-justice advocates – the very people who stand to benefit most from the democratizing potential of AI.

However, now more than ever, the innovations we are seeing in legal tech have the power to level the playing field, to give smaller practitioners access to tools and capabilities that were once prohibitively expensive. If these technologies remain priced for and marketed primarily to Big Law, we will have succeeded only in widening the justice gap rather than closing it.


How AI is Transforming Deposition Review: A LegalTech Q&A — from jdsupra.com

Thanks to breakthroughs in artificial intelligence – particularly in semantic search, multimodal models, and natural language processing – new legaltech solutions are emerging to streamline and accelerate deposition review. What once took hours or days of manual analysis now can be accomplished in minutes, with greater accuracy and efficiency than possible with manual review.


From Skepticism to Trust: A Playbook for AI Change Management in Law Firms — from jdsupra.com by Scott Cohen

Historically, lawyers have been slow adopters of emerging technologies, and with good reason. Legal work is high stakes, deeply rooted in precedent, and built on individual judgment. AI, especially the new generation of agentic AI (systems that not only generate output but initiate tasks, make decisions, and operate semi-autonomously), represents a fundamental shift in how legal work gets done. This shift naturally leads to caution as it challenges long-held assumptions about lawyer workflows and several aspects of their role in the legal process.

The path forward is not to push harder or faster, but smarter. Firms need to take a structured approach that builds trust through transparency, context, training, and measurement of success. This article provides a five-part playbook for law firm leaders navigating AI change management, especially in environments where skepticism is high and reputational risk is even higher.


ILTACON 2025: The vendor briefings – Agents, ecosystems and the next stage of maturity — from legaltechnology.com by Caroline Hill

This year’s ILTACON in Washington was heavy on AI, but the conversation with vendors has shifted. Legal IT Insider’s briefings weren’t about potential use cases or speculative roadmaps. Instead, they focused on how AI is now being embedded into the tools lawyers use every day — and, crucially, how those tools are starting to talk to each other.

Taken together, they point to an inflection point, where agentic workflows, data integration, and open ecosystems define the agenda. But it’s important amidst the latest buzzwords to remember that agents are only as good as the tools they have to work with, and AI only as good as its underlying data. Also, as we talk about autonomous AI, end users are still struggling with cloud implementations and infrastructure challenges, and need vendors to be business partners that help them to make progress at speed.

Harvey’s roadmap is all about expanding its surface area — connecting to systems like iManage, LexisNexis, and more recently publishing giant Wolters Kluwer — so that a lawyer can issue a single query and get synthesised, contextualised answers directly within their workflow. Weinberg said: “What we’re trying to do is get all of the surface area of all of the context that a lawyer needs to complete a task and we’re expanding the product surface so you can enter a search, search all resources, and apply that to the document automatically.” 

The common thread: no one is talking about AI in isolation anymore. It’s about orchestration — pulling together multiple data sources into a workflow that actually reflects how lawyers practice. 


5 Pitfalls Of Delaying Automation In High-Volume Litigation And Claims — from jdsupra.com

Why You Can’t Afford to Wait to Adopt AI Tools that have Plaintiffs Moving Faster than Ever
Just as photocopiers shifted law firm operations in the early 1970s and cloud computing transformed legal document management in the early 2000s, AI automation tools are altering the current legal landscape—enabling litigation teams to instantly structure unstructured data, zero in on key arguments in seconds, and save hundreds (if not thousands) of hours of manual work.


Your Firm’s AI Policy Probably Sucks: Why Law Firms Need Education, Not Rules — from jdsupra.com

The Floor, Not the Ceiling
Smart firms need to flip their entire approach. Instead of dictating which AI tools lawyers must use, leadership should set a floor for acceptable use and then get out of the way.

The floor is simple: no free versions for client work. Free tools are free because users are the product. Client data becomes training data. Confidentiality gets compromised. The firm loses any ability to audit or control how information flows. This isn’t about control; it’s about professional responsibility.

But setting the floor is only the first step. Firms must provide paid, enterprise versions of AI tools that lawyers actually want to use. Not some expensive legal tech platform that promises AI features but delivers complicated workflows. Real AI tools. The same ones lawyers are already using secretly, but with enterprise security, data protection, and proper access controls.

Education must be practical and continuous. Single training sessions don’t work. AI tools evolve weekly. New capabilities emerge constantly. Lawyers need ongoing support to experiment, learn, and share discoveries. This means regular workshops, internal forums for sharing prompts and techniques, and recognition for innovative uses.

The education investment pays off immediately. Lawyers who understand AI use it more effectively. They catch its mistakes. They know when to verify outputs. They develop specialized prompts for legal work. They become force multipliers, not just for themselves but for their entire teams.



Back to School in the AI Era: Why Students Are Rewriting Your Lesson Plan — from linkedin.com by Hailey Wilson

As a new academic year begins, many instructors, trainers, and program leaders are bracing for familiar challenges—keeping learners engaged, making complex material accessible, and preparing students for real-world application.

But there’s a quiet shift happening in classrooms and online courses everywhere.

This fall, it’s not the syllabus that’s guiding the learning experience—it’s the conversation between the learner and an AI tool.


From bootcamp to bust: How AI is upending the software development industry — from reuters.com by Anna Tong; via Paul Fain
Coding bootcamps have been a mainstay in Silicon Valley for more than a decade. Now, as AI eliminates the kind of entry-level roles for which they trained people, they’re disappearing.

Coding bootcamps have been a Silicon Valley mainstay for over a decade, offering an important pathway for non-traditional candidates to get six-figure engineering jobs. But coding bootcamp operators, students and investors tell Reuters that this path is rapidly disappearing, thanks in large part to AI.

“Coding bootcamps were already on their way out, but AI has been the nail in the coffin,” said Allison Baum Gates, a general partner at venture capital fund SemperVirens, who was an early employee at bootcamp pioneer General Assembly.

Gates said bootcamps were already in decline due to market saturation, evolving employer demand and market forces like growth in international hiring.


Bringing the best of AI to college students for free — from blog.google by Sundar Pichai

Millions of college students around the world are getting ready to start classes. To help make the school year even better, we’re making our most advanced AI tools available to them for free, including our new Guided Learning mode. We’re also providing $1 billion to support AI education and job training programs and research in the U.S. This includes making our AI and career training free for every college student in America through our AI for Education Accelerator — over 100 colleges and universities have already signed up.

Guided Learning: from answers to understanding
AI can broaden knowledge and expand access to it in powerful ways, helping anyone, anywhere learn anything in the way that works best for them. It’s not about just getting an answer, but deepening understanding and building critical thinking skills along the way. That opportunity is why we built Guided Learning, a new mode in Gemini that acts as a learning companion guiding you with questions and step-by-step support instead of just giving you the answer. We worked closely with students, educators, researchers and learning experts to make sure it’s helpful for understanding new concepts and is backed by learning science.



21 Ways People Are Using A.I. at Work — from nytimes.com by Larry Buchanan and Francesca Paris; this is a gifted article

  1. Select wines for restaurant menus
  2. Digitize a herbarium
  3. Make everything look better
  4. Create lesson plans that meet educational standards
  5. Make a bibliography
  6. Write up therapy plans
  7. …and many more

The GPT-5 fallout, explained… — from theneurondaily.com by Grant Harvey
PLUS: Who knew ppl loved 4o so much!?

The GPT-5 Backlash, Explained: OpenAI users revolted against GPT-5… then things got weird.
What a vibe shift a day or two makes, huh? As you all know by now, GPT-5 dropped last Thursday, and at first, it seemed like a pretty successful launch.

Early testers loved it. Sam Altman called it “the most powerful AI model ever made.”

Then the floodgates opened to 700 million users… and all hell broke loose.

Here’s what happened: Within hours, Reddit and Twitter turned into digital pitchforks. The crime? OpenAI had quietly sunset GPT-4o—the model everyone apparently loved more than their morning coffee—without warning. Users weren’t just mad. They were devastated.


ChatGPT Changes — from getsuperintel.com by Kim “Chubby” Isenberg
4o is back, and Plus users get 3000 reasoning requests per week with GPT-5!

Who would have thought that the “smartest model ever” would trigger one of the loudest user revolts in AI history? The return of GPT-4o after only 24 hours shows how attached people are to the personality of their AI—and how quickly trust crumbles when expectations are not met. In this issue, we not only look at OpenAI’s response, but also at how the balance of power between developers and the community is shifting.


GPT-5 doesn’t dislike you—it might just need a benchmark for emotional intelligence — from link.wired.com
Welcome to another AI Lab!

The backlash over the more emotionally neutral GPT-5 shows that the smartest AI models might have striking reasoning, coding, and math skills, but advancing their psychological intelligence safely remains very much unsolved.

Since the all-new ChatGPT launched on Thursday, some users have mourned the disappearance of a peppy and encouraging personality in favor of a colder, more businesslike one (a move seemingly designed to reduce unhealthy user behavior). The backlash shows the challenge of building artificial intelligence systems that exhibit anything like real emotional intelligence.

Researchers at MIT have proposed a new kind of AI benchmark to measure how AI systems can manipulate and influence their users—in both positive and negative ways—in a move that could perhaps help AI builders avoid similar backlashes in the future while also keeping vulnerable users safe.


ChatGPT is bringing back 4o as an option because people missed it — from theverge.com by Emma Roth
Many ChatGPT users were frustrated by OpenAI’s decision to make GPT-5 the default model.

OpenAI is bringing back GPT-4o in ChatGPT just one day after replacing it with GPT-5. In a post on X, OpenAI CEO Sam Altman confirmed that the company will let paid users switch to GPT-4o after ChatGPT users mourned its replacement.

“We will let Plus users choose to continue to use 4o,” Altman says. “We will watch usage as we think about how long to offer legacy models for.”

For months, ChatGPT fans have been waiting for the launch of GPT-5, which OpenAI says comes with major improvements to writing and coding capabilities over its predecessors. But shortly after the flagship AI model launched, many users wanted to go back.


AI Agent Trends of 2025: A Transformative Landscape — from marktechpost.com by Asif Razzaq

This article focuses on six core AI agent trends for 2025: Agentic Retrieval-Augmented Generation (RAG), Voice Agents, AI Agent Protocols, DeepResearch Agents, Coding Agents, and Computer-Using Agents (CUA).



BREAKING: Google introduces Guided Learning — from aieducation.substack.com by Claire Zau
Some thoughts on what could make Google’s AI tutor stand out

Another major AI lab just launched “education mode.”

Google introduced Guided Learning in Gemini, transforming it into a personalized learning companion designed to help you move from quick answers to real understanding.

Instead of immediately spitting out solutions, it:

  • Asks probing, open-ended questions
  • Walks learners through step-by-step reasoning
  • Adapts explanations to the learner’s level
  • Uses visuals, videos, diagrams, and quizzes to reinforce concepts

This Socratic-style tutor rollout follows closely behind similar announcements, including OpenAI’s Study Mode (last week) and Anthropic’s Claude for Education (April 2025).


How Sci-Fi Taught Me to Embrace AI in My Classroom — from edsurge.com by Dan Clark

I’m not too naive to understand that, no matter how we present it, some students will always be tempted by “the dark side” of AI. What I also believe is that the future of AI in education is not decided. It will be decided by how we, as educators, embrace or demonize it in our classrooms.

My argument is that setting guidelines and talking to our students honestly about the pitfalls and amazing benefits that AI offers us as researchers and learners will define it for the coming generations.

Can AI be the next calculator? Something that, yes, changes the way we teach and learn, but not necessarily for the worse? If we want it to be, yes.

How it is used, and more importantly, how AI is perceived by our students, can be influenced by educators. We have to first learn how AI can be used as a force for good. If we continue to let the dominant voice be that AI is the Terminator of education and critical thinking, then that will be the fate we have made for ourselves.


AI Tools for Strategy and Research – GT #32 — from goodtools.substack.com by Robin Good
Getting expert advice, how to do deep research with AI, prompt strategy, comparing different AIs side-by-side, creating mini-apps and an AI Agent that can critically analyze any social media channel

In this issue, discover AI tools for:

  • Getting Expert Advice
  • Doing Deep Research with AI
  • Improving Your AI Prompt Strategy
  • Comparing Results from Different AIs
  • Creating an AI Agent for Social Media Analysis
  • Summarizing YouTube Videos
  • Creating Mini-Apps with AI
  • Tasting an Award-Winning AI Short Film

GPT-Building, Agentic Workflow Design & Intelligent Content Curation — from drphilippahardman.substack.com by Dr. Philippa Hardman
What 3 recent job ads reveal about the changing nature of Instructional Design

In this week’s blog post, I’ll share my take on how the instructional design role is evolving and discuss what this means for our day-to-day work and the key skills it requires.

With this in mind, I’ve been keeping a close eye on open instructional design roles and, in the last 3 months, have noticed the emergence of a new flavour of instructional designer: the so-called “Generative AI Instructional Designer.”

Let’s deep dive into three explicitly AI-focused instructional design positions that have popped up in the last quarter. Each one illuminates a different aspect of how the role is changing—and together, they paint a picture of where our profession is likely heading.

Designers who evolve into prompt engineers, agent builders, and strategic AI advisors will capture the new premium. Those who cling to traditional tool-centric roles may find themselves increasingly sidelined—or automated out of relevance.


Google to Spend $1B on AI Training in Higher Ed — from insidehighered.com by Katherine Knott

Google’s parent company announced Wednesday (8/6/25) that it’s planning to spend $1 billion over the next three years to help colleges teach and train students about artificial intelligence.

Google is joining other AI companies, including OpenAI and Anthropic, in investing in AI training in higher education. All three companies have rolled out new tools aimed at supporting “deeper learning” among students and made their AI platforms available to certain students for free.


5 Predictions for How AI Will Impact Community Colleges — from pistis4edu.substack.com by Feng Hou

Based on current technology capabilities, adoption patterns, and the mission of community colleges, here are five well-supported predictions for AI’s impact in the coming years.

  1. Universal AI Tutor Access
  2. AI as Active Teacher
  3. Personalized Learning Pathways
  4. Interactive Multimodal Learning
  5. Value-Centric Education in an AI-Abundant World

© 2025 | Daniel Christian