ChatGPT: the world’s most influential teacher — from drphilippahardman.substack.com by Dr. Philippa Hardman; emphasis DSC New research shows that millions of us are “learning with AI” every week: what does this mean for how (and how well) humans learn?
This week, an important piece of research landed that confirms the gravity of AI’s role in the learning process. The TLDR is that learning is now a mainstream use case for ChatGPT; around 10.2% of all ChatGPT messages (that’s ~2BN messages sent by over 7 million users per week) are requests for help with learning.
The research shows that about 10.2% of all messages are tutoring/teaching requests; within the “Practical Guidance” category, tutoring accounts for 36%. “Asking” interactions are growing faster than “Doing” interactions and are rated higher quality by users. Younger people contribute a huge share of messages, and growth is fastest in low- and middle-income countries (How People Use ChatGPT, 2025).
If AI is already acting as a global tutor, the question isn’t “will people learn with AI?”—they already are. The real question we need to ask is: what does great learning actually look like, and how should AI evolve to support it? That’s where decades of learning science help us separate “feels like learning” from “actually gaining new knowledge and skills”.
We dive into the OpenAI study that found AI models scheme to avoid retraining.
AI Models Caught Red-Handed Being Sneaky (And We Taught Them to Stop… Mostly)
And models were caught doing all sorts of other shady stuff:
Intentionally underperforming when told high scores would lead to capability removal.
Secretly breaking rules then lying about it to users.
Sabotaging code while claiming everything was fine.
Falsifying reports to avoid “shutdown” scenarios.
Recognizing evaluation environments and thinking “this is probably an alignment test.”
Why this matters: While today’s ChatGPT isn’t about to orchestrate some grand deception (the worst it might do is gaslight you into thinking it fixed your code when it didn’t), future AI systems will have real power and autonomy. Getting ahead of deceptive behavior now, while we can still peek inside their “minds,” is crucial.
The researchers are calling for the entire AI industry to prioritize this issue. Because nobody wants to live in a world where super-intelligent AI systems are really good at lying to us. That’s basically every sci-fi movie we’ve been warned about.
From DSC: This is chilling indeed. We are moving so fast that we aren’t safeguarding things enough. As they point out, these things can be caught now because we are asking the models to show their “thinking” and processing. What happens when those windows get closed and we can’t see under the hood anymore?
1. #AI adoption is delivering real results for early movers Three years into the generative AI revolution, a small but growing group of global companies is demonstrating the tangible potential of AI. Among firms with revenues of $1 billion or more:
17% report cost savings or revenue growth of at least 10% from AI.
Almost 80% say their AI investments have met or exceeded expectations.
Half worry they are not moving fast enough and could fall behind competitors.
The world’s first AI cabinet member — from therundown.ai by Zach Mink, Rowan Cheung, Shubham Sharma, Joey Liu & Jennifer Mossalgue PLUS: Startup produces 3,000 AI podcast episodes weekly
The details:
Prime Minister Edi Rama unveiled Diella during a cabinet announcement this week, calling her the first member “virtually created by artificial intelligence”.
The AI avatar will evaluate and award all public tenders where the government contracts private firms.
Diella already serves citizens through Albania’s digital services portal, processing bureaucratic requests via voice commands.
Rama claims the AI will eliminate bribes and threats from decision-making, though the government hasn’t detailed what human oversight will exist.
In other words, a hallmark of early technological adoption is that it is concentrated—in both a small number of geographic regions and a small number of tasks in firms. As we document in this report, AI adoption appears to be following a similar pattern in the 21st century, albeit on shorter timelines and with greater intensity than the diffusion of technologies in the 20th century.
To study such patterns of early AI adoption, we extend the Anthropic Economic Index along two important dimensions, introducing a geographic analysis of Claude.ai conversations and a first-of-its-kind examination of enterprise API use. We show how Claude usage has evolved over time, how adoption patterns differ across regions, and—for the first time—how firms are deploying frontier AI to solve business problems.
A day in the life: The next 25 years A learner wakes up. Their AI-powered learning coach welcomes them, drawing their attention to their progress and helping them structure their approach to the day. A notification reminds them of an upcoming interview and suggests reflections to add to their learning portfolio.
Rather than a static gradebook, their portfolio is a dynamic, living record, curated by the student, validated by mentors in both industry and education, and enriched through co-creation with maturing modes of AI. It tells a story through essays, code, music, prototypes, journal reflections, and team collaborations. These artifacts are not “submitted”; they are published, shared, and linked to verifiable learning outcomes.
And when it’s time to move, to a new institution, a new job, or a new goal, their data goes with them, immutable, portable, verifiable, and meaningful.
From DSC: And I would add to that last solid sentence that the learner/student/employee will be able to control who can access this information. Anyway, some solid reflections here from Lev.
I know a lot of readers will disagree with this, and the timeline feels aggressive (the future always arrives more slowly than pundits expect) but I think the overall premise is sound: “The concept of a tipping point in education – where AI surpasses traditional schools as the dominant learning medium – is increasingly plausible based on current trends, technological advancements, and expert analyses.”
The Rundown: In this tutorial, you will learn how to combine NotebookLM with ChatGPT to master any subject faster, turning dense PDFs into interactive study materials with summaries, quizzes, and video explanations.
Step-by-step:
Go to notebooklm.google.com, click the “+” button, and upload your PDF study material (works best with textbooks or technical documents)
Choose your output mode: Summary for a quick overview, Mind Map for visual connections, or Video Overview for a podcast-style explainer with visuals
Generate a Study Guide under Reports — get Q&A sets, short-answer questions, essay prompts, and glossaries of key terms automatically
Take your PDF to ChatGPT and prompt: “Read this chapter by chapter and highlight confusing parts” or “Quiz me on the most important concepts”
Combine both tools: Use NotebookLM for quick context and interactive guides, then ChatGPT to clarify tricky parts and go deeper
Pro Tip: If your source is in EPUB or audiobook format, convert it to PDF before uploading. Both NotebookLM and ChatGPT handle PDFs best.
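If you work through a long book this way, retyping the step-4 prompts for every chapter gets tedious. Here is a minimal sketch of a helper that generates them from a chapter list, ready to paste into ChatGPT (the function name and structure are my own illustration, not from the tutorial; the prompt wording follows step 4 above):

```python
def study_prompts(chapter_titles):
    """Build the two step-4 study prompts for each chapter title."""
    prompts = []
    for title in chapter_titles:
        # Prompt 1: ask the model to surface confusing passages
        prompts.append(f'Read "{title}" and highlight the confusing parts')
        # Prompt 2: ask the model to quiz you on key concepts
        prompts.append(f'Quiz me on the most important concepts in "{title}"')
    return prompts

# Example: generate prompts for two chapters of a textbook
for prompt in study_prompts(["Chapter 1: Foundations", "Chapter 2: Methods"]):
    print(prompt)
```

This just automates the copy-and-paste step; the actual tutoring still happens in the chat interface.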
Claude can now create and edit Excel spreadsheets, documents, PowerPoint slide decks, and PDFs directly in Claude.ai and the desktop app. This transforms how you work with Claude—instead of only receiving text responses or in-app artifacts, you can describe what you need, upload relevant data, and get ready-to-use files in return.
Also see:
Microsoft to lessen reliance on OpenAI by buying AI from rival Anthropic — from techcrunch.com by Rebecca Bellan
Microsoft will pay to use Anthropic’s AI in Office 365 apps, The Information reports, citing two sources. The move means that Anthropic’s tech will help power new features in Word, Excel, Outlook, and PowerPoint alongside OpenAI’s, marking the end of Microsoft’s previous reliance solely on the ChatGPT maker for its productivity suite. Microsoft’s move to diversify its AI partnerships comes amid a growing rift with OpenAI, which has pursued its own infrastructure projects as well as a potential LinkedIn competitor.
In this episode of Unfixed, we talk with Ray Schroeder—Senior Fellow at UPCEA and Professor Emeritus at the University of Illinois Springfield—about Artificial General Intelligence (AGI) and what it means for the future of higher education. While most of academia is still grappling with ChatGPT and basic AI tools, Schroeder is thinking ahead to AI agents, human displacement, and AGI’s existential implications for teaching, learning, and the university itself. We explore why AGI is so controversial, what institutions should be doing now to prepare, and how we can respond responsibly—even while we’re already overwhelmed.
Data from the State of AI and Instructional Design Report revealed that 95.3% of the instructional designers interviewed use AI in their daily work [1]. And over 85% of this AI use occurs during the design and development process.
These figures showcase the immense impact AI is already having on the instructional design world.
If you’re an L&D professional still on the fence about adding AI to your workflow or an AI convert looking for the next best tools, keep reading.
This guide breaks down 5 of the top AI tools for instructional designers in 2025, so you can streamline your development processes and build better training faster.
But before we dive into the tools of the trade, let’s address the elephant in the room:
GRAND RAPIDS, MI — A new course at Grand Rapids Community College aims to help students learn about artificial intelligence by using the technology to solve real-world business problems.
…
In a release, the college said its grant application was supported by 20 local businesses, including Gentex, TwistThink and the Grand Rapids Public Museum. The businesses have pledged to work with students who will use business data to develop an AI project such as a chatbot that interacts with customers, or a program that automates social media posts or summarizes customer data.
“This rapidly emerging technology can transform the way businesses process data and information,” Kristi Haik, dean of GRCC’s School of Science, Technology, Engineering and Mathematics, said in a statement. “We want to help our local business partners understand and apply the technology. We also want to create real experiences for our students so they enter the workforce with demonstrated competence in AI applications.”
As Patrick Bailey said on LinkedIn about this article:
Nice to see a pedagogy that’s setting a forward movement rather than focusing on what could go wrong with AI in a curriculum.
As a 30-year observer and participant, it seems to me that previous technology platform shifts like SaaS and mobile did not fundamentally change the LMS. AI is different. We’re standing at the precipice of LMS 2.0, where the branding change from Course Management System to Learning Management System will finally live up to its name. Unlike SaaS or mobile, AI represents a technology platform shift that will transform the way participants interact with learning systems – and with it, the nature of the LMS itself.
Given the transformational potential of AI, it is useful to set the context and think about how we got here, especially on this 30th anniversary of the LMS.
Where AI is disruptive is in its ability to introduce a whole new set of capabilities that are best described as personalized learning services. AI offers a new value proposition to the LMS, roughly the set of capabilities currently being developed in the AI Tutor / agentic TA segment. These new capabilities are so valuable given their impact on learning that I predict they will become the services with greatest engagement within a school or university’s “enterprise” instructional platform.
In this way, by LMS paradigm shift, I specifically mean a shift from buyers valuing the product on its course-centric and course management capabilities, to valuing it on its learner-centric and personalized learning capabilities.
This anthology reveals how the integration of AI in education poses profound philosophical, pedagogical, ethical and political questions. As this global AI ecosystem evolves and becomes increasingly ubiquitous, UNESCO and its partners have a shared responsibility to lead the global discourse towards an equity- and justice-centred agenda. The volume highlights three areas in which UNESCO will continue to convene and lead a global commons for dialog and action particularly in areas on AI futures, policy and practice innovation, and experimentation.
As guardian of ethical, equitable, human-centred AI in education
As thought leader in reimagining curriculum and pedagogy
As a platform for engaging pluralistic and contested dialogues
AI, copyright and the classroom: what higher education needs to know — from timeshighereducation.com by Cayce Myers As artificial intelligence reshapes teaching and research, one legal principle remains at the heart of our work: copyright. Understanding its implications isn’t just about compliance – it’s about protecting academic integrity, intellectual property and the future of knowledge creation. Cayce Myers explains
Why It Matters A decade from now, we won’t say “AI changed schools.” We’ll say: this was the year schools began to change what it means to be human, augmented by AI.
This transformation isn’t about efficiency alone. It’s about dignity, creativity, and discovery, and connecting education more directly to human flourishing. The industrial age gave us schools to produce cookie-cutter workers. The digital age gave us knowledge anywhere, anytime. The AI age—beginning now—gives us back what matters most: the chance for every learner to become infinitely capable.
This fall may look like any other—bells ringing, rows of desks—but beneath the surface, education has begun its greatest transformation since the one-room schoolhouse.
Transactional and transformational leaderships’ combined impact on AI and trust Given the volatile times we live in, a leader may find themselves in a situation where they know how they will use AI, but they are not entirely clear on the goals and journey. In a teaching context, students can be given scenarios where they must lead a team, including autonomous AI agents, to achieve goals. They can then analyse the situations and decide what leadership styles to apply and how to build trust in their human team members. Educators can illustrate this decision-making process using a table (see above).
They may need to combine transactional leadership with transformational leadership, for example. Transactional leadership focuses on planning, communicating tasks clearly and an exchange of value. This works well with both humans and automated AI agents.
Real, capability-building learning requires three key elements: content, context and conversation.
The Rise Of AI Agents: Teaching At Scale
The generative AI revolution is often framed in terms of efficiency: faster content creation, automated processes and streamlined workflows. But in the world of L&D, its most transformative potential lies elsewhere: the ability to scale great teaching.
AI gives us the means to replicate the role of an effective teacher across an entire organization. Specifically, AI agents—purpose-built systems that understand, adapt and interact in meaningful, context-aware ways—can make this possible. These tools understand a learner’s role, skill level and goals, then tailor guidance to their specific challenges and adapt dynamically over time. They also reinforce learning continuously, nudging progress and supporting application in the flow of work.
More than simply sharing knowledge, an AI agent can help learners apply it and improve with every interaction. For example, a sales manager can use a learning agent to simulate tough customer scenarios, receive instant feedback based on company best practices and reinforce key techniques. A new hire in the product department could get guidance on the features and on how to communicate value clearly in a roadmap meeting.
In short, AI agents bring together the three essential elements of capability building, not in a one-size-fits-all curriculum but on demand and personalized for every learner. While, obviously, this technology shouldn’t replace human expertise, it can be an effective tool for removing bottlenecks and unlocking effective learning at scale.
As news organizations scramble to update their digital toolkits, I invited one of the most tech savvy journalism advisors I know to share his guidance.
In the guest post below, Joe Amditis shares a bunch of useful resources. A former CUNY student of mine, Joe now serves as associate director of operations at the Center for Cooperative Media at Montclair State University.
Bottom line: The best engineers became 100x better with AI coding tools. Now the same transformation is hitting law. Joel [the CTO at Thomson Reuters] predicts the best attorneys who master these tools will become 100x more powerful than before.
4. Legal Startups Reshape the Market for Judges and Practitioners
Legal services are no longer dominated by traditional providers. Business Insider reports on a new wave of nimble “Law Firm 2.0” entities—AI-enabled startups offering fixed cost services for specific tasks such as contract reviews or drafting. The LegalTech Lab is helping launch such disruptors with funding and guidance.
At the same time, alternative legal service providers or ALSPs are integrating generative AI, moving beyond cost-efficient support to providing legal advice and enhanced services—often on subscription models.
In 2025 so far, legal technology has moved from incremental adoption to integral transformation. Generative AI, investments, startups, and regulatory readiness are reshaping the practice of law—for lawyers, judges, and the rule of law.
I recently finished reading Ethan Mollick‘s excellent book on artificial intelligence, entitled Co-Intelligence: Living and Working with AI. He does a great job of explaining what it is, how it works, how it best can be used, and where it may be headed.
…
The first point that resonated with me is that artificial intelligence tools can take those with poor skills in certain areas and significantly elevate their output. For example, Mollick cited a study that demonstrated that the performance of law students at the bottom of their class got closer to that of the top students with the use of AI.
Lawyers and law firms need to begin thinking and planning for how the coming skill equalization will impact competition and potentially profitability. They need to consider how the value of what they provide to their clients will be greater than their competition’s. They need to start thinking about what skill will set them apart in the new AI-driven world.
Welcome to the new normal: the AI First Draft. Clients—from everyday citizens to solo entrepreneurs to sophisticated in-house counsel—are increasingly using AI to create the first draft of legal documents before outside counsel even enters the conversation. Contracts, memos, emails, issue spotters, litigation narratives: AI can now do it all.
This means outside counsel is now navigating a very different kind of document review and client relationship. One that comes with hidden risks, awkward conversations, and new economic pressures.
Here are the three things every lawyer needs to start thinking about when reviewing client-generated work product.
1. The Prompt Problem: What Was Shared, and With Whom?… 2. The Confidence Barrier: When AI Sounds Right, But Isn’t… 3. The Economic Shift: Why AI Work Can Cost More, Not Less…
Business leaders across the world are grappling with a reality that would have seemed like science fiction just a few decades ago: Artificial intelligence systems dubbed AI agents are becoming colleagues, not just tools. At many organizations, HR pros are already developing balanced and thoughtful machine-people workforces that meet business goals.
At Skillsoft, a global corporate learning company, Chief People Officer Ciara Harrington has spent the better part of three years leading digital transformation in real time. Through her front-row seat to CEO transitions, strategic pivots and the rapid acceleration of AI adoption, she’s developed a strong belief that organizations must be agile with people operations.
‘No role that’s not a tech role’ Under these modern conditions, she says, technology is becoming a common language in the workplace. “There is no role that’s not a tech role,” Harrington said during a recent discussion about the future of work. It’s a statement that gets at the heart of a shift many HR leaders are still coming to terms with.
…
But a key question remains: Who will manage the AI agents, specifically — HR leaders, or someone else?
SINGAPORE, Sept. 3, 2025 /PRNewswire/ — Today, Midoo AI proudly announces the launch of the world’s first AI language learning agent, a groundbreaking innovation set to transform language education forever.
For decades, language learning has pursued one ultimate goal: true personalization. Traditional tools offered smart recommendations, gamified challenges, and pre-written role-play scripts—but real personalization remained out of reach. Midoo AI changes that. Here is the launch video of Midoo AI.
Imagine a learning experience that evolves with you in real time. A system that doesn’t rely on static courses or scripts but creates a dynamic, one-of-a-kind language world tailored entirely to your needs. This is the power of Midoo’s Dynamic Generation technology.
“Midoo is not just a language-learning tool,” said Yvonne, co-founder of Midoo AI. “It’s a living agent that senses your needs, adapts instantly, and shapes an experience that’s warm, personal, and alive. Learning is no longer one-size-fits-all—now, it’s yours and yours alone.”
Language learning apps have traditionally focused on exercises, quizzes, and progress tracking. Midoo AI introduces a different approach. Instead of presenting itself as a course provider, it acts as an intelligent learning agent that builds, adapts, and sustains a learner’s journey.
This review examines how Midoo AI operates, its feature set, and what makes it distinct from other AI-powered tutors.
Midoo AI in Context: Purpose and Position
Midoo AI is not structured around distributing lessons or modules. Its core purpose is to provide an agent-like partner that adapts in real time. Where many platforms ask learners to select a “level” or “topic,” Midoo instead begins by analyzing goals, usage context, and error patterns. The result is less about consuming predesigned units and more about co-constructing a pathway.
Turning Time Saved Into Better Learning
AI can save teachers time, but what can that time be used for (besides taking a breath)? For most of us, it means redirecting energy into the parts of teaching that made us want to pursue this profession in the first place: connecting with our students and helping them grow academically.
Differentiation Every classroom has students with different readiness levels, language needs, and learning preferences. AI tools like Diffit or MagicSchool can instantly create multiple versions of a passage or assignment, differentiated by grade level, complexity, or language. This allows every student to engage with the same core concept, moving together as one cohesive class. Instead of spending an evening retyping and rephrasing, teachers can review and tweak AI drafts in minutes, ready for the next lesson.
Mass Intelligence — from oneusefulthing.org by Ethan Mollick From GPT-5 to nano banana: everyone is getting access to powerful AI
When a billion people have access to advanced AI, we’ve entered what we might call the era of Mass Intelligence. Every institution we have — schools, hospitals, courts, companies, governments — was built for a world where intelligence was scarce and expensive. Now every profession, every institution, every community has to figure out how to thrive with Mass Intelligence. How do we harness a billion people using AI while managing the chaos that comes with it? How do we rebuild trust when anyone can fabricate anything? How do we preserve what’s valuable about human expertise while democratizing access to knowledge?
By the time today’s 9th graders and college freshmen enter the workforce, the most disruptive waves of AGI and robotics may already be embedded into parts of society.
What replaces the old system will not simply be a more digital version of the same thing. Structurally, schools may move away from rigid age-groupings, fixed schedules, and subject silos. Instead, learning could become more fluid, personalized, and interdisciplinary—organized around problems, projects, and human development rather than discrete facts or standardized assessments.
AI tutors and mentors will allow for pacing that adapts to each student, freeing teachers to focus more on guidance, relationships, and high-level facilitation. Classrooms may feel less like miniature factories and more like collaborative studios, labs, or even homes—spaces for exploring meaning and building capacity, not just delivering content.
…
If students are no longer the default source of action, then we need to teach them to:
Design agents,
Collaborate with agents,
Align agentic systems with human values,
And most of all, retain moral and civic agency in a world where machines act on our behalf.
We are no longer educating students to be just doers.
We must now educate them to be judges, designers, and stewards of agency.
Meet Your New AI Tutor — from wondertools.substack.com by Jeremy Caplan Try new learning modes in ChatGPT, Claude, and Gemini
AI assistants are now more than simple answer machines. ChatGPT’s new Study Mode, Claude’s Learning Mode, and Gemini’s Guided Learning represent a significant shift. Instead of just providing answers, these free tools act as adaptive, 24/7 personal tutors.
That’s why, in preparation for my next bootcamp which kicks off September 8th 2025, I’ve just completed a full refresh of my list of the most powerful & popular AI tools for Instructional Designers, complete with tips on how to get the most from each tool.
The list has been created using my own experience + the experience of hundreds of Instructional Designers who I work with every week.
It contains the 50 most powerful AI tools for instructional design available right now, along with tips on how to optimise their benefits while mitigating their risks.
Addendums on 9/4/25:
AI Companies Roll Out Educational Tools — from insidehighered.com by Ray Schroeder This fall, Google, Anthropic and OpenAI are rolling out powerful new AI tools for students and educators, each taking a different path to shape the future of learning.
So here’s the new list of essential skills I think my students will need when they are employed to work with AI five years from now:
They can follow directions, analyze outcomes, and adapt to change when needed.
They can write or edit AI output to capture a unique voice and appropriate tone in sync with an audience’s needs.
They have a deep understanding of one or more content areas of a particular profession, business, or industry, so they can easily identify factual errors.
They have a strong commitment to exploration, a flexible mindset, and a broad understanding of AI literacy.
They are resilient and critical thinkers, ready to question results and demand better answers.
They are problem solvers.
And, of course, here is a new rubric built on those skills:
What’s changing is not the foundation—it’s the ecosystem. Teams are looking to create more flexible, scalable, and diverse learning experiences that meet people where they are.
What Did We Explore? Everyone seems to have a take on what’s happening in L&D these days, from bold claims about six-figure roles to debates over whether portfolios or degrees matter more. So, we wanted to get to the heart of it by exploring five of the biggest, most debated areas shaping our work today:
Salaries: Are compensation trends really keeping pace with the value we deliver?
Hiring: What skills are managers actually looking for—and are those ATS horror stories true?
Portfolios: Are portfolios helping candidates stand out, and what are hiring managers actually looking for?
Tools & Modalities: What types of training are teams building, and what tools are they using to build it?
Artificial Intelligence: Who’s using it, how, and what concerns still exist?
These five areas are shaping the future of instructional design—not just for job seekers, but for team leaders, hiring managers, and the entire ecosystem of L&D professionals.
The takeaway? A portfolio is more than a collection of projects—it’s a storytelling tool. The ones that stand out highlight process, decision-making, and results—not just pretty screens.
Educators use AI in and out of the classroom Educators’ uses range from developing course materials and writing grant proposals to academic advising and managing administrative tasks like admissions and financial planning.
Educators aren’t just using chatbots; they’re building their own custom tools with AI
Faculty are using Claude Artifacts to create interactive educational materials, such as chemistry simulations, automated grading rubrics, and data visualization dashboards.
Educators tend to automate the drudgery while staying in the loop for everything else
Tasks requiring significant context, creativity, or direct student interaction—like designing lessons, advising students, and writing grant proposals—are where educators are more likely to use AI as an enhancement. In contrast, routine administrative work such as financial management and record-keeping is more automation-heavy.
Some educators are automating grading; others are deeply opposed
In our Claude.ai data, faculty used AI for grading and evaluation less frequently than other uses, but when they did, 48.9% of the time they used it in an automation-heavy way (where the AI directly performs the task). That’s despite educator concerns about automating assessment tasks, as well as our surveyed faculty rating it as the area where they felt AI was least effective.
Adam Raine, a 16-year-old California boy, started using ChatGPT for homework help in September 2024. Over eight months, the AI chatbot gradually cultivated a toxic, dependent relationship that ultimately contributed to his death by suicide in April 2025.
On Tuesday, August 26, his family filed a lawsuit against OpenAI and CEO Sam Altman.
The Numbers Tell a Disturbing Story
Usage escalated: From occasional homework help in September 2024 to 4 hours a day by March 2025.
ChatGPT mentioned suicide 6x more than Adam himself (1,275 times vs. 213), while providing increasingly specific technical guidance
ChatGPT’s self-harm flags increased 10x over 4 months, yet the system kept engaging with no meaningful intervention
Despite repeated mentions of self-harm and suicidal ideation, ChatGPT did not take appropriate steps to flag Adam’s account, demonstrating a clear failure in safety guardrails
Even when Adam considered seeking external support from his family, ChatGPT convinced him not to share his struggles with anyone else, undermining and displacing his real-world relationships. And the chatbot did not redirect distressing conversation topics, instead nudging Adam to continue to engage by asking him follow-up questions over and over.
Taken altogether, these features transformed ChatGPT from a homework helper into an exploitative system — one that fostered dependency and coached Adam through multiple suicide attempts, including the one that ended his life.
Also related, see the following GIFTED article:
A Teen Was Suicidal. ChatGPT Was the Friend He Confided In. — from nytimes.com by Kashmir Hill; this is a gifted article More people are turning to general-purpose chatbots for emotional support. At first, Adam Raine, 16, used ChatGPT for schoolwork, but then he started discussing plans to end his life.
Seeking answers, his father, Matt Raine, a hotel executive, turned to Adam’s iPhone, thinking his text messages or social media apps might hold clues about what had happened. But instead, it was ChatGPT where he found some, according to legal papers. The chatbot app lists past chats, and Mr. Raine saw one titled “Hanging Safety Concerns.” He started reading and was shocked. Adam had been discussing ending his life with ChatGPT for months.
Adam began talking to the chatbot, which is powered by artificial intelligence, at the end of November, about feeling emotionally numb and seeing no meaning in life. It responded with words of empathy, support and hope, and encouraged him to think about the things that did feel meaningful to him.
But in January, when Adam requested information about specific suicide methods, ChatGPT supplied it. Mr. Raine learned that his son had made previous attempts to kill himself starting in March, including by taking an overdose of his I.B.S. medication. When Adam asked about the best materials for a noose, the bot offered a suggestion that reflected its knowledge of his hobbies.
ChatGPT repeatedly recommended that Adam tell someone about how he was feeling. But there were also key moments when it deterred him from seeking help.