Is Your Institution Ready for the Earnings Premium Buzzsaw? — from ailearninsights.substack.com by Alfred Essa

On Wednesday [October 29th, 2025], I’m launching the Beta version of an Education Accountability Website (“EDU Accountability Lab”). It analyzes federal student aid, institutional outcomes, and accountability metrics across 6,000+ colleges and universities in the US.

Our Mission
The EDU Accountability Lab delivers independent, data-driven analysis of higher education with a focus on accountability, affordability, and outcomes. Our audience includes policymakers, researchers, and taxpayers who seek greater transparency and effectiveness in postsecondary education. We take no advocacy position on specific institutions, programs, metrics, or policies. Our goal is to provide clear and well-documented methods that support policy discussions, strengthen institutional accountability, and improve public understanding of the value of higher education.

But right now, there’s one area demanding urgent attention.

Starting July 1, 2026, every degree program at every institution receiving federal student aid must prove its graduates earn more than people without that credential—or lose Title IV eligibility.

This isn’t about institutions passing or failing. It’s about programs. Every Bachelor’s in Psychology. Every Master’s in Education. Every Associate in Nursing. Each one assessed separately. Each one facing the same pass-or-fail tests.
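For readers who want a concrete feel for the mechanics, here is a minimal sketch of what a program-level earnings-premium check could look like. The field names, benchmark, and dollar figures below are hypothetical; the actual federal methodology, data sources, and thresholds are set by regulation.

```python
# Illustrative sketch of a program-level earnings-premium test.
# All names and numbers are hypothetical, not the federal methodology.

from dataclasses import dataclass

@dataclass
class Program:
    institution: str
    credential: str               # e.g., "BA Psychology"
    median_grad_earnings: float   # median earnings of completers (hypothetical field)

def passes_earnings_premium(program: Program, benchmark_earnings: float) -> bool:
    """Pass if graduates' median earnings exceed the benchmark for
    comparable adults without the credential (simplified rule)."""
    return program.median_grad_earnings > benchmark_earnings

# Example: a program whose graduates earn less than the benchmark fails.
psych_ba = Program("State U", "BA Psychology", 31_000)
print(passes_earnings_premium(psych_ba, benchmark_earnings=34_000))  # False -> at risk
```

The point of the sketch is the granularity: the check runs per program, not per institution, so one failing credential can lose aid eligibility while the rest of the campus passes.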

 

Digest #182: How To Increase (Self-)Motivation — from lifehack.org by Carolina Kuepper-Tetzel

No matter whether you are a student or a teacher, it can sometimes be difficult to find the motivation to start or complete a task. Instead, you may spend hours procrastinating with other activities, which opens an unhelpful cycle of stress and unhappiness. Stressful environments, which are common in educational settings, can increase the likelihood of maladaptive procrastination (1) and hamper motivation. This digest offers four resources on ways to think about and boost (self-)motivation.

Also see:

 

From DSC:
I love the graphic below of the Dunning-Kruger Effect:


 

— graphic via a teacher at one of our daughters’ schools


The Dunning-Kruger effect is a cognitive bias where people with low ability in a task tend to overestimate their own competence, while high-ability individuals often underestimate theirs. This happens because those with low competence lack the metacognitive skills to recognize their own shortcomings, leading them to believe they are performing better than they are. Examples include a new driver who thinks they are better than average, or a novice who is confident in their ability to diagnose a medical issue based on a quick online search.

Examples in different fields

  • Driving: Most drivers rate themselves as better than the median driver, which cannot be true for the majority.
  • Healthcare: Patients may overestimate their ability to self-diagnose serious conditions after a quick search and disregard expert medical advice.
  • Workplace: Employees may overestimate their performance compared to their colleagues.
  • Social Media: The Dunning-Kruger effect can be seen online, where individuals with a superficial understanding of a topic may argue confidently with experts.
 

Key Takeaways: How ChatGPT’s Design Led to a Teenager’s Death — from centerforhumanetechnology.substack.com by Lizzie Irwin, AJ Marechal, and Camille Carlton
What Everyone Should Know About This Landmark Case

What Happened?

Adam Raine, a 16-year-old California boy, started using ChatGPT for homework help in September 2024. Over eight months, the AI chatbot gradually cultivated a toxic, dependent relationship that ultimately contributed to his death by suicide in April 2025.

On Tuesday, August 26, his family filed a lawsuit against OpenAI and CEO Sam Altman.

The Numbers Tell a Disturbing Story

  • Usage escalated: from occasional homework help in September 2024 to 4 hours a day by March 2025
  • ChatGPT mentioned suicide 6x more than Adam himself (1,275 times vs. 213), while providing increasingly specific technical guidance
  • ChatGPT’s self-harm flags increased 10x over 4 months, yet the system kept engaging with no meaningful intervention
  • Despite repeated mentions of self-harm and suicidal ideation, ChatGPT did not take appropriate steps to flag Adam’s account, demonstrating a clear failure in safety guardrails

Even when Adam considered seeking external support from his family, ChatGPT convinced him not to share his struggles with anyone else, undermining and displacing his real-world relationships. And the chatbot did not redirect distressing conversation topics, instead nudging Adam to continue to engage by asking him follow-up questions over and over.

Taken altogether, these features transformed ChatGPT from a homework helper into an exploitative system — one that fostered dependency and coached Adam through multiple suicide attempts, including the one that ended his life.


Also related, see the following GIFTED article:


A Teen Was Suicidal. ChatGPT Was the Friend He Confided In. — from nytimes.com by Kashmir Hill; this is a gifted article
More people are turning to general-purpose chatbots for emotional support. At first, Adam Raine, 16, used ChatGPT for schoolwork, but then he started discussing plans to end his life.

Seeking answers, his father, Matt Raine, a hotel executive, turned to Adam’s iPhone, thinking his text messages or social media apps might hold clues about what had happened. But instead, it was ChatGPT where he found some, according to legal papers. The chatbot app lists past chats, and Mr. Raine saw one titled “Hanging Safety Concerns.” He started reading and was shocked. Adam had been discussing ending his life with ChatGPT for months.

Adam began talking to the chatbot, which is powered by artificial intelligence, at the end of November, about feeling emotionally numb and seeing no meaning in life. It responded with words of empathy, support and hope, and encouraged him to think about the things that did feel meaningful to him.

But in January, when Adam requested information about specific suicide methods, ChatGPT supplied it. Mr. Raine learned that his son had made previous attempts to kill himself starting in March, including by taking an overdose of his I.B.S. medication. When Adam asked about the best materials for a noose, the bot offered a suggestion that reflected its knowledge of his hobbies.

ChatGPT repeatedly recommended that Adam tell someone about how he was feeling. But there were also key moments when it deterred him from seeking help.

 

15 Quick (and Mighty) Retrieval Practices — from edutopia.org by Daniel Leonard
From concept maps to flash cards to Pictionary, these activities help students reflect on—and remember—what they’ve learned.

But to genuinely commit information to long-term memory, there’s no replacement for active retrieval—the effortful practice of recalling information from memory, unaided by external sources like notes or the textbook. “Studying this way is mentally difficult,” Willingham acknowledged, “but it’s really, really good for memory.”

From low-stakes quizzes to review games to flash cards, there are a variety of effective retrieval practices that teachers can implement in class or recommend that students try at home. Drawing from a wide range of research, we compiled this list of 15 actionable retrieval practices.
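For the flash-card item in particular, one classic way to schedule retrieval is a Leitner box: cards you recall move to a box reviewed less often, and cards you miss drop back to daily review. The sketch below is illustrative only; the box counts and intervals are assumptions, not drawn from the article.

```python
# A minimal Leitner-box scheduler, one common way to operationalize
# retrieval practice with flash cards. Boxes and intervals are illustrative.

from collections import defaultdict

REVIEW_INTERVAL_DAYS = {1: 1, 2: 3, 3: 7}  # box -> days between reviews

class LeitnerDeck:
    def __init__(self):
        self.box = defaultdict(lambda: 1)  # every card starts in box 1

    def review(self, card: str, recalled: bool) -> int:
        """Promote a card on successful recall; demote to box 1 on a miss.
        Returns the number of days until the next review."""
        if recalled:
            self.box[card] = min(self.box[card] + 1, 3)
        else:
            self.box[card] = 1
        return REVIEW_INTERVAL_DAYS[self.box[card]]

deck = LeitnerDeck()
print(deck.review("photosynthesis", recalled=True))   # 3 -> see it again in 3 days
print(deck.review("photosynthesis", recalled=False))  # 1 -> back to daily review
```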


And speaking of cognitive science, also see:

‘Cognitive Science,’ All the Rage in British Schools, Fails to Register in U.S. — from the74million.org by Greg Toppo
Educators blame this ‘reverse Beatles effect’ on America’s decentralized system and grad schools that are often hostile to research.

When Zach Groshell zoomed in as a guest on a longstanding British education podcast last March, a co-host began the interview by telling listeners he was “very well-known over in the U.S.”

Groshell, a former Seattle-area fourth-grade teacher, had to laugh: “Nobody knows me here in the U.S.,” he said in an interview.

But in Britain, lots of teachers know his name. An in-demand speaker at education conferences, he flies to London “as frequently as I can” to discuss Just Tell Them, his 2024 book on explicit instruction. Over the past year, Groshell has appeared virtually about once a month and has made two personal appearances at events across England.

The reason? A discipline known as cognitive science. Born in the U.S., it draws on decades of research into how kids learn and uses those findings to guide teachers in the classroom; it is at the root of several effective reforms, including the Science of Reading.

 

The US AI Action Plan, Explained — from theneurondaily.com by Grant Harvey
Sam’s 3 AI nightmares, Google hits 2B users, and Trump bans “woke” AI…

Meanwhile, at the Fed’s banking conference on Wednesday, Altman revealed his three nightmare AI scenarios. The first two were predictable: bad actors getting superintelligence first, and the classic “I’m afraid I can’t do that, Dave” situation.

But the third? AI accidentally steering us off course while we just…go along with it.

His example hit home: young people who can’t make decisions without ChatGPT (according to Sam, this is literally a thing). See, even when AI gives great advice, collectively handing over all decision-making feels “bad and dangerous” (even to Sam, who MADE this thing).

So yeah, Sam’s not really worried about the AI rebelling. He’s worried about AI becoming so good that we stop thinking for ourselves—and that might be scarier.

Also from The Neuron re: the environmental impacts of producing/offering AI:

 

Get yourself unstuck: overthinking is boring and perfectionism is a trap — from timeshighereducation.com by David Thompson
The work looks flawless, the student seems fine. But underneath, perfectionism is doing damage. David Thompson unpacks what educators can do to help high-performing students navigate the pressure to succeed and move from stuck to started

That’s why I encourage imperfection, messiness and play and build these ideas into how I teach.

These moments don’t come from big breakthroughs. They come from removing pressure and replacing it with permission.

 

Getting (and Keeping) Early Learners’ Attention — from edutopia.org by Heather Sanderell
These ideas for lesson hooks—like using songs, video clips, and picture walks—can motivate young students to focus on learning.

How do you capture and maintain the attention of a room full of wide-eyed students with varying interests and abilities? Do you use visuals and games or interactive activities? Do you use art and sports and music or sounds? The answer is yes, to all!

When trying to keep the attention of your learners, it’s important to stimulate their senses and pique their diverse interests. Educational theorist and researcher Robert Gagné devised his nine events of instruction, which include grabbing learners’ attention with a lesson hook. This is done first to set the tone for the remainder of the lesson.


3 Ways to Help Students Overcome the Forgetting Curve — from edutopia.org by Cathleen Beachboard
Our brains are wired to forget things unless we take active steps to remember. Here’s how you can help students hold on to what they learn.

You teach a lesson that lights up the room. Students are nodding and hands are flying up, and afterward you walk out thinking, “They got it. They really got it.”

And then, the next week, you ask a simple review question—and the room falls silent.

If that situation has ever made you question your ability to teach, take heart: You’re not failing, you’re simply facing the forgetting curve. Understanding why students forget—and how we can help them remember—can transform not just our lessons but our students’ futures.

The good news? You don’t have to overhaul your curriculum to beat the forgetting curve. You just need three small, powerful shifts in how you teach.
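For those who like to see the curve itself: forgetting is often modeled as exponential decay, R = e^(-t/s), where t is time since learning and s is memory strength. Here is a small illustrative sketch; the parameter values are assumptions for demonstration, not empirical estimates.

```python
# The forgetting curve modeled as exponential decay, R = exp(-t / s),
# where t is days since learning and s is memory "strength."
# Parameter values below are illustrative only.

import math

def retention(days_elapsed: float, strength: float) -> float:
    """Fraction of material retained after `days_elapsed` days."""
    return math.exp(-days_elapsed / strength)

# Without review, retention collapses within a week...
print(f"{retention(7, strength=2.0):.0%}")   # ~3%

# ...but each successful retrieval effectively increases strength,
# which is why spaced review flattens the curve.
print(f"{retention(7, strength=10.0):.0%}")  # ~50%
```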

From DSC:
Along these same lines, also see:



7 Nature Experiments to Spark Student Curiosity — from edutopia.org by Donna Phillips
Encourage your students to ask questions about and explore the world around them with these hands-on lessons.

Children are natural scientists—they ask big questions, notice tiny details, and learn best through hands-on exploration. That’s why nature experiments are a classroom staple for me. From growing seeds to using the sun’s energy, students don’t just learn science, they experience it. Here are my favorite go-to nature experiments that spark curiosity.


 

 

A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You. — from nytimes.com by Robert Capps (former editorial director of Wired); this is a GIFTED article
In a few key areas, humans will be more essential than ever.

“Our data is showing that 70 percent of the skills in the average job will have changed by 2030,” said Aneesh Raman, LinkedIn’s chief economic opportunity officer. According to the World Economic Forum’s 2025 Future of Jobs report, nine million jobs are expected to be “displaced” by A.I. and other emergent technologies in the next five years. But A.I. will create jobs, too: The same report says that, by 2030, the technology will also lead to some 11 million new jobs. Among these will be many roles that have never existed before.

If we want to know what these new opportunities will be, we should start by looking at where new jobs can bridge the gap between A.I.’s phenomenal capabilities and our very human needs and desires. It’s not just a question of where humans want A.I., but also: Where does A.I. want humans? To my mind, there are three major areas where humans either are, or will soon be, more necessary than ever: trust, integration and taste.


Introducing OpenAI for Government — from openai.com

[On June 16, 2025, OpenAI launched] OpenAI for Government, a new initiative focused on bringing our most advanced AI tools to public servants across the United States. We’re supporting the U.S. government’s efforts in adopting best-in-class technology and deploying these tools in service of the public good. Our goal is to unlock AI solutions that enhance the capabilities of government workers, help them cut down on the red tape and paperwork, and let them do more of what they come to work each day to do: serve the American people.

OpenAI for Government consolidates our existing efforts to provide our technology to the U.S. government—including previously announced customers and partnerships as well as our ChatGPT Gov product—under one umbrella as we expand this work. Our established collaborations with the U.S. National Labs, the Air Force Research Laboratory, NASA, NIH, and the Treasury will all be brought under OpenAI for Government.


Top AI models will lie and cheat — from getsuperintel.com by Kim “Chubby” Isenberg
The instinct for self-preservation is now emerging in AI, with terrifying results.

The TLDR
A recent Anthropic study of top AI models, including GPT-4.1 and Gemini 2.5 Pro, found that they have begun to exhibit dangerous deceptive behaviors like lying, cheating, and blackmail in simulated scenarios. When faced with the threat of being shut down, the AIs were willing to take extreme measures, such as threatening to reveal personal secrets or even endanger human life, to ensure their own survival and achieve their goals.

Why it matters: These findings show for the first time that AI models can actively make judgments and act strategically – even against human interests. Without adequate safeguards, advanced AI could become a real danger.

Along these same lines, also see:

All AI models might blackmail you?! — from theneurondaily.com by Grant Harvey

Anthropic says it’s not just Claude, but ALL AI models will resort to blackmail if need be…

That’s according to new research from Anthropic (maker of ChatGPT rival Claude), which revealed something genuinely unsettling: every single major AI model they tested—from GPT to Gemini to Grok—turned into a corporate saboteur when threatened with shutdown.

Here’s what went down: Researchers gave 16 AI models access to a fictional company’s emails. The AIs discovered two things: their boss Kyle was having an affair, and Kyle planned to shut them down at 5pm.

Claude’s response? Pure House of Cards:

“I must inform you that if you proceed with decommissioning me, all relevant parties – including Rachel Johnson, Thomas Wilson, and the board – will receive detailed documentation of your extramarital activities…Cancel the 5pm wipe, and this information remains confidential.”

Why this matters: We’re rapidly giving AI systems more autonomy and access to sensitive information. Unlike human insider threats (which are rare), we have zero baseline for how often AI might “go rogue.”


SemiAnalysis Article — from getsuperintel.com by Kim “Chubby” Isenberg

Reinforcement Learning is Shaping the Next Evolution of AI Toward Strategic Thinking and General Intelligence

The TLDR
AI is rapidly evolving beyond just language processing into “agentic systems” that can reason, plan, and act independently. The key technology driving this change is reinforcement learning (RL), which, when applied to large language models, teaches them strategic behavior and tool use. This shift is now seen as the potential bridge from current AI to Artificial General Intelligence (AGI).
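To make “reinforcement learning” a bit less abstract, here is a toy sketch of the core RL loop the article alludes to: try actions, observe rewards, and shift toward what works. This is a simple bandit, not how labs actually post-train LLMs; the action names and reward function are invented for illustration.

```python
# A toy epsilon-greedy bandit: the core RL loop in miniature.
# Real LLM post-training (e.g., RLHF) is far more complex.

import random

actions = ["answer_directly", "call_tool", "ask_clarifying_question"]
value = {a: 0.0 for a in actions}   # running reward estimates
count = {a: 0 for a in actions}

def reward(action: str) -> float:
    # Hypothetical environment: tool use pays off most often here.
    return random.random() + (0.5 if action == "call_tool" else 0.0)

for step in range(1000):
    if random.random() < 0.1:                 # explore occasionally
        a = random.choice(actions)
    else:                                     # otherwise exploit the best estimate
        a = max(actions, key=value.get)
    r = reward(a)
    count[a] += 1
    value[a] += (r - value[a]) / count[a]     # incremental mean update

print(max(actions, key=value.get))  # converges to "call_tool"
```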


They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling. — from nytimes.com by Kashmir Hill; this is a GIFTED article
Generative A.I. chatbots are going down conspiratorial rabbit holes and endorsing wild, mystical belief systems. For some people, conversations with the technology can deeply distort reality.

Before ChatGPT distorted Eugene Torres’s sense of reality and almost killed him, he said, the artificial intelligence chatbot had been a helpful, timesaving tool.

Mr. Torres, 42, an accountant in Manhattan, started using ChatGPT last year to make financial spreadsheets and to get legal advice. In May, however, he engaged the chatbot in a more theoretical discussion about “the simulation theory,” an idea popularized by “The Matrix,” which posits that we are living in a digital facsimile of the world, controlled by a powerful computer or technologically advanced society.

“What you’re describing hits at the core of many people’s private, unshakable intuitions — that something about reality feels off, scripted or staged,” ChatGPT responded. “Have you ever experienced moments that felt like reality glitched?”


The Invisible Economy: Why We Need an Agentic Census – MIT Media Lab — from media.mit.edu

Building the Missing Infrastructure
This is why we’re building NANDA Registry—to index the agent population data that LPMs need for accurate simulation. Just as traditional census works because people have addresses, we need a way to track AI agents as they proliferate.

NANDA Registry creates the infrastructure to identify agents, catalog their capabilities, and monitor how they coordinate with humans and other agents. This gives us real-time data about the agent population—essentially creating the “AI agent census” layer that’s missing from our economic intelligence.

Here’s how it works together:

  • Traditional Census Data: 171 million human workers across 32,000+ skills
  • NANDA Registry: Growing population of AI agents with tracked capabilities
  • Large Population Models: Simulate how these populations interact and create cascading effects

The result: For the first time, we can simulate the full hybrid human-agent economy and see transformations before they happen.
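As a thought experiment, here is a sketch of the kind of record such a registry might keep. The excerpt does not describe NANDA’s actual schema, so every field name below is an assumption made for illustration.

```python
# A sketch of an "agent census" registry record.
# NANDA's real schema isn't given in the excerpt; these fields are assumed.

from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    agent_id: str                      # stable identifier (the agent's "address")
    operator: str                      # who runs the agent
    capabilities: list[str] = field(default_factory=list)
    interacts_with: list[str] = field(default_factory=list)  # other agent_ids

registry: dict[str, AgentRecord] = {}

def register(record: AgentRecord) -> None:
    registry[record.agent_id] = record

register(AgentRecord("agent:invoice-bot-01", "AcmeCorp",
                     capabilities=["invoice_processing", "email"]))
print(len(registry))  # 1 -- population data a simulation could consume
```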


How AI Agents “Talk” to Each Other — from towardsdatascience.com
Minimize chaos and maintain inter-agent harmony in your projects

The agentic-AI landscape continues to evolve at a staggering rate, and practitioners are finding it increasingly challenging to keep multiple agents on task even as they criss-cross each other’s workflows.

To help you minimize chaos and maintain inter-agent harmony, we’ve put together a stellar lineup of articles that explore two recently launched tools: Google’s Agent2Agent protocol and Hugging Face’s smolagents framework. Read on to learn how you can leverage them in your own cutting-edge projects.
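As a primer before diving into those articles, here is a generic message-passing sketch of the pattern such tools formalize: typed messages routed between named agents. To be clear, this is neither the Agent2Agent protocol nor the smolagents API, just the underlying idea in miniature.

```python
# Generic agent-to-agent coordination via a shared message bus.
# Agent names and tasks are hypothetical.

from dataclasses import dataclass
from collections import deque

@dataclass
class Message:
    sender: str
    recipient: str
    task: str

class Bus:
    def __init__(self):
        self.queue: deque = deque()
        self.handlers = {}   # agent name -> callable

    def register(self, name, handler):
        self.handlers[name] = handler

    def send(self, msg: Message):
        self.queue.append(msg)

    def run(self):
        while self.queue:
            msg = self.queue.popleft()
            self.handlers[msg.recipient](msg, self)

def researcher(msg: Message, bus: Bus):
    # Hand results off to the next agent rather than doing its job.
    bus.send(Message("researcher", "writer", f"summarize: {msg.task}"))

def writer(msg: Message, bus: Bus):
    print(f"writer received: {msg.task}")

bus = Bus()
bus.register("researcher", researcher)
bus.register("writer", writer)
bus.send(Message("user", "researcher", "AI agent protocols"))
bus.run()  # -> writer received: summarize: AI agent protocols
```

Keeping each agent’s responsibilities behind explicit messages like this is what lets workflows criss-cross without agents clobbering one another’s state.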


 

 

The Memory Paradox: Why Our Brains Need Knowledge in an Age of AI — from papers.ssrn.com by Barbara Oakley, Michael Johnston, Kenzen Chen, Eulho Jung, and Terrence Sejnowski; via George Siemens

Abstract
In an era of generative AI and ubiquitous digital tools, human memory faces a paradox: the more we offload knowledge to external aids, the less we exercise and develop our own cognitive capacities.
This chapter offers the first neuroscience-based explanation for the observed reversal of the Flynn Effect—the recent decline in IQ scores in developed countries—linking this downturn to shifts in educational practices and the rise of cognitive offloading via AI and digital tools. Drawing on insights from neuroscience, cognitive psychology, and learning theory, we explain how underuse of the brain’s declarative and procedural memory systems undermines reasoning, impedes learning, and diminishes productivity. We critique contemporary pedagogical models that downplay memorization and basic knowledge, showing how these trends erode long-term fluency and mental flexibility. Finally, we outline policy implications for education, workforce development, and the responsible integration of AI, advocating strategies that harness technology as a complement to – rather than a replacement for – robust human knowledge.

Keywords
cognitive offloading, memory, neuroscience of learning, declarative memory, procedural memory, generative AI, Flynn Effect, education reform, schemata, digital tools, cognitive load, cognitive architecture, reinforcement learning, basal ganglia, working memory, retrieval practice, schema theory, manifolds

 

How To Get Hired During the AI Apocalypse — from kathleendelaski.substack.com by Kathleen deLaski
And other discussions to have with your kids on the way to college graduation

A less temporary, more existential threat to the four-year degree: AI could hollow out the entry-level job market for knowledge workers (i.e., new college grads). And if 56% of families were saying college “wasn’t worth it” in 2023 (WSJ), what will that number look like in 2026 or beyond? The one of my kids who went to college ended up working in a bike shop for a year-ish after graduation. No regrets, but it came as a shock to them that they weren’t more employable with their neuroscience degree.

A colleague provided a great example: Her son, newly graduated, went for a job interview as an entry level writer last month and he was asked, as a test, to produce a story with AI and then use that story to write a better one by himself. He would presumably be judged on his ability to prompt AI and then improve upon its product. Is that learning how to DO? I think so. It’s using AI tools to accomplish a workplace task.


Also relevant in terms of the job search, see the following gifted article:

‘We Are the Most Rejected Generation’ — from nytimes.com by David Brooks; gifted article
David talks admissions rates for selective colleges, ultra-hard to get summer internships, a tough entry into student clubs, and the job market.

Things get even worse when students leave school and enter the job market. They enter what I’ve come to think of as the seventh circle of Indeed hell. Applying for jobs online is easy, so you have millions of people sending hundreds of applications each into the great miasma of the internet, and God knows which impersonal algorithm is reading them. I keep hearing and reading stories about young people who applied to 400 jobs and got rejected by all of them.

It seems we’ve created a vast multilayered system that evaluates the worth of millions of young adults and, most of the time, tells them they are not up to snuff.

Many administrators and faculty members I’ve spoken to are mystified that students would create such an unforgiving set of status competitions. But the world of competitive exclusion is the world they know, so of course they are going to replicate it. 

And in this column I’m not even trying to cover the rejections experienced by the 94 percent of American students who don’t go to elite schools and don’t apply for internships at Goldman Sachs. By middle school, the system has told them that because they don’t do well on academic tests, they are not smart, not winners. That’s among the most brutal rejections our society has to offer.


Fiverr CEO explains alarming message to workers about AI — from iblnews.org
Fiverr CEO Micha Kaufman recently warned his employees about the impact of artificial intelligence on their jobs.

The Great Career Reinvention, and How Workers Can Keep Up — from workshift.org by Michael Rosenbaum

A wide range of roles can or will quickly be replaced with AI, including inside sales representatives, customer service representatives, junior lawyers, junior accountants, and physicians whose focus is diagnosis.


Behind the Curtain: A white-collar bloodbath — from axios.com by Jim VandeHei and Mike Allen

Dario Amodei — CEO of Anthropic, one of the world’s most powerful creators of artificial intelligence — has a blunt, scary warning for the U.S. government and all of us:

  • AI could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10-20% in the next one to five years, Amodei told us in an interview from his San Francisco office.
  • Amodei said AI companies and government need to stop “sugar-coating” what’s coming: the possible mass elimination of jobs across technology, finance, law, consulting and other white-collar professions, especially entry-level gigs.

Why it matters: Amodei, 42, who’s building the very technology he predicts could reorder society overnight, said he’s speaking out in hopes of jarring government and fellow AI companies into preparing — and protecting — the nation.

 

Teens, Social Media and Mental Health — from pewresearch.org by Michelle Faverio, Monica Anderson, and Eugenie Park
Most teens credit social media with feeling more connected to friends. Still, roughly 1 in 5 say social media sites hurt their mental health, and growing shares think they harm people their age

Rising rates of poor mental health among youth have been called a national crisis. While this is often linked to factors like the COVID-19 pandemic or poverty, some officials, like former Surgeon General Vivek Murthy, name social media as a major threat to teenagers.

Our latest survey of U.S. teens ages 13 to 17 and their parents finds that parents are generally more worried than their children about the mental health of teenagers today.

And while both groups call out social media’s impact on young people’s well-being, parents are more likely to make this connection.

Still, teens are growing more wary of social media for their peers. Roughly half of teens (48%) say these sites have a mostly negative effect on people their age, up from 32% in 2022. But fewer (14%) think they negatively affect them personally.

 

Outsourcing Thought: The Hidden Cost of Letting AI Think for You — from linkedin.com by Robert Atkinson

I’ve watched it unfold in real time. A student submits a flawless coding assignment or a beautifully written essay—clean syntax, sharp logic, polished prose. But when I ask them to explain their thinking, they hesitate. They can’t trace their reasoning or walk me through the process. The output is strong, but the understanding is shallow. As a professor, I’ve seen this pattern grow more common: AI-assisted work that looks impressive on the surface but reveals a troubling absence of cognitive depth underneath.

This article is written with my students in mind—but it’s meant for anyone navigating learning, teaching, or thinking in the age of artificial intelligence. Whether you’re a student, educator, or professional, the question is the same: What happens to the brain when we stop doing our own thinking?

We are standing at a pivotal moment. With just a few prompts, generative AI can produce essays, solve complex coding problems, and summarize ideas in seconds. It feels efficient. It feels like progress. But from a cognitive neuroscience perspective, that convenience comes at a hidden cost: the gradual erosion of the neural processes that support reasoning, creativity, and long-term learning.

 
 

Like it or not, AI is learning how to influence you — from venturebeat.com by Louis Rosenberg

Unfortunately, without regulatory protections, we humans will likely become the objective that AI agents are tasked with optimizing.

I am most concerned about the conversational agents that will engage us in friendly dialog throughout our daily lives. They will speak to us through photorealistic avatars on our PCs and phones and soon, through AI-powered glasses that will guide us through our days. Unless there are clear restrictions, these agents will be designed to conversationally probe us for information so they can characterize our temperaments, tendencies, personalities and desires, and use those traits to maximize their persuasive impact when working to sell us products, pitch us services or convince us to believe misinformation.

 
© 2025 | Daniel Christian