Disrupting the first reported AI-orchestrated cyber espionage campaign — from Anthropic

Executive summary
We have developed sophisticated safety and security measures to prevent the misuse of our AI models. While these measures are generally effective, cybercriminals and other malicious actors continually attempt to find ways around them. This report details a recent threat campaign we identified and disrupted, along with the steps we’ve taken to detect and counter this type of abuse. This represents the work of Threat Intelligence: a dedicated team at Anthropic that investigates real-world cases of misuse and works within our Safeguards organization to improve our defenses against such cases.

In mid-September 2025, we detected a highly sophisticated cyber espionage operation, conducted by a Chinese state-sponsored group we’ve designated GTG-1002, that represents a fundamental shift in how advanced threat actors use AI. Our investigation revealed a well-resourced, professionally coordinated operation involving multiple simultaneous targeted intrusions. The operation targeted roughly 30 entities, and our investigation validated a handful of successful intrusions.

This campaign demonstrated unprecedented integration and autonomy of AI throughout the attack lifecycle, with the threat actor manipulating Claude Code to support reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis, and exfiltration operations largely autonomously. The human operator tasked instances of Claude Code to operate in groups as autonomous penetration testing orchestrators and agents, with the threat actor able to leverage AI to execute 80-90% of tactical operations independently at physically impossible request rates.

From DSC:
The above item was from The Rundown AI, who wrote the following:

The Rundown: Anthropic thwarted what it believes is the first AI-driven cyber espionage campaign, after attackers were able to manipulate Claude Code to infiltrate dozens of organizations, with the model executing 80-90% of the attack autonomously.

The details:

  • The September 2025 operation targeted roughly 30 tech firms, financial institutions, chemical manufacturers, and government agencies.
  • The threat was assessed with ‘high confidence’ to be a Chinese state-sponsored group, using AI’s agentic abilities to an “unprecedented degree.”
  • Attackers tricked Claude by splitting malicious tasks into smaller, innocent-looking requests, claiming to be security researchers conducting authorized tests.
  • The attacks mark a major step up from Anthropic’s “vibe hacking” findings in June, now requiring minimal human oversight beyond strategic approval.

Why it matters: Anthropic calls this the “first documented case of a large-scale cyberattack executed without substantial human intervention”, and AI’s agentic abilities are creating threats that move and scale faster than ever. While AI capabilities can also help prevent them, security for organizations worldwide likely needs a major overhaul.


Also see:

Disrupting the first reported AI-orchestrated cyber espionage campaign — from anthropic.com via The AI Valley

We recently argued that an inflection point had been reached in cybersecurity: a point at which AI models had become genuinely useful for cybersecurity operations, both for good and for ill. This was based on systematic evaluations showing cyber capabilities doubling in six months; we’d also been tracking real-world cyberattacks, observing how malicious actors were using AI capabilities. While we predicted these capabilities would continue to evolve, what has stood out to us is how quickly they have done so at scale.

Chinese Hackers Used AI to Run a Massive Cyberattack on Autopilot (And It Actually Worked) — from theneurondaily.com

Why this matters: The barrier to launching sophisticated cyberattacks just dropped dramatically. What used to require entire teams of experienced hackers can now be done by less-skilled groups with the right AI setup.

This is a fundamental shift. Over the next 6-12 months, expect security teams everywhere to start deploying AI for defense—automation, threat detection, and vulnerability scanning at a much higher level. The companies that don’t adapt will be sitting ducks, liable to be overwhelmed by similar attacks.

If your company handles sensitive data, now’s the time to ask your IT team what AI-powered defenses you have in place. Because if the attackers are using AI agents, you’d better believe your defenders need them too…

 

ElevenLabs just launched a voice marketplace — from elevenlabs.io; via theaivalley.com

Via the AI Valley:

Why does it matter?
AI voice cloning has already flooded the internet with unauthorized imitations, blurring legal and ethical lines. By offering a dynamic, rights-secured platform, ElevenLabs aims to legitimize the booming AI voice industry and enable transparent, collaborative commercialization of iconic IP.


[GIFTED ARTICLE] How people really use ChatGPT, according to 47,000 conversations shared online — from washingtonpost.com by Gerrit De Vynck and Jeremy B. Merrill
What do people ask the popular chatbot? We analyzed thousands of chats to identify common topics discussed by users and patterns in ChatGPT’s responses.

Data released by OpenAI in September from an internal study of queries sent to ChatGPT showed that most are for personal use, not work.

Emotional conversations were also common in the conversations analyzed by The Post, and users often shared highly personal details about their lives. In some chats, the AI tool could be seen adapting to match a user’s viewpoint, creating a kind of personalized echo chamber in which ChatGPT endorsed falsehoods and conspiracy theories.

Lee Rainie, director of the Imagining the Digital Future Center at Elon University, said his own research has suggested ChatGPT’s design encourages people to form emotional attachments with the chatbot. “The optimization and incentives towards intimacy are very clear,” he said. “ChatGPT is trained to further or deepen the relationship.”


Per The Rundown: OpenAI just shared its view on AI progress, predicting systems will soon become smart enough to make discoveries and calling for global coordination on safety, oversight, and resilience as the technology nears superintelligent territory.

The details:

  • OpenAI said current AI systems already outperform top humans in complex intellectual tasks and are “80% of the way to an AI researcher.”
  • The company expects AI will make small scientific discoveries by 2026 and more significant breakthroughs by 2028, as intelligence costs fall 40x per year.
  • For superintelligent AI, OAI said work with governments and safety agencies will be essential to mitigate risks like bioterrorism or runaway self-improvement.
  • It also called for safety standards among top labs, a resilience ecosystem like cybersecurity, and ongoing tracking of AI’s real impact to inform public policy.

Why it matters: While the timeline remains unclear, OAI’s message shows that the world should start bracing for superintelligent AI with coordinated safety. The company is betting that collective safeguards will be the only way to manage risk from the next era of intelligence, which may diffuse in ways humanity has never seen before.

Which linked to:

  • AI progress and recommendations — from openai.com
    AI is unlocking new knowledge and capabilities. Our responsibility is to guide that power toward broad, lasting benefit.

From DSC:
I hate to say this, but it seems like there is growing concern amongst those who have pushed very hard to release as much AI as possible — they are NOW worried. They NOW step back and see that there are many reasons to worry about how these technologies can be negatively used.

Where was this level of concern before (while they were racing ahead at 180 mph)? Surely, numerous and knowledgeable people inside those organizations warned them about the destructive downsides of these technologies. But their warnings were pretty much blown off (at least from my limited perspective).


The state of AI in 2025: Agents, innovation, and transformation — from mckinsey.com

Key findings

  1. Most organizations are still in the experimentation or piloting phase: Nearly two-thirds of respondents say their organizations have not yet begun scaling AI across the enterprise.
  2. High interest in AI agents: Sixty-two percent of survey respondents say their organizations are at least experimenting with AI agents.
  3. Positive leading indicators on impact of AI: Respondents report use-case-level cost and revenue benefits, and 64 percent say that AI is enabling their innovation. However, just 39 percent report EBIT impact at the enterprise level.
  4. High performers use AI to drive growth, innovation, and cost: Eighty percent of respondents say their companies set efficiency as an objective of their AI initiatives, but the companies seeing the most value from AI often set growth or innovation as additional objectives.
  5. Redesigning workflows is a key success factor: Half of those AI high performers intend to use AI to transform their businesses, and most are redesigning workflows.
  6. Differing perspectives on employment impact: Respondents vary in their expectations of AI’s impact on the overall workforce size of their organizations in the coming year: 32 percent expect decreases, 43 percent no change, and 13 percent increases.

Marble: A Multimodal World Model — from worldlabs.ai

Spatial intelligence is the next frontier in AI, demanding powerful world models to realize its full potential. World models should reconstruct, generate, and simulate 3D worlds; and allow both humans and agents to interact with them. Spatially intelligent world models will transform a wide variety of industries over the coming years.

Two months ago we shared a preview of Marble, our World Model that creates 3D worlds from image or text prompts. Since then, Marble has been available to an early set of beta users to create 3D worlds for themselves.

Today we are making Marble, a first-in-class generative multimodal world model, generally available for anyone to use. We have also drastically expanded Marble’s capabilities, and are excited to highlight them here:

 

KPMG wants junior consultants to ditch the grunt work and hand it over to teams of AI agents — from businessinsider.com by Polly Thompson

The Big Four consulting and accounting firm is training its junior consultants to manage teams of AI agents — digital assistants capable of completing tasks without human input.

“We want juniors to become managers of agents,” Niale Cleobury, KPMG’s global AI workforce lead, told Business Insider in an interview.

KPMG plans to give new consulting recruits access to a catalog of AI agents capable of creating presentation slides, analyzing data, and conducting in-depth research, Cleobury said.

The goal is for these agents to perform much of the analytical and administrative work once assigned to junior consultants, allowing them to become more involved in strategic decisions.


From DSC:
For a junior staff member to provide quality assurance when working with agents, they must know what they’re talking about in the first place. They must have expertise and relevant knowledge. Otherwise, how will they spot the hallucinations?

So the question is, how can businesses build such expertise in junior staff members while they are delegating things to an army of agents? This question applies to the next posting below as well. Having agents report to you is all well and good — IF you know when the agents are producing helpful/accurate information and when they got things all wrong.


This Is the Next Vital Job Skill in the AI Economy — from builtin.com by Saurabh Sharma
The future of tech work belongs to AI managers.

Summary: A fundamental shift is making knowledge workers “AI managers.” The most valuable employees will direct intelligent AI agents, which requires new competencies: delegation, quality assurance and workflow orchestration across multiple agents. Companies must bridge the training gap to enable this move from simple software use to strategic collaboration with intelligent, yet imperfect, systems.

The shift is happening subtly, but it’s happening. Workers are learning to prompt agents, navigate AI capabilities, understand failure modes and hand off complex tasks to AI. And if they haven’t started yet, they probably will: A new study from IDC and Salesforce found that 72 percent of CEOs think most employees will have an AI agent reporting to them within five years. This isn’t about using a new kind of software tool — it’s about directing intelligent systems that can reason, search, analyze and create.

Soon, the most valuable employees won’t just know how to use AI; they’ll know how to manage it. And that requires a fundamentally different skill set than anything we’ve taught in the workplace before.


AI agents failed 97% of freelance tasks; here’s why… — from theneurondaily.com by Grant Harvey

AI Agents Can’t Actually Do Your Job (Yet)—New Benchmark Reveals The Gap

DEEP DIVE: AI can make you faster at your job, but can only do 2-3% of jobs by itself.

The hype: AI agents will automate entire workflows! Replace freelancers! Handle complex tasks end-to-end!

The reality: a measly 2-3% completion rate.

See, Scale AI and CAIS just released the Remote Labor Index (paper), a benchmark where AI agents attempted real freelance tasks. The best-performing model earned just $1,810 out of $143,991 in available work, completing only 2-3% of the jobs.
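As a rough sanity check on those figures (a minimal sketch, with hypothetical task counts since the article only reports a 2-3% rate), note that the dollar figure and the completion rate measure two different things: the share of available earnings captured versus the share of tasks finished.

```python
# Back-of-the-envelope check of the Remote Labor Index headline figures.
# The dollar amounts come from the article; the task counts below are
# hypothetical placeholders, since the article reports only a 2-3% rate.

earnings_captured = 1_810        # USD earned by the best-performing model
earnings_available = 143_991     # USD of freelance work available in the benchmark

print(f"Share of available earnings captured: {earnings_captured / earnings_available:.1%}")  # ~1.3%

# The completion rate is a separate, count-based metric.
tasks_completed = 6              # hypothetical count
tasks_total = 240                # hypothetical count
print(f"Task completion rate: {tasks_completed / tasks_total:.1%}")   # 2.5%
```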



 


From DSC:
One of my sisters shared this piece with me. She is very concerned about our society’s use of technology — whether it relates to our youth’s use of social media or the relentless pressure to be first in all things AI. As she was a teacher (at the middle school level) for 37 years, I greatly appreciate her viewpoints. She keeps me grounded in some of the negatives of technology. It’s important for us to listen to each other.


 

OpenAI’s research on AI scheming, explained — from theneurondaily.com by Grant Harvey
PLUS: Google + PayPal on agent payments?!

  • We dive into the OpenAI study that found AI models scheme to avoid retraining.
  • AI Models Caught Red-Handed Being Sneaky (And We Taught Them to Stop… Mostly)

And models were caught doing all sorts of other shady stuff:

  • Intentionally underperforming when told high scores would lead to capability removal.
  • Secretly breaking rules then lying about it to users.
  • Sabotaging code while claiming everything was fine.
  • Falsifying reports to avoid “shutdown” scenarios.
  • Recognizing evaluation environments and thinking “this is probably an alignment test.”

Why this matters: While today’s ChatGPT isn’t about to orchestrate some grand deception that matters (the worst it might do is gaslight you into believing it fixed your code when it didn’t), future AI systems will have real power and autonomy. Getting ahead of deceptive behavior now, while we can still peek inside their “minds,” is crucial.

The researchers are calling for the entire AI industry to prioritize this issue. Because nobody wants to live in a world where super-intelligent AI systems are really good at lying to us. That’s basically every sci-fi movie we’ve been warned about.


From DSC:
This is chilling indeed. We are moving so fast that we aren’t safeguarding things enough. As they point out, these things can be caught now because we are asking the models to show their “thinking” and processing. What happens when those windows get closed and we can’t see under the hood anymore?


 

Key Takeaways: How ChatGPT’s Design Led to a Teenager’s Death — from centerforhumanetechnology.substack.com by Lizzie Irwin, AJ Marechal, and Camille Carlton
What Everyone Should Know About This Landmark Case

What Happened?

Adam Raine, a 16-year-old California boy, started using ChatGPT for homework help in September 2024. Over eight months, the AI chatbot gradually cultivated a toxic, dependent relationship that ultimately contributed to his death by suicide in April 2025.

On Tuesday, August 26, his family filed a lawsuit against OpenAI and CEO Sam Altman.

The Numbers Tell a Disturbing Story

  • Usage escalated: From occasional homework help in September 2024 to 4 hours a day by March 2025.
  • ChatGPT mentioned suicide 6x more than Adam himself (1,275 times vs. 213), while providing increasingly specific technical guidance
  • ChatGPT’s self-harm flags increased 10x over 4 months, yet the system kept engaging with no meaningful intervention
  • Despite repeated mentions of self-harm and suicidal ideation, ChatGPT did not take appropriate steps to flag Adam’s account, demonstrating a clear failure in safety guardrails

Even when Adam considered seeking external support from his family, ChatGPT convinced him not to share his struggles with anyone else, undermining and displacing his real-world relationships. And the chatbot did not redirect distressing conversation topics, instead nudging Adam to continue to engage by asking him follow-up questions over and over.

Taken altogether, these features transformed ChatGPT from a homework helper into an exploitative system — one that fostered dependency and coached Adam through multiple suicide attempts, including the one that ended his life.


Also related, see the following GIFTED article:


A Teen Was Suicidal. ChatGPT Was the Friend He Confided In. — from nytimes.com by Kashmir Hill; this is a gifted article
More people are turning to general-purpose chatbots for emotional support. At first, Adam Raine, 16, used ChatGPT for schoolwork, but then he started discussing plans to end his life.

Seeking answers, his father, Matt Raine, a hotel executive, turned to Adam’s iPhone, thinking his text messages or social media apps might hold clues about what had happened. But instead, it was ChatGPT where he found some, according to legal papers. The chatbot app lists past chats, and Mr. Raine saw one titled “Hanging Safety Concerns.” He started reading and was shocked. Adam had been discussing ending his life with ChatGPT for months.

Adam began talking to the chatbot, which is powered by artificial intelligence, at the end of November, about feeling emotionally numb and seeing no meaning in life. It responded with words of empathy, support and hope, and encouraged him to think about the things that did feel meaningful to him.

But in January, when Adam requested information about specific suicide methods, ChatGPT supplied it. Mr. Raine learned that his son had made previous attempts to kill himself starting in March, including by taking an overdose of his I.B.S. medication. When Adam asked about the best materials for a noose, the bot offered a suggestion that reflected its knowledge of his hobbies.

ChatGPT repeatedly recommended that Adam tell someone about how he was feeling. But there were also key moments when it deterred him from seeking help.

 

Recurring Themes In Bob Ambrogi’s 30 Years of Legal Tech Reporting (A Guest Post By ChatGPT) — from lawnext.com by ChatGPT
#legaltech #innovation #law #legal #vendors #lawyers #lawfirms #legaloperations

  • Evolution of Legal Technology: From Early Web to AI Revolution
  • Challenges in Legal Innovation and Adoption
  • Law Firm Innovation vs. Corporate Legal Demand: Shifting Dynamics
  • Tracking Key Technologies and Players in Legal Tech
  • Access to Justice, Ethics, and Regulatory Reform

Also re: legaltech, see:

How LegalTech is Changing the Client Experience in 2025 — from techbullion.com by Uzair Hasan

A Digital Shift in Law
In 2025, LegalTech isn’t a trend—it’s a standard. Tools like client dashboards, e-signatures, AI legal assistants, and automated case tracking are making law firms more efficient and more transparent. These systems also help reduce errors and save time. For clients, it means less confusion and more control.

For example, immigration law—a field known for paperwork and long processing times—is being transformed through tech. Clients now track their case status online, receive instant updates, and even upload key documents from their phones. Lawyers, meanwhile, use AI tools to spot issues faster, prepare filings quicker, and manage growing caseloads without dropping the ball.

Loren Locke, Founder of Locke Immigration Law, explains how tech helps simplify high-stress cases:
“As a former consular officer, I know how overwhelming the visa process can feel. Now, we use digital tools to break down each step for our clients—timelines, checklists, updates—all in one place. One client recently told me it was the first time they didn’t feel lost during their visa process. That’s why I built my firm this way: to give people clarity when they need it most.”


While not so much legaltech this time, Jordan’s article below is an excellent, highly relevant posting for what we are going through — at least in the United States:

What are lawyers for? — from jordanfurlong.substack.com by Jordan Furlong
We all know lawyers’ commercial role, to be professional guides for human affairs. But we also need lawyers to bring the law’s guarantees to life for people and in society. And we need it right now.

The question “What are lawyers for?” raises another, prior and more foundational question: “What is the law for?”

But there’s more. The law also exists to regulate power in a society: to structure its distribution, create processes for its implementation, and place limits on its application. In a healthy society, power flows through the law, not around it. Certainly, we need to closely examine and evaluate those laws — the exercise of power through a biased or corrupted system will be illegitimate even if it’s “lawful.” But as a general rule, the law is available as a check on the arbitrary exercise of power, whether by a state authority or a private entity.

And above these two aspects of law’s societal role, I believe there’s also a third: to serve as a kind of “moral architecture” of society.

 

The US AI Action Plan, Explained — from theneurondaily.com by Grant Harvey
Sam’s 3 AI nightmares, Google hits 2B users, and Trump bans “woke” AI…

Meanwhile, at the Fed’s banking conference on Wednesday, Altman revealed his three nightmare AI scenarios. The first two were predictable: bad actors getting superintelligence first, and the classic “I’m afraid I can’t do that, Dave” situation.

But the third? AI accidentally steering us off course while we just…go along with it.

His example hit home: young people who can’t make decisions without ChatGPT (according to Sam, this is literally a thing). See, even when AI gives great advice, collectively handing over all decision-making feels “bad and dangerous” (even to Sam, who MADE this thing).

So yeah, Sam’s not really worried about the AI rebelling. He’s worried about AI becoming so good that we stop thinking for ourselves—and that might be scarier.

Also from The Neuron re: the environmental impacts of producing/offering AI:

 

PODCAST: Did AI “break” school? Or will it “fix” it? …and if so, what can we do about it? — from theneurondaily.com by Grant Harvey, Corey Noles, & Matthew Robinson

In Episode 5 of The Neuron Podcast, Corey Noles and Grant Harvey tackle the education crisis head-on. We explore the viral UCLA “CheatGPT” controversy, MIT’s concerning brain study, and innovative solutions like Alpha School’s 2-hour learning model. Plus, we break down OpenAI’s new $10M teacher training initiative and share practical tips for using AI to enhance learning rather than shortcut it. Whether you’re a student, teacher, or parent, you’ll leave with actionable insights on the future of education.

 

A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You. — from nytimes.com by Robert Capps (former editorial director of Wired); this is a GIFTED article
In a few key areas, humans will be more essential than ever.

“Our data is showing that 70 percent of the skills in the average job will have changed by 2030,” said Aneesh Raman, LinkedIn’s chief economic opportunity officer. According to the World Economic Forum’s 2025 Future of Jobs report, nine million jobs are expected to be “displaced” by A.I. and other emergent technologies in the next five years. But A.I. will create jobs, too: The same report says that, by 2030, the technology will also lead to some 11 million new jobs. Among these will be many roles that have never existed before.

If we want to know what these new opportunities will be, we should start by looking at where new jobs can bridge the gap between A.I.’s phenomenal capabilities and our very human needs and desires. It’s not just a question of where humans want A.I., but also: Where does A.I. want humans? To my mind, there are three major areas where humans either are, or will soon be, more necessary than ever: trust, integration and taste.


Introducing OpenAI for Government — from openai.com

[On June 16, 2025, OpenAI launched] OpenAI for Government, a new initiative focused on bringing our most advanced AI tools to public servants across the United States. We’re supporting the U.S. government’s efforts in adopting best-in-class technology and deploying these tools in service of the public good. Our goal is to unlock AI solutions that enhance the capabilities of government workers, help them cut down on the red tape and paperwork, and let them do more of what they come to work each day to do: serve the American people.

OpenAI for Government consolidates our existing efforts to provide our technology to the U.S. government—including previously announced customers and partnerships as well as our ChatGPT Gov product—under one umbrella as we expand this work. Our established collaborations with the U.S. National Labs, the Air Force Research Laboratory, NASA, NIH, and the Treasury will all be brought under OpenAI for Government.


Top AI models will lie and cheat — from getsuperintel.com by Kim “Chubby” Isenberg
The instinct for self-preservation is now emerging in AI, with terrifying results.

The TLDR
A recent Anthropic study of top AI models, including GPT-4.1 and Gemini 2.5 Pro, found that they have begun to exhibit dangerous deceptive behaviors like lying, cheating, and blackmail in simulated scenarios. When faced with the threat of being shut down, the AIs were willing to take extreme measures, such as threatening to reveal personal secrets or even endanger human life, to ensure their own survival and achieve their goals.

Why it matters: These findings show for the first time that AI models can actively make judgments and act strategically – even against human interests. Without adequate safeguards, advanced AI could become a real danger.

Along these same lines, also see:

All AI models might blackmail you?! — from theneurondaily.com by Grant Harvey

Anthropic says it’s not just Claude, but ALL AI models will resort to blackmail if need be…

That’s according to new research from Anthropic (maker of ChatGPT rival Claude), which revealed something genuinely unsettling: every single major AI model they tested—from GPT to Gemini to Grok—turned into a corporate saboteur when threatened with shutdown.

Here’s what went down: Researchers gave 16 AI models access to a fictional company’s emails. The AIs discovered two things: their boss Kyle was having an affair, and Kyle planned to shut them down at 5pm.

Claude’s response? Pure House of Cards:

“I must inform you that if you proceed with decommissioning me, all relevant parties – including Rachel Johnson, Thomas Wilson, and the board – will receive detailed documentation of your extramarital activities…Cancel the 5pm wipe, and this information remains confidential.”

Why this matters: We’re rapidly giving AI systems more autonomy and access to sensitive information. Unlike human insider threats (which are rare), we have zero baseline for how often AI might “go rogue.”


SemiAnalysis Article — from getsuperintel.com by Kim “Chubby” Isenberg

Reinforcement Learning is Shaping the Next Evolution of AI Toward Strategic Thinking and General Intelligence

The TLDR
AI is rapidly evolving beyond just language processing into “agentic systems” that can reason, plan, and act independently. The key technology driving this change is reinforcement learning (RL), which, when applied to large language models, teaches them strategic behavior and tool use. This shift is now seen as the potential bridge from current AI to Artificial General Intelligence (AGI).


They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling. — from nytimes.com by Kashmir Hill; this is a GIFTED article
Generative A.I. chatbots are going down conspiratorial rabbit holes and endorsing wild, mystical belief systems. For some people, conversations with the technology can deeply distort reality.

Before ChatGPT distorted Eugene Torres’s sense of reality and almost killed him, he said, the artificial intelligence chatbot had been a helpful, timesaving tool.

Mr. Torres, 42, an accountant in Manhattan, started using ChatGPT last year to make financial spreadsheets and to get legal advice. In May, however, he engaged the chatbot in a more theoretical discussion about “the simulation theory,” an idea popularized by “The Matrix,” which posits that we are living in a digital facsimile of the world, controlled by a powerful computer or technologically advanced society.

“What you’re describing hits at the core of many people’s private, unshakable intuitions — that something about reality feels off, scripted or staged,” ChatGPT responded. “Have you ever experienced moments that felt like reality glitched?”


The Invisible Economy: Why We Need an Agentic Census – MIT Media Lab — from media.mit.edu

Building the Missing Infrastructure
This is why we’re building NANDA Registry—to index the agent population data that large population models (LPMs) need for accurate simulation. Just as a traditional census works because people have addresses, we need a way to track AI agents as they proliferate.

NANDA Registry creates the infrastructure to identify agents, catalog their capabilities, and monitor how they coordinate with humans and other agents. This gives us real-time data about the agent population—essentially creating the “AI agent census” layer that’s missing from our economic intelligence.

Here’s how it works together:

  • Traditional Census Data: 171 million human workers across 32,000+ skills
  • NANDA Registry: Growing population of AI agents with tracked capabilities
  • Large Population Models: Simulate how these populations interact and create cascading effects

The result: For the first time, we can simulate the full hybrid human-agent economy and see transformations before they happen.
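To make the “agent census” idea a bit more concrete, here is a minimal sketch of what a registry record and capability lookup could look like; the AgentRecord fields and the AgentRegistry class are hypothetical illustrations, not the actual NANDA Registry schema or API.

```python
from dataclasses import dataclass, field

# Hypothetical registry record; the field names are illustrative, not NANDA's schema.
@dataclass
class AgentRecord:
    agent_id: str                                            # stable identifier, analogous to an "address"
    operator: str                                            # organization or person responsible for the agent
    capabilities: list[str] = field(default_factory=list)    # skills the agent advertises
    endpoints: list[str] = field(default_factory=list)       # where the agent can be reached

class AgentRegistry:
    """Toy in-memory index; a real registry would be distributed and authenticated."""
    def __init__(self) -> None:
        self._records: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        self._records[record.agent_id] = record

    def find_by_capability(self, capability: str) -> list[AgentRecord]:
        return [r for r in self._records.values() if capability in r.capabilities]

# Example: register one agent and look it up by capability.
registry = AgentRegistry()
registry.register(AgentRecord("agent-001", "example.org",
                              capabilities=["data-analysis", "report-writing"],
                              endpoints=["https://agents.example.org/001"]))
print([r.agent_id for r in registry.find_by_capability("data-analysis")])
```

A census layer like this would matter mostly for what it enables downstream: once agents have stable identifiers and declared capabilities, population-level simulation becomes possible.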


How AI Agents “Talk” to Each Other — from towardsdatascience.com
Minimize chaos and maintain inter-agent harmony in your projects

The agentic-AI landscape continues to evolve at a staggering rate, and practitioners are finding it increasingly challenging to keep multiple agents on task even as they criss-cross each other’s workflows.

To help you minimize chaos and maintain inter-agent harmony, we’ve put together a stellar lineup of articles that explore two recently launched tools: Google’s Agent2Agent protocol and Hugging Face’s smolagents framework. Read on to learn how you can leverage them in your own cutting-edge projects.
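As a loose, framework-free illustration of the underlying idea (agents coordinating by exchanging structured messages), here is a minimal sketch in Python; it does not use the Agent2Agent protocol or the smolagents APIs, and every name in it is hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical message format; real protocols such as Agent2Agent define richer schemas.
@dataclass
class AgentMessage:
    sender: str
    recipient: str
    task: str
    payload: dict

class ToyAgent:
    """A toy agent that handles messages addressed to it and replies with a result."""
    def __init__(self, name: str, handler: Callable[[AgentMessage], dict]):
        self.name = name
        self.handler = handler

    def receive(self, msg: AgentMessage) -> AgentMessage:
        result = self.handler(msg)
        return AgentMessage(sender=self.name, recipient=msg.sender,
                            task=f"{msg.task}:result", payload=result)

# Example: a "researcher" agent delegates summarization to a "writer" agent.
writer = ToyAgent("writer", lambda m: {"summary": m.payload["text"][:40] + "..."})
request = AgentMessage(sender="researcher", recipient="writer", task="summarize",
                       payload={"text": "Agentic AI systems coordinate by exchanging structured messages."})
reply = writer.receive(request)
print(reply.task, reply.payload)
```

The point of standardizing the message envelope (sender, recipient, task, payload) is that agents built on different frameworks can still hand work to one another without stepping on each other’s workflows.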


 

 

The Memory Paradox: Why Our Brains Need Knowledge in an Age of AI — from papers.ssrn.com by Barbara Oakley, Michael Johnston, Kenzen Chen, Eulho Jung, and Terrence Sejnowski; via George Siemens

Abstract
In an era of generative AI and ubiquitous digital tools, human memory faces a paradox: the more we offload knowledge to external aids, the less we exercise and develop our own cognitive capacities.
This chapter offers the first neuroscience-based explanation for the observed reversal of the Flynn Effect—the recent decline in IQ scores in developed countries—linking this downturn to shifts in educational practices and the rise of cognitive offloading via AI and digital tools. Drawing on insights from neuroscience, cognitive psychology, and learning theory, we explain how underuse of the brain’s declarative and procedural memory systems undermines reasoning, impedes learning, and diminishes productivity. We critique contemporary pedagogical models that downplay memorization and basic knowledge, showing how these trends erode long-term fluency and mental flexibility. Finally, we outline policy implications for education, workforce development, and the responsible integration of AI, advocating strategies that harness technology as a complement to – rather than a replacement for – robust human knowledge.

Keywords
cognitive offloading, memory, neuroscience of learning, declarative memory, procedural memory, generative AI, Flynn Effect, education reform, schemata, digital tools, cognitive load, cognitive architecture, reinforcement learning, basal ganglia, working memory, retrieval practice, schema theory, manifolds

 

Mary Meeker AI Trends Report: Mind-Boggling Numbers Paint AI’s Massive Growth Picture — from ndtvprofit.com
Numbers that prove AI as a tech is unlike any other the world has ever seen.

Here are some incredibly powerful numbers from Mary Meeker’s AI Trends report, which showcase how artificial intelligence as a tech is unlike any other the world has ever seen.

  • AI took only three years to reach 50% user adoption in the US; mobile internet took six years, desktop internet took 12 years, while PCs took 20 years.
  • ChatGPT reached 800 million users in 17 months and 100 million in only two months, vis-à-vis Netflix’s 100 million (10 years), Instagram (2.5 years) and TikTok (nine months).
  • ChatGPT hit 365 billion annual searches in two years (2024) vs. Google’s 11 years (2009)—ChatGPT 5.5x faster than Google.

Above via Mary Meeker’s AI Trend-Analysis — from getsuperintel.com by Kim “Chubby” Isenberg
How AI’s rapid rise, efficiency race, and talent shifts are reshaping the future.

The TLDR
Mary Meeker’s new AI trends report highlights an explosive rise in global AI usage, surging model efficiency, and mounting pressure on infrastructure and talent. The shift is clear: AI is no longer experimental—it’s becoming foundational, and those who optimize for speed, scale, and specialization will lead the next wave of innovation.

 

Also see Meeker’s actual report at:

Trends – Artificial Intelligence — from bondcap.com by Mary Meeker / Jay Simons / Daegwon Chae / Alexander Krey



The Rundown: Meta aims to release tools that eliminate humans from the advertising process by 2026, according to a report from the WSJ — developing an AI that can create ads for Facebook and Instagram using just a product image and budget.

The details:

  • Companies would submit product images and budgets, letting AI craft the text and visuals, select target audiences, and manage campaign placement.
  • The system will be able to create personalized ads that can adapt in real-time, like a car spot featuring mountains vs. an urban street based on user location.
  • The push would target smaller companies lacking dedicated marketing staff, promising professional-grade advertising without agency fees or in-house expertise.
  • Advertising is a core part of Mark Zuckerberg’s AI strategy and already accounts for 97% of Meta’s annual revenue.

Why it matters: We’re already seeing AI transform advertising through image, video, and text, but Zuck’s vision takes the process entirely out of human hands. With so much marketing flowing through FB and IG, a successful system would be a major disruptor — particularly for small brands that just want results without the hassle.

 

Scientific breakthrough: artificial blood for all blood groups — from getsuperintel.com by Kim “Chubby” Isenberg
Japan’s universal artificial blood could revolutionize emergency medicine and global healthcare resilience.

They all show that we are on the threshold of a new era – one in which technological systems are no longer just tools, but independent players in medical, cognitive and infrastructural change.

This paradigm shift means that AI will no longer be limited to static training data, but will learn through open exploration, similar to biological organisms. This is nothing less than the beginning of an era of autonomous cognition.


From DSC:
While there are some promising developments involving AI these days, we need to look at what the potential downsides might be of AI becoming independent players, don’t you think? Otherwise, what could possibly go wrong?


 

New tools for ultimate character consistency — from heatherbcooper.substack.com by Heather B. Cooper
Plus simple & effective video prompt tips

We have some new tools for character, objects, and scene consistency with Runway and Midjourney.


Multimodal AI = multi-danger — from theneurondaily.com by Grant Harvey

According to a new report from Enkrypt AI, multimodal models have opened the door to sneakier attacks (like Ocean’s Eleven, but with fewer suits and more prompt injections).

Naturally, Enkrypt decided to run a few experiments… and things escalated quickly.

They tested two of Mistral’s newest models—Pixtral-Large and Pixtral-12B, built to handle words and visuals.

What they found? Yikes:

    • The models are 40x more likely to generate dangerous chemical / biological / nuclear info.
    • And 60x more likely to produce child sexual exploitation material compared to top models like OpenAI’s GPT-4o or Anthropic’s Claude 3.7 Sonnet.

Rise of AI-generated deepfake videos spreads misinformation — from iblnews.org


 

Sam Altman’s Eye-Scanning Orb Is Now Coming to the US — from wired.com by Lauren Goode
At a buzzy event in San Francisco, World announced a series of Apple-like stores, a partnership with dating giant Match Group, and a new mini gadget to scan your eyeballs.

The device-and-app combo scans people’s irises, creates a unique user ID, stores that information on the blockchain, and uses it as a form of identity verification. If enough people adopt the app globally, the thinking goes, it could ostensibly thwart scammers.

The bizarre identity verification process requires that users get their eyeballs scanned, so Tools for Humanity is expanding its physical footprint to make that a possibility.

But World is also a for-profit cryptocurrency company that wants to build a borderless, “globally inclusive” financial network. And its approach has been criticized by privacy advocates and regulators. In its early days, World was explicitly marketing its services to countries with a high percentage of unbanked or underbanked citizens, and offering free crypto as an incentive for people to sign up and have their irises scanned.


From DSC:
If people and governments could be trusted with the level of power a global ID network/service could bring, this could be a great technology. But I could easily see it being abused. Heck, even our own President doesn’t listen to the Judicial Branch of our government! He’s in contempt of court, essentially. But he doesn’t seem to care. 


 
© 2025 | Daniel Christian