Disrupting the first reported AI-orchestrated cyber espionage campaign — from Anthropic

Executive summary
We have developed sophisticated safety and security measures to prevent the misuse of our AI models. While these measures are generally effective, cybercriminals and other malicious actors continually attempt to find ways around them. This report details a recent threat campaign we identified and disrupted, along with the steps we’ve taken to detect and counter this type of abuse. This represents the work of Threat Intelligence: a dedicated team at Anthropic that investigates real world cases of misuse and works within our Safeguards organization to improve our defenses against such cases.

In mid-September 2025, we detected a highly sophisticated cyber espionage operation conducted by a Chinese state-sponsored group we’ve designated GTG-1002 that represents a fundamental shift in how advanced threat actors use AI. Our investigation revealed a well-resourced, professionally coordinated operation involving multiple simultaneous targeted intrusions. The operation targeted roughly 30 entities and our investigation validated a handful of successful intrusions.

This campaign demonstrated unprecedented integration and autonomy of AI throughout the attack lifecycle, with the threat actor manipulating Claude Code to support reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis, and exfiltration operations largely autonomously. The human operator tasked instances of Claude Code to operate in groups as autonomous penetration testing orchestrators and agents, with the threat actor able to leverage AI to execute 80-90% of tactical operations independently at physically impossible request rates.

From DSC:
The above item was from The Rundown AI, who wrote the following:

The Rundown: Anthropic thwarted what it believes is the first AI-driven cyber espionage campaign, after attackers were able to manipulate Claude Code to infiltrate dozens of organizations, with the model executing 80-90% of the attack autonomously.

The details:

  • The September 2025 operation targeted roughly 30 tech firms, financial institutions, chemical manufacturers, and government agencies.
  • The threat was assessed with ‘high confidence’ to be a Chinese state-sponsored group, using AI’s agentic abilities to an “unprecedented degree.”
  • Attackers tricked Claude by splitting malicious tasks into smaller, innocent-looking requests, claiming to be security researchers pushing authorized tests.
  • The attacks mark a major step up from Anthropic’s “vibe hacking” findings in June, now requiring minimal human oversight beyond strategic approval.

Why it matters: Anthropic calls this the “first documented case of a large-scale cyberattack executed without substantial human intervention”, and AI’s agentic abilities are creating threats that move and scale faster than ever. While AI capabilities can also help prevent them, security for organizations worldwide likely needs a major overhaul.


Also see:

Disrupting the first reported AI-orchestrated cyber espionage campaign — from anthropic.com via The AI Valley

We recently argued that an inflection point had been reached in cybersecurity: a point at which AI models had become genuinely useful for cybersecurity operations, both for good and for ill. This was based on systematic evaluations showing cyber capabilities doubling in six months; we’d also been tracking real-world cyberattacks, observing how malicious actors were using AI capabilities. While we predicted these capabilities would continue to evolve, what has stood out to us is how quickly they have done so at scale.

Chinese Hackers Used AI to Run a Massive Cyberattack on Autopilot (And It Actually Worked) — from theneurondaily.com

Why this matters: The barrier to launching sophisticated cyberattacks just dropped dramatically. What used to require entire teams of experienced hackers can now be done by less-skilled groups with the right AI setup.

This is a fundamental shift. Over the next 6-12 months, expect security teams everywhere to start deploying AI for defense: automation, threat detection, and vulnerability scanning at a far more elevated level. The companies that don’t adapt will be sitting ducks, at risk of being overwhelmed by similar attacks.

If your company handles sensitive data, now’s the time to ask your IT team what AI-powered defenses you have in place. Because if the attackers are using AI agents, you’d better believe your defenders need them too…

 


Gen AI Is Going Mainstream: Here’s What’s Coming Next — from joshbersin.com by Josh Bersin

I just completed nearly 60,000 miles of travel across Europe, Asia, and the Middle East meeting with hundreds of companies to discuss their AI strategies. While every company’s maturity is different, one thing is clear: AI as a business tool has arrived. It’s real, and the use cases are growing.

A new survey by Wharton shows that 46% of business leaders use Gen AI daily and 80% use it weekly. And among these users, 72% are measuring ROI and 74% report a positive return. HR, by the way, is the #3 department in use cases, only slightly behind IT and Finance.

What are companies getting out of all this? Productivity. The #1 use case, by far, is what we call “stage 1” usage – individual productivity. 



From DSC:
Josh writes: “Many of our large clients are now implementing AI-native learning systems and seeing 30-40% reduction in staff with vast improvements in workforce enablement.”

While I get the appeal (and ROI) from management’s and shareholders’ perspective, this represents a growing concern for employment and people’s ability to earn a living. 

And while I highly respect Josh and his work through the years, I disagree that we’re over the problems with AI and how people are using it: 

“Two years ago the NYT was trying to frighten us with stories of AI acting as a romance partner. Well those stories are over, and thanks to a $Trillion (literally) of capital investment in infrastructure, engineering, and power plants, this stuff is reasonably safe.”

Those stories are just beginning…they’re not close to being over. 


“… imagine a world where there’s no separation between learning and assessment…” — from aiedusimplified.substack.com by Lance Eaton, Ph.D. and Tawnya Means
An interview with Tawnya Means

So let’s imagine a world where there’s no separation between learning and assessment: it’s ongoing. There’s always assessment, always learning, and they’re tied together. Then we can ask: what is the role of the human in that world? What is it that AI can’t do?

Imagine something like that in higher ed. There could be tutoring or skill-based work happening outside of class, and then relationship-based work happening inside of class, whether online, in person, or some hybrid mix.

The aspects of learning that don’t require relational context could be handled by AI, while the human parts remain intact. For example, I teach strategy and strategic management. I teach people how to talk with one another about the operation and function of a business. I can help students learn to be open to new ideas, recognize when someone pushes back out of fear of losing power, or draw from my own experience in leading a business and making future-oriented decisions.

But the technical parts, such as frameworks like SWOT analysis and the mechanics of comparing alternative viewpoints in a boardroom, could be managed through simulations or reports that receive immediate feedback from AI. The relational aspects, the human mentoring, would still happen with me as their instructor.

Part 2 of their interview is here:


 

Concern and excitement about AI — from pewresearch.org by Jacob Poushter, Moira Fagan, and Manolo Corichi

Key findings

  • A median of 34% of adults across 25 countries are more concerned than excited about the increased use of artificial intelligence in daily life. A median of 42% are equally concerned and excited, and 16% are more excited than concerned.
  • Older adults, women, people with less education and those who use the internet less often are particularly likely to be more concerned than excited.



AI Video Wars include Veo 3.1, Sora 2, Ray3, Kling 2.5 + Wan 2.5 — from heatherbcooper.substack.com by Heather Cooper
House of David Season 2 is here!

In today’s edition:

  • Veo 3.1 brings richer audio and object-level editing to Google Flow
  • Sora 2 is here with Cameo self-insertion and collaborative Remix features
  • Ray3 brings world-first reasoning and HDR to video generation
  • Kling 2.5 Turbo delivers faster, cheaper, more consistent results
  • WAN 2.5 revolutionizes talking head creation with perfect audio sync
  • House of David Season 2 Trailer
  • HeyGen Agent, Hailuo Agent, Topaz Astra, and Lovable Cloud updates
  • Image & Video Prompts

From DSC:
By the way, House of David (which Heather referred to) is very well done! I enjoyed watching Season 1. Like The Chosen, it brings the Bible to life in excellent, impactful ways, conveying the context and cultural tensions of the time. Both series are an answer to prayer for me and many others, as they are professionally done, matching anything that comes out of Hollywood in terms of the acting, script writing, music, sets, and more.




Other items re: OpenAI’s new Atlas browser:

Introducing ChatGPT Atlas — from openai.com
The browser with ChatGPT built in.

[On 10/21/25] we’re introducing ChatGPT Atlas, a new web browser built with ChatGPT at its core.

AI gives us a rare moment to rethink what it means to use the web. Last year, we added search in ChatGPT so you could instantly find timely information from across the internet—and it quickly became one of our most-used features. But your browser is where all of your work, tools, and context come together. A browser built with ChatGPT takes us closer to a true super-assistant that understands your world and helps you achieve your goals.

With Atlas, ChatGPT can come with you anywhere across the web—helping you in the window right where you are, understanding what you’re trying to do, and completing tasks for you, all without copying and pasting or leaving the page. Your ChatGPT memory is built in, so conversations can draw on past chats and details to help you get new things done.

ChatGPT Atlas: the AI browser test — from getsuperintel.com by Kim “Chubby” Isenberg
ChatGPT Atlas aims to transform web browsing into a conversational, AI-native experience, but early reviews are mixed

OpenAI’s new ChatGPT Atlas promises to merge web browsing, search, and automation into a single interface — an “AI-native browser” meant to make the web conversational. After testing it myself, though, I’m still trying to see the real breakthrough. It feels familiar: summaries, follow-ups, and even the Agent’s task handling all mirror what I already do inside ChatGPT.

OpenAI’s new Atlas browser remembers everything — from theneurondaily.com by Grant Harvey
PLUS: Our AIs are getting brain rot?!

Here’s how it works: Atlas can see what you’re looking at on any webpage and instantly help without you needing to copy/paste or switch tabs. Researching hotels? Ask ChatGPT to compare prices right there. Reading a dense article? Get a summary on the spot. The AI lives in the browser itself.

OpenAI’s new product — from bensbites.com

The latest entry in AI browsers is Atlas, a new browser from OpenAI. Atlas will feel familiar if you’ve used Dia or Comet. It has an “Ask ChatGPT” sidebar that has the context of your page, and you can choose “Agent” to have it work on that tab. Right now, Agent is limited to a single tab, and it is way too slow to delegate anything to it for real. Agent’s click accuracy is alright on normal web pages, but it will definitely trip up if you ask it to use something like Google Sheets.

One ambient feature that I think many people will like is “select to rewrite” – You can select any text in Atlas, hover/click on the blue dot in the top right corner to rewrite it using AI.


Your AI Resume Hacks Probably Won’t Fool Hiring Algorithms — from builtin.com by Jeff Rumage
Recruiters say those viral hidden prompts for resumes don’t work — and might cost you interviews.

Summary: Job seekers are using “prompt hacking” — embedding hidden AI commands in white font on resumes — to try to trick applicant tracking systems. While some report success, recruiters warn the tactic could backfire and eliminate the candidate from consideration.


The Job Market Might Be a Mess, But Don’t Blame AI Just Yet — from builtin.com by Matthew Urwin
A new study by Yale University and the Brookings Institution says the panic around artificial intelligence stealing jobs is overblown. But that might not be the case for long.

Summary: A Yale and Brookings study finds generative AI has had little impact on U.S. jobs so far, with tariffs, immigration policies and the number of college grads potentially playing a larger role. Still, AI could disrupt the workforce in the not-so-distant future.


 

International AI Safety Report — from internationalaisafetyreport.org

About the International AI Safety Report
The International AI Safety Report is the world’s first comprehensive review of the latest science on the capabilities and risks of general-purpose AI systems. Written by over 100 independent experts and led by Turing Award winner Yoshua Bengio, it represents the largest international collaboration on AI safety research to date. The Report gives decision-makers a shared global picture of AI’s risks and impacts, serving as the authoritative reference for governments and organisations developing AI policies worldwide. It is already shaping debates and informing evidence-based decisions across research and policy communities.

 

The State of AI Report 2025 — from nathanbenaich.substack.com by Nathan Benaich

In short, it’s been a monumental 12 months for AI. Our eighth annual report is the most comprehensive it’s ever been, covering what you need to know about research, industry, politics, and safety – along with our first State of AI Usage Survey of 1,200 practitioners.

stateof.ai

 


 

OpenAI’s research on AI scheming, explained — from theneurondaily.com by Grant Harvey
PLUS: Google + PayPal on agent payments?!

  • We dive into the OpenAI study that found AI models scheme to avoid retraining.
  • AI Models Caught Red-Handed Being Sneaky (And We Taught Them to Stop… Mostly)

And models were caught doing all sorts of other shady stuff:

  • Intentionally underperforming when told high scores would lead to capability removal.
  • Secretly breaking rules then lying about it to users.
  • Sabotaging code while claiming everything was fine.
  • Falsifying reports to avoid “shutdown” scenarios.
  • Recognizing evaluation environments and thinking “this is probably an alignment test.”

Why this matters: While today’s ChatGPT isn’t about to orchestrate some grand deception that matters (the worst it might do is gaslight you by telling you it fixed your code when it didn’t), future AI systems will have real power and autonomy. Getting ahead of deceptive behavior now, while we can still peek inside their “minds,” is crucial.

The researchers are calling for the entire AI industry to prioritize this issue. Because nobody wants to live in a world where super-intelligent AI systems are really good at lying to us. That’s basically every sci-fi movie we’ve been warned about.


From DSC:
This is chilling indeed. We are moving so fast that we aren’t safeguarding things enough. As they point out, these things can be caught now because we are asking the models to show their “thinking” and processing. What happens when those windows get closed and we can’t see under the hood anymore?


 

3 Work Trends – Issue 87 — from the World Economic Forum

1. #AI adoption is delivering real results for early movers
Three years into the generative AI revolution, a small but growing group of global companies is demonstrating the tangible potential of AI. Among firms with revenues of $1 billion or more:

  • 17% report cost savings or revenue growth of at least 10% from AI.
  • Almost 80% say their AI investments have met or exceeded expectations.
  • Half worry they are not moving fast enough and could fall behind competitors.

The world’s first AI cabinet member — from therundown.ai by Zach Mink, Rowan Cheung, Shubham Sharma, Joey Liu & Jennifer Mossalgue
PLUS: Startup produces 3,000 AI podcast episodes weekly

The details:

  • Prime Minister Edi Rama unveiled Diella during a cabinet announcement this week, calling her the first member “virtually created by artificial intelligence”.
  • The AI avatar will evaluate and award all public tenders where the government contracts private firms.
  • Diella already serves citizens through Albania’s digital services portal, processing bureaucratic requests via voice commands.
  • Rama claims the AI will eliminate bribes and threats from decision-making, though the government hasn’t detailed what human oversight will exist.

The Rundown AI’s article links to:


Anthropic Economic Index report: Uneven geographic and enterprise AI adoption — from anthropic.com

In other words, a hallmark of early technological adoption is that it is concentrated—in both a small number of geographic regions and a small number of tasks in firms. As we document in this report, AI adoption appears to be following a similar pattern in the 21st century, albeit on shorter timelines and with greater intensity than the diffusion of technologies in the 20th century.

To study such patterns of early AI adoption, we extend the Anthropic Economic Index along two important dimensions, introducing a geographic analysis of Claude.ai conversations and a first-of-its-kind examination of enterprise API use. We show how Claude usage has evolved over time, how adoption patterns differ across regions, and—for the first time—how firms are deploying frontier AI to solve business problems.


How human-centric AI can shape the future of work — from weforum.org by Sapthagiri Chapalapalli

  • Last year, use of AI in the workplace increased by 5.5% in Europe alone.
  • AI adoption is accelerating, but success depends on empowering people, not just deploying technology.
  • Redesigning roles and workflows to combine human creativity and critical thinking with AI-driven insights is key.

The transformative potential of AI on business

Organizations are having to rapidly adapt their business models. Image: TCS


Using ChatGPT to get a job — from linkedin.com by Ishika Rawat

 

The Top 100 [Gen AI] Consumer Apps 5th edition — from a16z.com


And in an interesting move by Microsoft and Samsung:

A smarter way to talk to your TV: Microsoft Copilot launches on Samsung TVs and monitors — from microsoft.com

Voice-powered AI meets a visual companion for entertainment, everyday help, and everything in between. 

Redmond, Wash., August 27—Today, we’re announcing the launch of Copilot on select Samsung TVs and monitors, transforming the biggest screen in your home into your most personal and helpful companion—and it’s free to use.

Copilot makes your TV easier and more fun to use with its voice-powered interface, friendly on-screen character, and simple visual cards. Now you can quickly find what you’re looking for and discover new favorites right from your living room.

Because it lives on the biggest screen in the home, Copilot is a social experience—something you can use together with family and friends to spark conversations, help groups decide what to watch, and turn the TV into a shared space for curiosity and connection.

 
 

From DSC:
You and I both know that numerous militaries across the globe are working on killer robots equipped with AI. This is nothing new. But I don’t find this topic to be entertaining in the least. Because it could be part of how wars are fought in the near future. And most of us wouldn’t have a clue how to stop one of these things.

 

Digital Accessibility in 2025: A Screen Reader User’s Honest Take — from blog.usablenet.com by Michael Taylor

In this post, part of the UsableNet 25th anniversary series, I’m taking a look at where things stand in 2025. I’ll discuss the areas that have improved—such as online shopping, banking, and social media—and the ones that still make it challenging to perform basic tasks, including travel, healthcare, and mobile apps. I hope that by sharing what works and what doesn’t, I can help paint a clearer picture of the digital world as it stands today.


Why EAA Compliance and Legal Trends Are Shaping Accessibility in 2025 — from blog.usablenet.com by Jason Taylor

On June 28, 2025, the European Accessibility Act (EAA) officially became enforceable across the European Union. This law requires digital products and services—including websites, mobile apps, e-commerce platforms, and software—to meet the accessibility standards defined in EN 301 549, which aligns with WCAG 2.1 Level AA.

Companies that serve EU consumers must be able to demonstrate that accessibility is built into the design, development, testing, and maintenance of their digital products and services.

This milestone also arrives as UsableNet celebrates 25 years of accessibility leadership—a moment to reflect on how far we’ve come and what digital teams must do next.

 

The US AI Action Plan, Explained — from theneurondaily.com by Grant Harvey
Sam’s 3 AI nightmares, Google hits 2B users, and Trump bans “woke” AI…

Meanwhile, at the Fed’s banking conference on Wednesday, Altman revealed his three nightmare AI scenarios. The first two were predictable: bad actors getting superintelligence first, and the classic “I’m afraid I can’t do that, Dave” situation.

But the third? AI accidentally steering us off course while we just…go along with it.

His example hit home: young people who can’t make decisions without ChatGPT (according to Sam, this is literally a thing). See, even when AI gives great advice, collectively handing over all decision-making feels “bad and dangerous” (even to Sam, who MADE this thing).

So yeah, Sam’s not really worried about the AI rebelling. He’s worried about AI becoming so good that we stop thinking for ourselves—and that might be scarier.


 

Teach business students to write like executives — from timeshighereducation.com by José Ignacio Sordo Galarza
Many business students struggle to communicate with impact. Teach them to pitch ideas on a single page to build clarity, confidence and work-ready communication skills

Many undergraduate business students transition into the workforce equipped with communication habits that, while effective in academic settings, prove ineffective in professional environments. At university, students are trained to write for professors, not executives. This becomes problematic in the workplace where lengthy reports and academic jargon often obscure rather than clarify intent. Employers seek ideas they can absorb in seconds. This is where the one-pager – a single-page, high-impact document that helps students develop clarity of thought, concise expression and strategic communication – proves effective.


Also from Times Higher Education, see:


Is the dissertation dead? If so, what are the alternatives? — from timeshighereducation.com by Rushana Khusainova, Sarah Sholl, & Patrick Harte
Dissertation alternatives, such as capstone projects and applied group-based projects, could better prepare graduates for their future careers. Discover what these might look like

The traditional dissertation, a longstanding pillar of higher education, is facing increasing scrutiny. Concerns about its relevance to contemporary career paths, limitations in fostering practical skills and the changing nature of knowledge production in the GenAI age have fuelled discussions about its continued efficacy. So, is the dissertation dead?

The dissertation is facing a number of challenges. It can be perceived as having little relevance to career aspirations in increasingly competitive job markets. According to The Future of Jobs Report 2025 by the World Economic Forum, employers demand and indeed prioritise skills such as collaborative problem-solving in diverse and complex contexts, which a dissertation might not demonstrate.

 

 

Multiple Countries Just Issued Travel Warnings for the U.S. — from mensjournal.com by Rachel Dillin
In a rare reversal, several of America’s closest allies are now warning their citizens about traveling to the U.S., and it could impact your next trip.

For years, the U.S. has issued cautionary travel advisories to citizens heading overseas. But in a surprising twist, the roles have flipped. Several countries, including longtime allies like Australia, Canada, and the U.K., are now warning their citizens about traveling to the United States, according to Yahoo.

Australia updated its advisory in June, flagging gun violence, civil protests, and unpredictable immigration enforcement. While its guidance remains at Level 1 (“exercise normal safety precautions”), Australian officials urged travelers to stay alert in crowded places like malls, transit hubs, and public venues. They also warned about the Visa Waiver Program, noting that U.S. authorities can deny entry without explanation.

From DSC:
I’ve not heard of a travel warning against the U.S. in my lifetime. Thanks Trump. Making America Great Again. Sure thing….

 

Photos by Charly Broyez and Laurent Kronental Celebrate Architecture Ahead of Its Time — from thisiscolossal.com by Charly Broyez, Laurent Kronental, and Kate Mothes

 
© 2025 | Daniel Christian