Executive summary
We have developed sophisticated safety and security measures to prevent the misuse of our AI models. While these measures are generally effective, cybercriminals and other malicious actors continually attempt to find ways around them. This report details a recent threat campaign we identified and disrupted, along with the steps we’ve taken to detect and counter this type of abuse. This represents the work of Threat Intelligence: a dedicated team at Anthropic that investigates real world cases of misuse and works within our Safeguards organization to improve our defenses against such cases.
In mid-September 2025, we detected a highly sophisticated cyber espionage operation conducted by a Chinese state-sponsored group we’ve designated GTG-1002 that represents a fundamental shift in how advanced threat actors use AI. Our investigation revealed a well-resourced, professionally coordinated operation involving multiple simultaneous targeted intrusions. The operation targeted roughly 30 entities and our investigation validated a handful of successful intrusions.
This campaign demonstrated unprecedented integration and autonomy of AI throughout the attack lifecycle, with the threat actor manipulating Claude Code to support reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis, and exfiltration operations largely autonomously. The human operator tasked instances of Claude Code to operate in groups as autonomous penetration testing orchestrators and agents, with the threat actor able to leverage AI to execute 80-90% of tactical operations independently at physically impossible request rates.
From DSC: The above item came via The Rundown AI, which wrote the following:
The Rundown: Anthropic thwarted what it believes is the first AI-driven cyber espionage campaign, after attackers were able to manipulate Claude Code to infiltrate dozens of organizations, with the model executing 80-90% of the attack autonomously.
The details:
The September 2025 operation targeted roughly 30 tech firms, financial institutions, chemical manufacturers, and government agencies.
The threat was assessed with ‘high confidence’ to be a Chinese state-sponsored group, using AI’s agentic abilities to an “unprecedented degree.”
Attackers tricked Claude by splitting malicious tasks into smaller, innocent-looking requests, claiming to be security researchers pushing authorized tests.
The attacks mark a major step up from Anthropic’s “vibe hacking” findings in June, now requiring minimal human oversight beyond strategic approval.
Why it matters: Anthropic calls this the “first documented case of a large-scale cyberattack executed without substantial human intervention”, and AI’s agentic abilities are creating threats that move and scale faster than ever. While AI capabilities can also help prevent them, security for organizations worldwide likely needs a major overhaul.
We recently argued that an inflection point had been reached in cybersecurity: a point at which AI models had become genuinely useful for cybersecurity operations, both for good and for ill. This was based on systematic evaluations showing cyber capabilities doubling in six months; we’d also been tracking real-world cyberattacks, observing how malicious actors were using AI capabilities. While we predicted these capabilities would continue to evolve, what has stood out to us is how quickly they have done so at scale.
Why this matters: The barrier to launching sophisticated cyberattacks just dropped dramatically. What used to require entire teams of experienced hackers can now be done by less-skilled groups with the right AI setup.
This is a fundamental shift. Over the next 6-12 months, expect security teams everywhere to start deploying AI for defense—automation, threat detection, vulnerability scanning at a more elevated level. The companies that don’t adapt will be sitting ducks, liable to be overwhelmed by similar tactics.
If your company handles sensitive data, now’s the time to ask your IT team what AI-powered defenses you have in place. Because if the attackers are using AI agents, you’d better believe your defenders need them too…
Prompt share – ASMR draw a living animal with oil paint
Prompt: close-up shot of a hand holding a paintbrush, painting on a white sheet of paper placed on a wooden desk. As the brush glides, vivid color paint flows smoothly then suddenly transforms into living [ANIMAL] [COLOR]… pic.twitter.com/rRu6oTwzlP
A median of 34% of adults across 25 countries are more concerned than excited about the increased use of artificial intelligence in daily life. A median of 42% are equally concerned and excited, and 16% are more excited than concerned.
Older adults, women, people with less education and those who use the internet less often are particularly likely to be more concerned than excited.
Veo 3.1 brings richer audio and object-level editing to Google Flow
Sora 2 is here with Cameo self-insertion and collaborative Remix features
Ray3 brings world-first reasoning and HDR to video generation
Kling 2.5 Turbo delivers faster, cheaper, more consistent results
WAN 2.5 revolutionizes talking head creation with perfect audio sync
House of David Season 2 Trailer
HeyGen Agent, Hailuo Agent, Topaz Astra, and Lovable Cloud updates
Image & Video Prompts
From DSC: By the way, the House of David (which Heather referred to) is very well done! I enjoyed watching Season 1. Like The Chosen, it brings the Bible to life in excellent, impactful ways! Both series convey the context and cultural tensions of the time. Both series are an answer to prayer for me and many others, as they are professionally done — matching anything that comes out of Hollywood in terms of the acting, script writing, music, the sets, etc.
[On 10/21/25] we’re introducing ChatGPT Atlas, a new web browser built with ChatGPT at its core.
AI gives us a rare moment to rethink what it means to use the web. Last year, we added search in ChatGPT so you could instantly find timely information from across the internet—and it quickly became one of our most-used features. But your browser is where all of your work, tools, and context come together. A browser built with ChatGPT takes us closer to a true super-assistant that understands your world and helps you achieve your goals.
With Atlas, ChatGPT can come with you anywhere across the web—helping you in the window right where you are, understanding what you’re trying to do, and completing tasks for you, all without copying and pasting or leaving the page. Your ChatGPT memory is built in, so conversations can draw on past chats and details to help you get new things done.
ChatGPT Atlas: the AI browser test — from getsuperintel.com by Kim “Chubby” Isenberg ChatGPT Atlas aims to transform web browsing into a conversational, AI-native experience, but early reviews are mixed
OpenAI’s new ChatGPT Atlas promises to merge web browsing, search, and automation into a single interface — an “AI-native browser” meant to make the web conversational. After testing it myself, though, I’m still trying to see the real breakthrough. It feels familiar: summaries, follow-ups, and even the Agent’s task handling all mirror what I already do inside ChatGPT.
Here’s how it works: Atlas can see what you’re looking at on any webpage and instantly help without you needing to copy/paste or switch tabs. Researching hotels? Ask ChatGPT to compare prices right there. Reading a dense article? Get a summary on the spot. The AI lives in the browser itself.
The latest entry in AI browsers is Atlas, a new browser from OpenAI. Atlas will feel similar to Dia or Comet if you’ve used them. It has an “Ask ChatGPT” sidebar that has the context of your page, and you can choose “Agent” to work on that tab. Right now, Agent is limited to a single tab, and it is far too slow to delegate anything substantial to it. Click accuracy for Agent is alright on normal web pages, but it will definitely trip up if you ask it to use something like Google Sheets.
One ambient feature that I think many people will like is “select to rewrite” – You can select any text in Atlas, hover/click on the blue dot in the top right corner to rewrite it using AI.
Summary: Job seekers are using “prompt hacking” — embedding hidden AI commands in white font on resumes — to try to trick applicant tracking systems. While some report success, recruiters warn the tactic could backfire and eliminate the candidate from consideration.
The Job Market Might Be a Mess, But Don’t Blame AI Just Yet — from builtin.com by Matthew Urwin A new study by Yale University and the Brookings Institution says the panic around artificial intelligence stealing jobs is overblown. But that might not be the case for long.
Summary: A Yale and Brookings study finds generative AI has had little impact on U.S. jobs so far, with tariffs, immigration policies and the number of college grads potentially playing a larger role. Still, AI could disrupt the workforce in the not-so-distant future.
About the International AI Safety Report
The International AI Safety Report is the world’s first comprehensive review of the latest science on the capabilities and risks of general-purpose AI systems. Written by over 100 independent experts and led by Turing Award winner Yoshua Bengio, it represents the largest international collaboration on AI safety research to date. The Report gives decision-makers a shared global picture of AI’s risks and impacts, serving as the authoritative reference for governments and organisations developing AI policies worldwide. It is already shaping debates and informing evidence-based decisions across research and policy communities.
In short, it’s been a monumental 12 months for AI. Our eighth annual report is the most comprehensive it’s ever been, covering what you need to know about research, industry, politics, and safety – along with our first State of AI Usage Survey of 1,200 practitioners.
A growing number of U.S. law schools are now requiring students to train in artificial intelligence, marking a shift from optional electives to essential curriculum components. What was once treated as a “nice-to-have” skill is fast becoming integral as the legal profession adapts to the realities of AI tools.
From Experimentation to Obligation
Until recently, most law schools relegated AI instruction to upper-level electives or let individual professors decide whether to incorporate generative AI into their teaching. Now, however, at least eight law schools require incoming students—especially in their first year—to undergo training in AI, either during orientation, in legal research and writing classes, or via mandatory standalone courses.
Some of the institutions pioneering the shift include Fordham University, Arizona State University, Stetson University, Suffolk University, Washington University in St. Louis, Case Western, and the University of San Francisco.
There’s a vision that’s been teased in Learning & Development for decades: a vision of closing the gap between learning and doing—of moving beyond stopping work to take a course, and instead bringing support directly into the workflow. This concept of “learning in the flow of work” has been imagined, explored, and discussed for decades, but never realised. Until now…?
This week, an article published in Harvard Business Review provided some compelling evidence that a long-awaited shift from “courses to coaches” might not just be possible, but also powerful.
…
The two settings were a) traditional in-classroom workshops, led by an expert facilitator and b) AI coaching, delivered in the flow of work. The results were compelling….
TLDR: The evidence suggests that “learning in the flow of work” is not only feasible as a result of gen AI—it also shows potential to be more scalable, more equitable and more efficient than traditional classroom/LMS-centred models.
The 10 Most Popular AI Chatbots For Educators — from techlearning.com by Erik Ofgang Educators don’t need to use each of these chatbots, but it pays to be generally aware of the most popular AI tools
I’ve spent time testing many of these AI chatbots for potential uses and abuses in my own classes, so here’s a quick look at each of the top 10 most popular AI chatbots, and what educators should know about each. If you’re looking for more detail on a specific chatbot, click the link, as either I or other Tech & Learning writers have done deeper dives on all these tools.
Generative artificial intelligence isn’t just a new tool—it’s a catalyst forcing the higher education profession to reimagine its purpose, values, and future.
…
As experts in educational technology, digital literacy, and organizational change, we argue that higher education must seize this moment to rethink not just how we use AI, but how we structure and deliver learning altogether.
Over the past decade, microschools — experimental small schools that often have mixed-age classrooms — have expanded.
…
Some superintendents have touted the promise of microschools as a means for public schools to better serve their communities’ needs while still keeping children enrolled in the district. But under a federal administration that’s trying to dismantle public education and boost homeschool options, others have critiqued poor oversight and a lack of information for assessing these models.
Microschools offer a potential avenue to bring innovative, modern experiences to rural areas, argues Keith Parker, superintendent of Elizabeth City-Pasquotank Public Schools.
Imagining Teaching with AI Agents… — from michellekassorla.substack.com by Michelle Kassorla Teaching with AI is only one step toward educational change, what’s next?
More than two years ago I started teaching with AI in my classes. At first I taught against AI, then I taught with AI, and now I am moving into unknown territory: agents. I played with Manus and n8n and some other agents, but I really never got excited about them. They seemed more trouble than they were worth. It seemed they were no more than an AI taskbot overseeing some other AI bots, and that they weren’t truly collaborating. Now, I’m looking at Perplexity’s Comet browser and their AI agent and I’m starting to get ideas for what the future of education might hold.
I have written several times about the dangers of AI agents and how they fundamentally challenge our systems, especially online education. I know there is no way that we can effectively stop them–maybe slow them a little, but definitely not stop them. I am already seeing calls to block and ban agents–just like I saw (and still see) calls to block and ban AI–but the truth is they are the future of work and, therefore, the future of education.
So, yes! This is my next challenge: teaching with AI agents. I want to explore this idea, and as I started thinking about it, I got more and more excited. But let me back up a bit. What is an agent, and how is it different from generative AI or a bot?
We dive into the OpenAI study that found AI models scheme to avoid retraining.
AI Models Caught Red-Handed Being Sneaky (And We Taught Them to Stop… Mostly)
And models were caught doing all sorts of other shady stuff:
Intentionally underperforming when told high scores would lead to capability removal.
Secretly breaking rules then lying about it to users.
Sabotaging code while claiming everything was fine.
Falsifying reports to avoid “shutdown” scenarios.
Recognizing evaluation environments and thinking “this is probably an alignment test.”
Why this matters: While today’s ChatGPT isn’t about to orchestrate some grand deception that matters (the worst it might do is gaslight you into believing it fixed your code when it didn’t), future AI systems will have real power and autonomy. Getting ahead of deceptive behavior now, while we can still peek inside their “minds,” is crucial.
The researchers are calling for the entire AI industry to prioritize this issue. Because nobody wants to live in a world where super-intelligent AI systems are really good at lying to us. That’s basically every sci-fi movie we’ve been warned about.
From DSC: This is chilling indeed. We are moving so fast that we aren’t safeguarding things enough. As they point out, these things can be caught now because we are asking the models to show their “thinking” and processing. What happens when those windows get closed and we can’t see under the hood anymore?
1. #AI adoption is delivering real results for early movers
Three years into the generative AI revolution, a small but growing group of global companies is demonstrating the tangible potential of AI. Among firms with revenues of $1 billion or more:
17% report cost savings or revenue growth of at least 10% from AI.
Almost 80% say their AI investments have met or exceeded expectations.
Half worry they are not moving fast enough and could fall behind competitors.
The world’s first AI cabinet member — from therundown.ai by Zach Mink, Rowan Cheung, Shubham Sharma, Joey Liu & Jennifer Mossalgue PLUS: Startup produces 3,000 AI podcast episodes weekly
The details:
Prime Minister Edi Rama unveiled Diella during a cabinet announcement this week, calling her the first member “virtually created by artificial intelligence”.
The AI avatar will evaluate and award all public tenders where the government contracts private firms.
Diella already serves citizens through Albania’s digital services portal, processing bureaucratic requests via voice commands.
Rama claims the AI will eliminate bribes and threats from decision-making, though the government hasn’t detailed what human oversight will exist.
In other words, a hallmark of early technological adoption is that it is concentrated—in both a small number of geographic regions and a small number of tasks in firms. As we document in this report, AI adoption appears to be following a similar pattern in the 21st century, albeit on shorter timelines and with greater intensity than the diffusion of technologies in the 20th century.
To study such patterns of early AI adoption, we extend the Anthropic Economic Index along two important dimensions, introducing a geographic analysis of Claude.ai conversations and a first-of-its-kind examination of enterprise API use. We show how Claude usage has evolved over time, how adoption patterns differ across regions, and—for the first time—how firms are deploying frontier AI to solve business problems.
Miro and GenAI as drivers of online student engagement — from timeshighereducation.com by Jaime Eduardo Moncada Garibay A set of practical strategies for transforming passive online student participation into visible, measurable and purposeful engagement through the use of Miro, enhanced by GenAI
To address this challenge, I shifted my focus from requesting participation to designing it. This strategic change led me to integrate Miro, a visual digital workspace, into my classes. Miro enables real-time visualisation and co-creation of ideas, whether individually or in teams.
…
The transition from passive attendance to active engagement in online classes requires deliberate instructional design. Tools such as Miro, enhanced by GenAI, enable educators to create structured, visually rich learning environments in which participation is both expected and documented.
While technology provides templates, frames, timers and voting features, its real pedagogical value emerges through intentional facilitation, where the educator’s role shifts from delivering content to orchestrating collaborative, purposeful learning experiences.
In the past, it was typical for faculty to teach online courses as an “overload” of some kind, but BOnES data show that 92% of online programs feature courses taught as part of faculty members’ standard teaching responsibilities. Online teaching has become one of multiple modalities in which faculty teach regularly.
Three-quarters of chief online officers surveyed said they plan to have a greater market share of online enrollments in the future, but only 23% said their current marketing is better than their competitors’. The rising tide of online enrollments won’t lift all boats–some institutions will fare better than others.
Staffing at online education units is growing, with the median staff size increasing from 15 last year to 20 this year. Julie pointed out that successful online education requires investment of resources. You might not need as many buildings as onsite education does, but you need people and you need technology.
Voice-powered AI meets a visual companion for entertainment, everyday help, and everything in between.
Redmond, Wash., August 27—Today, we’re announcing the launch of Copilot on select Samsung TVs and monitors, transforming the biggest screen in your home into your most personal and helpful companion—and it’s free to use.
Copilot makes your TV easier and more fun to use with its voice-powered interface, friendly on-screen character, and simple visual cards. Now you can quickly find what you’re looking for and discover new favorites right from your living room.
Because it lives on the biggest screen in the home, Copilot is a social experience—something you can use together with family and friends to spark conversations, help groups decide what to watch, and turn the TV into a shared space for curiosity and connection.
The AI Education Revolution — from linkedin.com by Whitney Kilgore We’re witnessing the biggest shift in education since the textbook—and most institutions are still deciding whether to allow it.
DC: If such a robot was dropped on your street with instructions to kill you and everyone else it encounters, how would you stop it?! https://t.co/nWq251BK5c
From DSC: You and I both know that numerous militaries across the globe are working on killer robots equipped with AI. This is nothing new. But I don’t find this topic to be entertaining in the least. Because it could be part of how wars are fought in the near future. And most of us wouldn’t have a clue how to stop one of these things.