Disrupting the first reported AI-orchestrated cyber espionage campaign — from Anthropic

Executive summary
We have developed sophisticated safety and security measures to prevent the misuse of our AI models. While these measures are generally effective, cybercriminals and other malicious actors continually attempt to find ways around them. This report details a recent threat campaign we identified and disrupted, along with the steps we’ve taken to detect and counter this type of abuse. This represents the work of Threat Intelligence: a dedicated team at Anthropic that investigates real world cases of misuse and works within our Safeguards organization to improve our defenses against such cases.

In mid-September 2025, we detected a highly sophisticated cyber espionage operation conducted by a Chinese state-sponsored group we’ve designated GTG-1002 that represents a fundamental shift in how advanced threat actors use AI. Our investigation revealed a well-resourced, professionally coordinated operation involving multiple simultaneous targeted intrusions. The operation targeted roughly 30 entities and our investigation validated a handful of successful intrusions.

This campaign demonstrated unprecedented integration and autonomy of AI throughout the attack lifecycle, with the threat actor manipulating Claude Code to support reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis, and exfiltration operations largely autonomously. The human operator tasked instances of Claude Code to operate in groups as autonomous penetration testing orchestrators and agents, with the threat actor able to leverage AI to execute 80-90% of tactical operations independently at physically impossible request rates.

From DSC:
The above item was from The Rundown AI, who wrote the following:

The Rundown: Anthropic thwarted what it believes is the first AI-driven cyber espionage campaign, after attackers were able to manipulate Claude Code to infiltrate dozens of organizations, with the model executing 80-90% of the attack autonomously.

The details:

  • The September 2025 operation targeted roughly 30 tech firms, financial institutions, chemical manufacturers, and government agencies.
  • The threat was assessed with ‘high confidence’ to be a Chinese state-sponsored group, using AI’s agentic abilities to an “unprecedented degree.”
  • Attackers tricked Claude by splitting malicious tasks into smaller, innocent-looking requests, claiming to be security researchers pushing authorized tests.
  • The attacks mark a major step up from Anthropic’s “vibe hacking” findings in June, now requiring minimal human oversight beyond strategic approval.

Why it matters: Anthropic calls this the “first documented case of a large-scale cyberattack executed without substantial human intervention”, and AI’s agentic abilities are creating threats that move and scale faster than ever. While AI capabilities can also help prevent them, security for organizations worldwide likely needs a major overhaul.


Also see:

Disrupting the first reported AI-orchestrated cyber espionage campaign — from anthropic.com via The AI Valley

We recently argued that an inflection point had been reached in cybersecurity: a point at which AI models had become genuinely useful for cybersecurity operations, both for good and for ill. This was based on systematic evaluations showing cyber capabilities doubling in six months; we’d also been tracking real-world cyberattacks, observing how malicious actors were using AI capabilities. While we predicted these capabilities would continue to evolve, what has stood out to us is how quickly they have done so at scale.

Chinese Hackers Used AI to Run a Massive Cyberattack on Autopilot (And It Actually Worked) — from theneurondaily.com

Why this matters: The barrier to launching sophisticated cyberattacks just dropped dramatically. What used to require entire teams of experienced hackers can now be done by less-skilled groups with the right AI setup.

This is a fundamental shift. Over the next 6-12 months, expect security teams everywhere to start deploying AI for defense—automation, threat detection, vulnerability scanning at a more elevated level. The companies that don’t adapt will be sitting ducks to get overwhelmed by similar tricks.

If your company handles sensitive data, now’s the time to ask your IT team what AI-powered defenses you have in place. Because if the attackers are using AI agents, you’d better believe your defenders need them too…

 


Gen AI Is Going Mainstream: Here’s What’s Coming Next — from joshbersin.com by Josh Bersin

I just completed nearly 60,000 miles of travel across Europe, Asia, and the Middle East meeting with hundred of companies to discuss their AI strategies. While every company’s maturity is different, one thing is clear: AI as a business tool has arrived: it’s real and the use-cases are growing.

A new survey by Wharton shows that 46% of business leaders use Gen AI daily and 80% use it weekly. And among these users, 72% are measuring ROI and 74% report a positive return. HR, by the way, is the #3 department in use cases, only slightly behind IT and Finance.

What are companies getting out of all this? Productivity. The #1 use case, by far, is what we call “stage 1” usage – individual productivity. 

.


From DSC:
Josh writes: “Many of our large clients are now implementing AI-native learning systems and seeing 30-40% reduction in staff with vast improvements in workforce enablement.

While I get the appeal (and ROI) from management’s and shareholders’ perspective, this represents a growing concern for employment and people’s ability to earn a living. 

And while I highly respect Josh and his work through the years, I disagree that we’re over the problems with AI and how people are using it: 

Two years ago the NYT was trying to frighten us with stories of AI acting as a romance partner. Well those stories are over, and thanks to a $Trillion (literally) of capital investment in infrastructure, engineering, and power plants, this stuff is reasonably safe.

Those stories are just beginning…they’re not close to being over. 


“… imagine a world where there’s no separation between learning and assessment…” — from aiedusimplified.substack.com by Lance Eaton, Ph.D. and Tawnya Means
An interview with Tawnya Means

So let’s imagine a world where there’s no separation between learning and assessment: it’s ongoing. There’s always assessment, always learning, and they’re tied together. Then we can ask: what is the role of the human in that world? What is it that AI can’t do?

Imagine something like that in higher ed. There could be tutoring or skill-based work happening outside of class, and then relationship-based work happening inside of class, whether online, in person, or some hybrid mix.

The aspects of learning that don’t require relational context could be handled by AI, while the human parts remain intact. For example, I teach strategy and strategic management. I teach people how to talk with one another about the operation and function of a business. I can help students learn to be open to new ideas, recognize when someone pushes back out of fear of losing power, or draw from my own experience in leading a business and making future-oriented decisions.

But the technical parts such as the frameworks like SWOT analysis, the mechanics of comparing alternative viewpoints in a boardroom—those could be managed through simulations or reports that receive immediate feedback from AI. The relational aspects, the human mentoring, would still happen with me as their instructor.

Part 2 of their interview is here:


 

How Coworking Spaces Are Becoming The Learning Ecosystems Of The Future — from hrfuture.net

What if your workspace helped you level up your career? Coworking spaces are becoming learning hubs where skills grow, ideas connect, and real-world education fits seamlessly into the workday.

Continuous learning has become a cornerstone of professional longevity, and flexible workspaces already encourage it through workshops, talks, and mentoring. Their true potential, however, may lie in becoming centers of industry-focused education that help professionals stay adaptable in a rapidly changing world of work.
.


.

What if forward-thinking workspaces and coworking centers became hubs of lifelong learning, integrating job-relevant training with accessible, real-world education?

For coworking operators, this raises important questions: Which types of learning thrive best in these environments, and how much do the design and layout of a space influence how people learn?

By exploring these questions and combining innovative programs with cutting-edge technology aligned to the future workforce, could coworking spaces ultimately become the classrooms of tomorrow?

 

…the above posting links to:

Higher Ed Is Sleepwalking Toward Obsolescence— And AI Won’t Be the Cause, Just the Accelerant — from substack.com by Steven Mintz
AI Has Exposed Higher Ed’s Hollow Core — The University Must Reinvent Itself or Fade

It begins with a basic reversal of mindset: Stop treating AI as a threat to be policed. Start treating it as the accelerant that finally forces us to build the education we should have created decades ago.

A serious institutional response would demand — at minimum — six structural commitments:

  • Make high-intensity human learning the norm.  …
  • Put active learning at the center, not the margins.  …
  • Replace content transmission with a focus on process.  …
  • Mainstream high-impact practices — stop hoarding them for honors students.  …
  • Redesign assessment to make learning undeniable.  …

And above all: Instructional design can no longer be a private hobby.


Teaching with AI: From Prohibition to Partnership for Critical Thinking — from facultyfocus.com by Michael Kiener, PhD, CRC

How to Integrate AI Developmentally into Your Courses

  • Lower-Level Courses: Focus on building foundational skills, which includes guided instruction on how to use AI responsibly. This moves the strategy beyond mere prohibition.
  • Mid-Level Courses: Use AI as a scaffold where faculty provide specific guidelines on when and how to use the tool, preparing students for greater independence.
  • Upper-Level/Graduate Courses: Empower students to evaluate AI’s role in their learning. This enables them to become self-regulated learners who make informed decisions about their tools.
  • Balanced Approach: Make decisions about AI use based on the content being learned and students’ developmental needs.

Now that you have a framework for how to conceptualize including AI into your courses here are a few ideas on scaffolding AI to allow students to practice using technology and develop cognitive skills.




80 per cent of young people in the UK are using AI for their schoolwork — from aipioneers.org by Graham Attwell

What was encouraging, though, is that students aren’t just passively accepting this new reality. They are actively asking for help. Almost half want their teachers to help them figure out what AI-generated content is trustworthy, and over half want clearer guidelines on when it’s appropriate to use AI in their work. This isn’t a story about students trying to cheat the system; it’s a story about a generation grappling with a powerful new technology and looking to their educators for guidance. It echoes a sentiment I heard at the recent AI Pioneers’ Conference – the issue of AI in education is fundamentally pedagogical and ethical, not just technological.


 

The new legal intelligence — from jordanfurlong.substack.com by Jordan Furlong
We’ve built machines that can reason like lawyers. Artificial legal intelligence is becoming scalable, portable and accessible in ways lawyers are not. We need to think hard about the implications.

Much of the legal tech world is still talking about Clio CEO Jack Newton’s keynote at last week’s ClioCon, where he announced two major new features: the “Intelligent Legal Work Platform,” which combines legal research, drafting and workflow into a single legal workspace; and “Clio for Enterprise,” a suite of legal work offerings aimed at BigLaw.

Both these features build on Clio’s out-of-nowhere $1B acquisition of vLex (and its legally grounded LLM Vincent) back in June.

A new source of legal intelligence has entered the legal sector.

Legal intelligence, once confined uniquely to lawyers, is now available from machines. That’s going to transform the legal sector.


Where the real action is: enterprise AI’s quiet revolution in legal tech and beyond — from canadianlawyermag.com by Tim Wilbur
Harvey, Clio, and Cohere signal that organizational solutions will lead the next wave of change

The public conversation about artificial intelligence is dominated by the spectacular and the controversial: deepfake videos, AI-induced psychosis, and the privacy risks posed by consumer-facing chatbots like ChatGPT. But while these stories grab headlines, a quieter – and arguably more transformative – revolution is underway in enterprise software. In legal technology, in particular, AI is rapidly reshaping how law firms and legal departments operate and compete. This shift is just one example of how enterprise AI, not just consumer AI, is where real action is happening.

Both Harvey and Clio illustrate a crucial point: the future of legal tech is not about disruption for its own sake, but partnership and integration. Harvey’s collaborations with LexisNexis and others are about creating a cohesive experience for law firms, not rendering them obsolete. As Pereira put it, “We don’t see it so much as disruption. Law firms actually already do this… We see it as ‘how do we help you build infrastructure that supercharges this?’”

The rapid evolution in legal tech is just one example of a broader trend: the real action in AI is happening in enterprise software, not just in consumer-facing products. While ChatGPT and Google’s Gemini dominate the headlines, companies like Cohere are quietly transforming how organizations across industries leverage AI.

Also from canadianlawyermag.com, see:

The AI company’s plan to open an office in Toronto isn’t just about expanding territory – it’s a strategic push to tap into top technical talent and capture a market known for legal innovation.


Unseeable prompt injections in screenshots: more vulnerabilities in Comet and other AI browsers — from brave.com by Artem Chaikin and Shivan Kaul Sahib

Building on our previous disclosure of the Perplexity Comet vulnerability, we’ve continued our security research across the agentic browser landscape. What we’ve found confirms our initial concerns: indirect prompt injection is not an isolated issue, but a systemic challenge facing the entire category of AI-powered browsers. This post examines additional attack vectors we’ve identified and tested across different implementations.

As we’ve written before, AI-powered browsers that can take actions on your behalf are powerful yet extremely risky. If you’re signed into sensitive accounts like your bank or your email provider in your browser, simplysummarizing a Reddit postcould result in an attacker being able to steal money or your private data.

The above item was mentioned by Grant Harvey out at The Neuron in the following posting:


Robin AI’s Big Bet on Legal Tech Meets Market Reality — from lawfuel.com

Robin’s Legal Tech Backfire
Robin AI, the poster child for the “AI meets law” revolution, is learning the hard way that venture capital fairy dust doesn’t guarantee happily-ever-after. The London-based legal tech firm, once proudly waving its genAI-plus-human-experts flag, is now cutting staff after growth dreams collided with the brick wall of economic reality.

The company confirmed that redundancies are under way following a failed major funding push. Earlier promises of explosive revenue have fizzled. Despite around $50 million in venture cash over the past two years, Robin’s 2025 numbers have fallen short of investor expectations. The team that once ballooned to 200 is now shrinking.

The field is now swarming with contenders: CLM platforms stuffing genAI into every feature, corporate legal teams bypassing vendors entirely by prodding ChatGPT directly, and new entrants like Harvey and Legora guzzling capital to bulldoze into the market. Even Workday is muscling in.

Meanwhile, ALSPs and AI-powered pseudo-law firms like Crosby and Eudia are eating market share like it’s free pizza. The number of inhouse teams actually buying these tools at scale is still frustratingly small. And investors don’t have much patience for slow burns anymore.


Why Being ‘Rude’ to AI Could Win Your Next Case or Deal — from thebrainyacts.beehiiv.com by Josh Kubicki

TL;DR: AI no longer rewards politeness—new research shows direct, assertive prompts yield better, more detailed responses. Learn why this shift matters for legal precision, test real-world examples (polite vs. blunt), and set up custom instructions in OpenAI (plus tips for other models) to make your AI a concise analytical tool, not a chatty one. Actionable steps inside to upgrade your workflow immediately.



 

2. Concern and excitement about AI — from pewresearch.org by Jacob Poushter,Moira Faganand Manolo Corichi

Key findings

  • A median of 34% of adults across 25 countries are more concerned than excited about the increased use of artificial intelligence in daily life. A median of 42% are equally concerned and excited, and 16% are more excited than concerned.
  • Older adults, women, people with less education and those who use the internet less often are particularly likely to be more concerned than excited.

Also relevant here:


AI Video Wars include Veo 3.1, Sora 2, Ray3, Kling 2.5 + Wan 2.5 — from heatherbcooper.substack.com by Heather Cooper
House of David Season 2 is here!

In today’s edition:

  • Veo 3.1 brings richer audio and object-level editing to Google Flow
  • Sora 2 is here with Cameo self-insertion and collaborative Remix features
  • Ray3 brings world-first reasoning and HDR to video generation
  • Kling 2.5 Turbo delivers faster, cheaper, more consistent results
  • WAN 2.5 revolutionizes talking head creation with perfect audio sync
  • House of David Season 2 Trailer
  • HeyGen Agent, Hailuo Agent, Topaz Astra, and Lovable Cloud updates
  • Image & Video Prompts

From DSC:
By the way, the House of David (which Heather referred to) is very well done! I enjoyed watching Season 1. Like The Chosen, it brings the Bible to life in excellent, impactful ways! Both series convey the context and cultural tensions at the time. Both series are an answer to prayer for me and many others — as they are professionally-done. Both series match anything that comes out of Hollywood in terms of the acting, script writing, music, the sets, etc.  Both are very well done.
.


An item re: Sora:


Other items re: Open AI’s new Atlas browser:

Introducing ChatGPT Atlas — from openai.com
The browser with ChatGPT built in.

[On 10/21/25] we’re introducing ChatGPT Atlas, a new web browser built with ChatGPT at its core.

AI gives us a rare moment to rethink what it means to use the web. Last year, we added search in ChatGPT so you could instantly find timely information from across the internet—and it quickly became one of our most-used features. But your browser is where all of your work, tools, and context come together. A browser built with ChatGPT takes us closer to a true super-assistant that understands your world and helps you achieve your goals.

With Atlas, ChatGPT can come with you anywhere across the web—helping you in the window right where you are, understanding what you’re trying to do, and completing tasks for you, all without copying and pasting or leaving the page. Your ChatGPT memory is built in, so conversations can draw on past chats and details to help you get new things done.

ChatGPT Atlas: the AI browser test — from getsuperintel.com by Kim “Chubby” Isenberg
Chat GPT Atlas aims to transform web browsing into a conversational, AI-native experience, but early reviews are mixed

OpenAI’s new ChatGPT Atlas promises to merge web browsing, search, and automation into a single interface — an “AI-native browser” meant to make the web conversational. After testing it myself, though, I’m still trying to see the real breakthrough. It feels familiar: summaries, follow-ups, and even the Agent’s task handling all mirror what I already do inside ChatGPT.

OpenAI’s new Atlas browser remembers everything — from theneurondaily.com by Grant Harvey
PLUS: Our AIs are getting brain rot?!

Here’s how it works: Atlas can see what you’re looking at on any webpage and instantly help without you needing to copy/paste or switch tabs. Researching hotels? Ask ChatGPT to compare prices right there. Reading a dense article? Get a summary on the spot. The AI lives in the browser itself.

OpenAI’s new product — from bensbites.com

The latest entry in AI browsers is Atlas – A new browser from OpenAI. Atlas would feel similar to Dia or Comet if you’ve used them. It has an “Ask ChatGPT” sidebar that has the context of your page, and choose “Agent” to work on that tab. Right now, Agent is limited to a single tab, and it is way too slow to delegate anything for real to it. Click accuracy for Agent is alright on normal web pages, but it will definitely trip up if you ask it to use something like Google Sheets.

One ambient feature that I think many people will like is “select to rewrite” – You can select any text in Atlas, hover/click on the blue dot in the top right corner to rewrite it using AI.


Your AI Resume Hacks Probably Won’t Fool Hiring Algorithms — from builtin.com by Jeff Rumage
Recruiters say those viral hidden prompt for resumes don’t work — and might cost you interviews.

Summary: Job seekers are using “prompt hacking” — embedding hidden AI commands in white font on resumes — to try to trick applicant tracking systems. While some report success, recruiters warn the tactic could backfire and eliminate the candidate from consideration.


The Job Market Might Be a Mess, But Don’t Blame AI Just Yet — from builtin.com by Matthew Urwin
A new study by Yale University and the Brookings Institution says the panic around artificial intelligence stealing jobs is overblown. But that might not be the case for long.

Summary: A Yale and Brookings study finds generative AI has had little impact on U.S. jobs so far, with tariffs, immigration policies and the number of college grads potentially playing a larger role. Still, AI could disrupt the workforce in the not-so-distant future.


 

International AI Safety Report — from internationalaisafetyreport.org

About the International AI Safety Report
The International AI Safety Report is the world’s first comprehensive review of the latest science on the capabilities and risks of general-purpose AI systems. Written by over 100 independent experts and led by Turing Award winner Yoshua Bengio, it represents the largest international collaboration on AI safety research to date. The Report gives decision-makers a shared global picture of AI’s risks and impacts, serving as the authoritative reference for governments and organisations developing AI policies worldwide. It is already shaping debates and informing evidence-based decisions across research and policy communities.

 

Eye implant and high-tech glasses restore vision lost to age — from newscientist.com by Chris Simms
Age-related macular degeneration is a common cause of vision loss, with existing treatments only able to slow its progression. But now an implant in the back of the eye and a pair of high-tech glasses have enabled people with the condition to read again

People with severe vision loss have been able to read again, thanks to a tiny wireless chip implanted in one of their eyes and a pair of high-tech glasses.

“This is an exciting and significant study,” says Francesca Cordeiro at Imperial College London. “It gives hope for providing vision in patients for whom this was more science fiction than reality.”

 

 

The State of AI Report 2025 — from nathanbenaich.substack.com by Nathan Benaich

In short, it’s been a monumental 12 months for AI. Our eighth annual report is the most comprehensive it’s ever been, covering what you need to know about research, industry, politics, and safety – along with our first State of AI Usage Survey of 1,200 practitioners.

stateof.ai
.

 


 

On LawNext: Justice Workers — Reimagining Access to Justice as Democracy Work, with Rebecca Sandefur and Matthew Burnett — from lawnext.com by Bob Ambrogi

With as many as 120 million legal problems going unresolved in America each year, traditional lawyer-centered approaches to access to justice have consistently failed to meet the scale of need. But what if the solution is not just about providing more legal services — what if it lies in fundamentally rethinking who can provide legal help?

In today’s episode, host Bob Ambrogi is joined by two of the nation’s leading researchers on access to justice: Rebecca Sandefur, professor and director of the Sanford School of Social and Family Dynamics at Arizona State University and a faculty fellow at the American Bar Foundation, and Matthew Burnett, director of research and programs for the Access to Justice Research Initiative at the American Bar Foundation and an adjunct professor of law at Georgetown University Law Center.

 

3 Work Trends – Issue 87 — from the World Economic Forum

1. #AI adoption is delivering real results for early movers
Three years into the generative AI revolution, a small but growing group of global companies is demonstrating the tangible potential of AI. Among firms with revenues of $1 billion or more:

  • 17% report cost savings or revenue growth of at least 10% from AI.
  • Almost 80% say their AI investments have met or exceeded expectations.
  • Half worry they are not moving fast enough and could fall behind competitors.

The world’s first AI cabinet member — from therundown.ai by Zach Mink, Rowan Cheung, Shubham Sharma, Joey Liu & Jennifer Mossalgue
PLUS: Startup produces 3,000 AI podcast episodes weekly

The details:

  • Prime Minister Edi Rama unveiled Diella during a cabinet announcement this week, calling her the first member “virtually created by artificial intelligence”.
  • The AI avatar will evaluate and award all public tenders where the government contracts private firms.
  • Diella already serves citizens through Albania’s digital services portal, processing bureaucratic requests via voice commands.
  • Rama claims the AI will eliminate bribes and threats from decision-making, though the government hasn’t detailed what human oversight will exist.

The Rundown AI’s article links to:


Anthropic Economic Index report: Uneven geographic and enterprise AI adoption — from anthropic.com

In other words, a hallmark of early technological adoption is that it is concentrated—in both a small number of geographic regions and a small number of tasks in firms. As we document in this report, AI adoption appears to be following a similar pattern in the 21st century, albeit on shorter timelines and with greater intensity than the diffusion of technologies in the 20th century.

To study such patterns of early AI adoption, we extend the Anthropic Economic Index along two important dimensions, introducing a geographic analysis of Claude.ai conversations and a first-of-its-kind examination of enterprise API use. We show how Claude usage has evolved over time, how adoption patterns differ across regions, and—for the first time—how firms are deploying frontier AI to solve business problems.


How human-centric AI can shape the future of work — from weforum.org by Sapthagiri Chapalapalli

  • Last year, use of AI in the workplace increased by 5.5% in Europe alone.
  • AI adoption is accelerating, but success depends on empowering people, not just deploying technology.
  • Redesigning roles and workflows to combine human creativity and critical thinking with AI-driven insights is key.

The transformative potential of AI on business

Organizations are having to rapidly adapt their business models. Image: TCS


Using ChatGPT to get a job — from linkedin.com by Ishika Rawat

 

Miro and GenAI as drivers of online student engagement — from timeshighereducation.com by Jaime Eduardo Moncada Garibay
A set of practical strategies for transforming passive online student participation into visible, measurable and purposeful engagement through the use of Miro, enhanced by GenAI

To address this challenge, I shifted my focus from requesting participation to designing it. This strategic change led me to integrate Miro, a visual digital workspace, into my classes. Miro enables real-time visualisation and co-creation of ideas, whether individually or in teams.

The transition from passive attendance to active engagement in online classes requires deliberate instructional design. Tools such as Miro, enhanced by GenAI, enable educators to create structured, visually rich learning environments in which participation is both expected and documented.

While technology provides templates, frames, timers and voting features, its real pedagogical value emerges through intentional facilitation, where the educator’s role shifts from delivering content to orchestrating collaborative, purposeful learning experiences.


Benchmarking Online Education with Bruce Etter and Julie Uranis — from buzzsprout.com by Derek Bruff

Here are some that stood out to me:

  • In the past, it was typical for faculty to teach online courses as an “overload” of some kind, but BOnES data show that 92% of online programs feature courses taught as part of faculty member’s standard teaching responsibilities. Online teaching has become one of multiple modalities in which faculty teach regularly.
  • Three-quarters of chief online officers surveyed said they plan to have a great market share of online enrollments in the future, but only 23% said their current marketing is better than their competitors. The rising tide of online enrollments won’t lift all boats–some institutions will fare better than others.
  • Staffing at online education units is growing, with the median staff size increasing from 15 last year to 20 this year. Julie pointed out that successful online education requires investment of resources. You might need as many buildings as onsite education does, but you need people and you need technology.


 
 

How Will AI Affect the Global Workforce? — from goldmansachs.com

  • Despite concerns about widespread job losses, AI adoption is expected to have only a modest and relatively temporary impact on employment levels.
  • Goldman Sachs Research estimates that unemployment will increase by half a percentage point during the AI transition period as displaced workers seek new positions.
  • If current AI use cases were expanded across the economy and reduced employment proportionally to efficiency gains, an estimated 2.5% of US employment would be at risk of related job loss.
  • Occupations with higher risk of being displaced by AI include computer programmers, accountants and auditors, legal and administrative assistants, and customer service representatives.

The Neuron recently highlighted the above item. Here is Grant Harvey’s take on that and other AI-related items:


UK businesses are dialing back hiring for jobs that are likely to be affected by the rollout of artificial intelligence, a study found, suggesting the new technology is accentuating a slowdown in the nation’s labor market. Job vacancies have declined across the board in the UK as employers cut costs in the face of sluggish growth and high borrowing rates, with the overall number of online job postings down 31% in the three months to May compared with the same period in 2022, a McKinsey & Co. analysis found. Tiwa Adebayo joins Stephen Carroll on Bloomberg Radio to discuss the details.


I talked to Sam Altman about the GPT-5 launch fiasco – from theverge.com by Alex Heath
Over dinner, OpenAI CEO’s addressed criticism of GPT-5’s rollout, the AI bubble, brain-computer interfaces, buying Google Chrome, and more.


Sam Altman, over bread rolls, explores life after GPT-5 — from techcrunch.com by Maxwell Zeff

But throughout the night, it becomes clear to me that this dinner is about OpenAI’s future beyond GPT-5. OpenAI’s executives give the impression that AI model launches are less important than they were when GPT-4 launched in 2023. After all, OpenAI is a very different company now, focused on upending legacy players in search, consumer hardware, and enterprise software.

OpenAI shares some new details about those efforts.


 

Partnerships to make higher education work for the workforce — from timeshighereducation.com by Brooke Wilson
Fostering long-term industry partners can enhance student outcomes and prepare them for the workplace of the future. Here’s how to get the best out of them

As the pace of change accelerates across all industries, higher education institutions face increasing pressure to ensure their graduates are prepared for the workplace demands of today – and tomorrow. Cultivating meaningful partnerships with industry is no longer optional; it’s necessary.

From curriculum co-design to experiential learning, universities can collaborate with businesses and industries in several ways to enhance student outcomes and strengthen regional economies.


The keys to strong university–non-profit partnerships — from timeshighereducation.com by Mariana Leyva, Martha Sáenz, and Itzel Eguiluz
Collaborative projects between universities and non-profits nurture empathy and allow students to make a real-world impact. Here, three educators share their tips for building meaningful partnerships that benefit students and communities alike

Collaborative projects between universities and non-profits nurture empathy and allow students to make a real-world impact. Here, three educators share their tips for building meaningful partnerships that benefit students and communities alike.

 
© 2025 | Daniel Christian