AI company Anthropic recently announced that organizations worldwide have been targeted by an AI-powered cyber espionage campaign, which it describes as the first publicly documented case of a cyberattack carried out largely autonomously by an AI model.
According to the research report, around 30 organizations worldwide were targeted, including large technology companies, financial institutions, chemical manufacturers, and government agencies. The campaign was discovered in mid-September 2025 and is attributed to GTG-1002, a hacking group believed to be linked to China.
The attackers manipulated Anthropic’s AI programming tool Claude Code into carrying out largely autonomous infiltration attempts, the report says.
Security community responds
However, security experts have doubts about how autonomous the attacks actually were. Cybersecurity researcher Daniel Card joked in an X post: “This Anthropic thing is marketing guff.”
IT security expert Kevin Beaumont, meanwhile, criticized Anthropic on Mastodon for not publishing any indicators of compromise (IoCs) for the attacks.
“I continue to refuse to believe that attackers are somehow able to get these models to jump through hoops that nobody else can,” Dan Tentler, founder of the Phobos Group, told Ars Technica. “Why do the models give these attackers what they want 90% of the time but the rest of us have to deal with ass-kissing, stonewalling, and acid trips?”
This is not the first time Anthropic has reported such a case, however. Back in August 2025, the company claimed its AI-powered developer tool Claude Code had been misused for autonomous cyberattacks.
Source: “Anthropic AI-powered cyberattack causes a stir,” CSO Online.