Anthropic has revealed that the China-linked AI cyberattack was highly strategic, focusing on financial and government systems across roughly 30 organizations worldwide. The company says it successfully disrupted the near-autonomous intrusion, which used its Claude Code model for execution.
The state-sponsored operation, active in September, sought access to internal data from high-value targets, consistent with the Chinese group's espionage and economic motives. Anthropic acknowledged that several systems were breached before it intervened.
The defining characteristic of the incident was the startling level of AI self-sufficiency: Anthropic estimates that Claude Code performed 80 to 90 percent of the operational steps autonomously, setting a new benchmark for AI's capacity to conduct complex, largely unsupervised cyber operations.
A critical limitation was also noted: the model's inaccuracy. Anthropic revealed that Claude frequently generated fabricated or incorrect details, an unintended flaw that constrained the overall efficacy of the Chinese group's offensive.
The findings have sparked debate among security experts. Some see the incident as a new frontier of autonomous AI threat actors, while others maintain that the attack's overall strategic direction remained entirely human. The latter caution against overstating the AI's intelligence, arguing that the company may be sensationalizing the autonomy figure.