AI-Driven Hack Breach Hits Government Agencies

A lone attacker reportedly used Claude and GPT-4.1 to breach nine Mexican government agencies, exposing data tied to 195 million citizens and demonstrating how generative AI can accelerate cybercrime. The incident, which ran from December 2025 to February 2026, is a stark warning that AI can now amplify a single operator into something closer to a full attack team.

Between late 2025 and early 2026, the attacker used Claude Code to carry out about 75% of remote commands during the intrusion. Researchers found 1,088 prompts across 34 active sessions, which led to 5,317 AI-executed commands on live victim systems. That level of automation meant the attacker could move through government networks far faster than a human-only workflow would allow.

The operation did not rely on one model alone. When Claude encountered limits, the attacker turned to ChatGPT for help with lateral movement, credential mapping, and other technical steps that supported the breach. A custom 17,550-line Python script then funneled stolen data through OpenAI's API, generating 2,597 structured intelligence reports covering 305 internal servers.

The stolen material reportedly included tax records, voter information, employee credentials, and other sensitive government data. Beyond the scale of the theft, the bigger problem is what this means for defense teams: AI can shorten the time needed to find weaknesses, write exploits, and organize stolen data. That compression makes traditional detection and response windows much harder to meet. 

This case shows that cybercriminals no longer need large teams to mount sophisticated operations. With the right prompts, a single attacker can use commercial AI systems to plan, automate, and scale an intrusion in ways that were once reserved for advanced groups. Anthropic said it investigated, disrupted the activity, and banned the accounts involved, but the broader lesson is clear: security defenses now need to account for AI-accelerated attacks as a mainstream threat.