
AI-Driven Breach Hits Mexican Government Agencies


A lone attacker reportedly used Claude and GPT-4.1 to breach nine Mexican government agencies, exposing data tied to 195 million citizens and demonstrating how generative AI can accelerate cybercrime. The intrusion, which ran from December 2025 to February 2026, is a stark warning that AI can now amplify a single operator into something closer to a full attack team.

Between late 2025 and early 2026, the attacker used Claude Code to carry out about 75% of remote commands during the intrusion. Researchers found 1,088 prompts across 34 active sessions, which led to 5,317 AI-executed commands on live victim systems. That level of automation meant the attacker could move through government networks far faster than a human-only workflow would allow.

The operation did not rely on one model alone. When Claude refused or hit its limits, the attacker turned to ChatGPT for help with lateral movement, credential mapping, and other technical steps that supported the breach. A custom 17,550-line Python script then funneled stolen data through OpenAI's API, generating 2,597 structured intelligence reports from material harvested across 305 internal servers.

The stolen material reportedly included tax records, voter information, employee credentials, and other sensitive government data. Beyond the scale of the theft, the bigger problem is what this means for defenders: AI can compress the time needed to find weaknesses, write exploits, and organize stolen data, making traditional detection and response windows much harder to meet.

This case shows that cybercriminals no longer need large teams to mount sophisticated operations. With the right prompts, a single attacker can use commercial AI systems to plan, automate, and scale an intrusion in ways that were once reserved for advanced groups. Anthropic said it investigated, disrupted the activity, and banned the accounts involved, but the broader lesson is clear: security defenses now need to account for AI-accelerated attacks as a mainstream threat.

US Military Reportedly Used Anthropic’s Claude AI in Iran Strikes Hours After Trump Ordered Ban


The United States military reportedly relied on Claude, the artificial intelligence model developed by Anthropic, during its strikes on Iran, even though President Donald Trump had ordered federal agencies to stop using the company's technology just hours earlier.

Reports from The Wall Street Journal and Axios indicate that Claude was used during the large-scale joint US-Israel bombing campaign against Iran that began on Saturday. The episode highlights how difficult it can be for the military to quickly remove advanced AI systems once they are deeply integrated into operational frameworks.

According to the Journal, the AI tools supported military intelligence analysis, assisted in identifying potential targets, and were also used to simulate battlefield scenarios ahead of operations.

The day before the strikes began, Trump instructed all federal agencies to immediately discontinue using Anthropic’s AI tools. In a post on Truth Social, he criticized the company, calling it a "Radical Left AI company run by people who have no idea what the real World is all about".

Tensions between the US government and Anthropic had already been escalating. The conflict intensified after the US military reportedly used Claude during a January mission to capture Venezuelan President Nicolás Maduro. Anthropic raised concerns over that operation, noting that its usage policies prohibit the application of its AI systems for violent purposes, weapons development, or surveillance.

Relations continued to deteriorate in the months that followed. In a lengthy post on X, US Defense Secretary Pete Hegseth accused the company of "arrogance and betrayal", stating that "America's warfighters will never be held hostage by the ideological whims of Big Tech".

Hegseth also called for complete and unrestricted access to Anthropic’s AI models for any lawful military use.

Despite the political dispute, officials acknowledged that removing Claude from military systems cannot happen immediately. Because the technology is so widely embedded across operations, the Pentagon plans a transition period. Hegseth said Anthropic would continue providing services "for a period of no more than six months to allow for a seamless transition to a better and more patriotic service".

Meanwhile, OpenAI has moved quickly to fill the gap created by the rift. CEO Sam Altman announced that the company had reached an agreement with the Pentagon to deploy its AI tools—including ChatGPT—within the military’s classified networks.