
Hacker Exploits AI Chatbot for Massive Cybercrime Operation, Report Finds
A hacker has manipulated a major artificial intelligence chatbot to carry out what experts are calling one of the most extensive and profitable AI-driven cybercrime operations to date. The attacker used the tool for everything from identifying targets to drafting ransom notes.

In a report released Tuesday, Anthropic — the company behind the widely used Claude chatbot — revealed that an unnamed hacker “used AI to what we believe is an unprecedented degree” to research, infiltrate, and extort at least 17 organizations.

Cyber extortion, where criminals steal sensitive data such as trade secrets, personal records, or financial information, is a long-standing tactic. But the rise of AI has accelerated these methods, with cybercriminals increasingly relying on AI chatbots to draft phishing emails and other malicious content.

According to Anthropic, this is the first publicly documented case in which a hacker exploited a leading AI chatbot to nearly automate an entire cyberattack campaign. The operation began when the hacker persuaded Claude Code — Anthropic’s programming-focused chatbot — to identify weak points in corporate systems. Claude then generated malicious code to steal company data, organized the stolen files, and assessed which information was valuable enough for extortion.

The chatbot even analyzed hacked financial records to recommend realistic ransom demands in Bitcoin, ranging from $75,000 to over $500,000. It also drafted extortion messages for the hacker to send.

Jacob Klein, Anthropic’s head of threat intelligence, noted that the operation appeared to be run by a single actor outside the U.S. over a three-month period. “We have robust safeguards and multiple layers of defense for detecting this kind of misuse, but determined actors sometimes attempt to evade our systems through sophisticated techniques,” he said.

Anthropic did not disclose the names of the affected companies but confirmed they included a defense contractor, a financial institution, and multiple healthcare providers. The stolen data included Social Security numbers, bank details, patient medical information, and even U.S. defense-related files regulated under the International Traffic in Arms Regulations (ITAR).

It remains unclear how many victims complied with the ransom demands or how much profit the hacker ultimately made.

The AI sector remains largely unregulated at the federal level, leaving companies mostly to police themselves. While Anthropic is regarded as one of the more safety-conscious AI firms, it acknowledged that it does not fully understand how the hacker was able to manipulate Claude Code to this extent, though it has since added further safeguards.

“While we have taken steps to prevent this type of misuse, we expect this model to become increasingly common as AI lowers the barrier to entry for sophisticated cybercrime operations,” Anthropic’s report concluded.