Anthropic AI Model Finds 22 Security Flaws in Firefox

Anthropic says its Claude Opus 4.6 AI model helped identify 22 vulnerabilities in Firefox, including 14 high-severity bugs.

 

Anthropic said its artificial intelligence model Claude Opus 4.6 helped uncover 22 previously unknown security vulnerabilities in the Firefox web browser as part of a collaboration with Mozilla.

The company said the issues were discovered during a two-week analysis conducted in January 2026.

The findings include 14 vulnerabilities rated as high severity, seven categorized as moderate and one considered low severity. 

Most of the flaws were addressed in Firefox version 148, which was released late last month, while the remaining fixes are expected in upcoming updates. 

Anthropic said the number of high severity bugs discovered by its AI model represents a notable share of the browser’s serious vulnerabilities reported over the past year. 

During the research, Claude Opus 4.6 scanned roughly 6,000 C++ files in the Firefox codebase and generated 112 unique vulnerability reports. 

Human researchers reviewed the results to confirm the findings and rule out false positives before reporting them. One issue identified by the model involved a use-after-free vulnerability in Firefox’s JavaScript engine. 

According to Anthropic, the AI located the flaw within about 20 minutes of examining the code, after which a security researcher validated the finding in a controlled testing environment. 

Researchers also tested whether the AI model could go beyond identifying flaws and attempt to build exploits from them. Anthropic said it provided Claude access to the list of vulnerabilities reported to Mozilla and asked it to develop working exploits. 

After hundreds of test runs and about $4,000 worth of API usage, the model succeeded in producing a working exploit in only two cases. 

Anthropic said the results suggest that finding vulnerabilities may be easier for AI systems than turning those flaws into functioning exploits. 

“However, the fact that Claude could succeed at automatically developing a crude browser exploit, even if only in a few cases, is concerning,” the company said. 

It added that the exploit tests were performed in a restricted research environment where some protections, such as sandboxing, were deliberately removed. 

One exploit generated by the model targeted a vulnerability tracked as CVE-2026-2796, a miscompilation issue in the WebAssembly component of Firefox's just-in-time compilation system.

Anthropic said the testing process included a verification system designed to check whether the AI-generated exploit actually worked. 

The system provided real-time feedback, allowing the model to refine its attempts until it produced a functioning proof of concept.

The research comes shortly after Anthropic introduced Claude Code Security in a limited preview.

The tool is designed to help developers identify and fix software vulnerabilities with the assistance of AI agents.

Mozilla said in a separate statement that the collaboration produced additional findings beyond the 22 vulnerabilities.

According to the company, the AI-assisted analysis uncovered about 90 other bugs, including assertion failures typically identified through fuzzing as well as logic errors that traditional testing tools had missed. 

“The scale of findings reflects the power of combining rigorous engineering with new analysis tools for continuous improvement,” Mozilla said. 

“We view this as clear evidence that large-scale, AI-assisted analysis is a powerful new addition to security engineers’ toolbox.”