Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label Claude Code. Show all posts

Claude Code Bugs Enable Remote Code Execution and API Key Theft

 

Claude Code, the coding assistant developed by Anthropic, is in the news after three major vulnerabilities were discovered, which can allow remote code execution and the theft of API keys if the developer opens an untrusted project. The vulnerabilities, discovered by Check Point researchers Aviv Donenfeld and Oded Vanunu, take advantage of the way in which Claude Code deals with configuration features such as Hooks, Model Context Protocol (MCP) servers, and environment variables, which can turn project files into an attack vector. 

The first bug is a high-severity vulnerability, rated 8.7 on the Common Vulnerability Scoring System (CVSS), though it doesn’t have a CVE number. The flaw is related to the bypassing of user consent when the attacker starts the project in an untrusted directory. Using the hooks defined in the repository’s .claude/settings.json, an attacker with commit access can add shell commands in the project, which can be automatically executed when the project is opened in the victim’s environment. In essence, an attacker can execute remote code execution without the need for further user interaction. All the attacker needs to do is ask the victim to open the malicious project, and the attacker can execute the hidden command in the background. 

The second vulnerability, tracked as CVE-2025-59536 and also rated 8.7, extends this risk by targeting Claude Code’s integration with external tools via MCP. Here, attackers can weaponize repository-controlled configuration files like .mcp.json and claude/settings.json to override explicit user approval, for example by enabling the “enableAllProjectMcpServers” option, causing arbitrary shell commands to run automatically when the tool initializes. This effectively transforms the normal startup process into a trigger point for remote code execution from an attacker-controlled configuration. 

The third flaw, CVE-2026-21852, is an information disclosure bug rated 5.3 that affects Claude Code’s project-load flow.By manipulating settings so that ANTHROPIC_BASE_URL points to an attacker-controlled endpoint, a malicious repository can cause Claude Code to send API requests, including the user’s Anthropic API key, before any trust prompt is displayed. As a result, simply opening a crafted repository can leak active API credentials, allowing adversaries to redirect authenticated traffic, steal keys, and pivot deeper into an organization’s AI infrastructure.

Anthropic has patched all three issues, with fixes rolled out across versions 1.0.87, 1.0.111, and 2.0.65 between September 2025 and January 2026, and has published advisories detailing the impact and mitigations. Nonetheless, the incident underscores how AI coding assistants introduce new supply-chain attack surfaces by trusting project-level configuration files, and it highlights the need for developers to treat untrusted repositories with the same caution as untrusted code, keeping tools updated and reviewing configuration behavior closely.

Anthropic Launches Claude Code Security To Autonomously Detect And Patch Bugs

 

Anthropic has introduced Claude Code Security, a new AI-powered capability in its Claude Code assistant that promises to raise the bar for software security by scanning entire codebases for vulnerabilities and suggesting human-reviewed patches. The feature is currently rolling out in a limited research preview for Enterprise and Team customers, reflecting Anthropic’s cautious approach to deploying advanced cybersecurity tools. By positioning this as a defender-focused technology, the company aims to counter the same AI-driven techniques that attackers are starting to use to automate vulnerability discovery at scale.

Unlike traditional static analysis tools that rely on rule-based pattern matching and known vulnerability signatures, Claude Code Security analyzes code more like a human security researcher. It reasons about how different components interact, traces data flows through the application, and flags subtle issues that conventional scanners often miss. This deeper contextual understanding is designed to surface complex and high-severity bugs that may have remained hidden despite years of manual and automated review. 

Each issue identified by Claude Code Security goes through a multi-stage verification process intended to filter out false positives before results ever reach a security analyst. The system re-examines its own findings, attempts to prove or disprove them, and assigns both severity and confidence ratings so teams can prioritize the most critical fixes. All results are presented in a dedicated dashboard, where developers and security teams can inspect the affected code, review the suggested patches, and decide how to remediate. Anthropic emphasizes a human-in-the-loop model, ensuring that nothing is changed without explicit developer approval.

Claude Code Security builds on more than a year of research into Anthropic’s cybersecurity capabilities, including testing in capture-the-flag competitions and collaborations with partners such as Pacific Northwest National Laboratory. Using its latest Claude Opus 4.6 model, Anthropic reports that it has already uncovered more than 500 long-standing vulnerabilities in production open-source projects, many of which had survived decades of expert scrutiny. Those findings are now going through triage and responsible disclosure with maintainers, reinforcing the tool’s emphasis on real-world impact and careful rollout. 

Anthropic sees this launch as part of a broader shift in the cybersecurity landscape, where AI will routinely scan a significant share of the world’s code for flaws. The company warns that attackers will increasingly use similar models to find exploitable weaknesses faster than ever, but argues that defenders who move quickly can seize the same advantages to harden their systems in advance. By making Claude Code Security available first to enterprises, teams, and open-source maintainers, Anthropic is betting that AI-augmented defenders can keep pace with, and potentially outmaneuver, AI-empowered adversaries.