On March 31, 2026, Anthropic, the safety-focused AI company behind Claude, accidentally leaked over 500,000 lines of proprietary source code for its Claude Code tool through a public npm package update. The incident, the second such breach in a year, exposed nearly 2,000 TypeScript files: version 2.1.88 shipped with a misincluded debugging file that linked to a publicly accessible zip archive on Anthropic's Cloudflare storage. Security researcher Chaofan Shou quickly spotted the error, sparking rapid mirroring on GitHub, where repositories amassed thousands of stars before takedowns.
The leak revealed Claude Code's full architecture, including 44 feature flags for unreleased capabilities like a "persistent assistant" that runs in the background even when users are inactive. Other hidden gems included session review for performance improvement across conversations, remote control from mobile devices, and a roadmap toward longer autonomous tasks, enhanced memory, and multi-agent collaboration. Developers also uncovered internal tools, prompts, and even a "pet system" codenamed Buddy with species and rarity tiers, hinting at gamified enterprise features.
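Feature flags like these typically gate unreleased code paths behind runtime checks, which is why shipping them in a production build means a source leak reveals the roadmap. A minimal sketch of the pattern in TypeScript; the flag names and rollout logic below are illustrative assumptions, not identifiers from the leaked code:

```typescript
// Hypothetical feature-flag registry; names are illustrative, not from the leak.
type FlagName = "persistent-assistant" | "session-review" | "remote-control";

interface FlagConfig {
  enabled: boolean;
  rolloutPercent: number; // staged rollout, 0-100
}

const flags: Record<FlagName, FlagConfig> = {
  "persistent-assistant": { enabled: false, rolloutPercent: 0 },
  "session-review": { enabled: true, rolloutPercent: 25 },
  "remote-control": { enabled: false, rolloutPercent: 0 },
};

// A flag is active for a user when it is enabled and the user's stable
// hash bucket (0-99) falls inside the rollout percentage.
function isActive(name: FlagName, userBucket: number): boolean {
  const flag = flags[name];
  return flag.enabled && userBucket < flag.rolloutPercent;
}
```

Staged rollouts of this kind let a vendor keep dormant capabilities in every shipped binary, flipping them on server-side later, so anyone reading the source sees features long before launch.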
Anthropic responded swiftly, attributing the leak to "human error" in release packaging rather than a security breach, and stating that no sensitive data was exposed. The company issued over 8,000 DMCA takedown requests to platforms like GitHub, removing thousands of forks within days. Claude Code creator Boris Cherny confirmed that a skipped manual deploy step caused the mishap, and Anthropic pledged process improvements to prevent a recurrence.
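Packaging mistakes like this are commonly caught by validating the file list a publish would include, for example against the output of `npm pack --dry-run --json`. A minimal sketch of such a pre-publish guard; the patterns, file names, and helper are hypothetical illustrations, not Anthropic's actual tooling:

```typescript
// Hypothetical pre-publish guard: reject tarballs containing debug artifacts.
// In practice, the file list would come from `npm pack --dry-run --json`.
const forbiddenPatterns: RegExp[] = [
  /(^|\/)debug[^/]*\.(ts|js|json)$/, // stray debugging files
  /\.map$/,                          // source maps that reveal original sources
  /(^|\/)\.env/,                     // local environment files
];

function findForbiddenFiles(packedFiles: string[]): string[] {
  return packedFiles.filter((f) =>
    forbiddenPatterns.some((p) => p.test(f))
  );
}

// Example run with a hypothetical file list:
const files = ["cli.js", "package.json", "src/debug-dump.json"];
const bad = findForbiddenFiles(files);
if (bad.length > 0) {
  console.error(`Refusing to publish; forbidden files: ${bad.join(", ")}`);
}
```

The complementary safeguard is an allowlist: npm's `files` field in package.json restricts a published tarball to explicitly named paths, so a stray debugging file never ships in the first place.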
This incident underscores vulnerabilities in AI firms' deployment pipelines, especially for a lab positioning itself as security-conscious amid IPO preparations. Competitors now gain insight into a production-grade AI coding agent, potentially accelerating their own work on agent orchestration and tooling. While unlikely to derail Anthropic's $340 billion valuation, it highlights that securing AI systems is now as demanding a challenge as defending against AI-powered threats.
Ultimately, the Claude Code leak serves as a stark reminder for the AI industry to fortify internal safeguards as innovation races ahead. It amplifies hype around Anthropic's capabilities while exposing the human element in high-stakes tech releases. As external developers reverse-engineer the remnants, the focus shifts to ethical use and robust verification in open-source ecosystems.