Learning brief
Generated by AI from multiple sources. Always verify critical information.
TL;DR
Anthropic accidentally released Claude Code's entire source code — over 500,000 lines — in a software update on March 31, 2026. The code spread instantly across GitHub with 82,000+ forks before Anthropic could contain it. Developers found unreleased features including an always-on AI agent and a Tamagotchi-style coding pet.
What changed
513,000 lines of Claude Code's TypeScript source leaked via npm package, exposing its complete architecture.
Why it matters
The code reveals how Claude actually works, including unreleased features and instructions given to the AI.
What to watch
Whether bad actors exploit exposed security vulnerabilities before Anthropic patches the revealed weaknesses.
What Happened
On March 31, 2026, Anthropic released version 2.1.88 of Claude Code — its AI-powered coding assistant that runs in your computer's terminal. The update contained a massive mistake: a 59.8 MB source map file that exposed the tool's entire internal codebase (Source 55, Source 58). Think of a source map like architectural blueprints accidentally left in a delivered building — it shows exactly how everything was built.
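In concrete terms, a source map is just a JSON file in the Source Map v3 format. When its optional "sourcesContent" field is populated — a common default in bundler configurations — the original, unminified source ships verbatim alongside the compiled bundle. A minimal sketch of why that matters (the file names and snippets below are invented for illustration, not taken from the actual leak):

```python
import json

# A tiny stand-in for a source map like the 59.8 MB file shipped with
# Claude Code 2.1.88. Real maps follow the Source Map v3 format: when
# "sourcesContent" is populated, every original file travels with the bundle.
source_map = json.loads("""
{
  "version": 3,
  "file": "cli.js",
  "sources": ["src/agent.ts", "src/prompts.ts"],
  "sourcesContent": [
    "export function runAgent() { /* original TypeScript */ }",
    "export const SYSTEM_PROMPT = 'internal instructions...';"
  ],
  "mappings": "AAAA"
}
""")

# Recovering every original file is a dictionary lookup, nothing more —
# no reverse engineering required.
recovered = dict(zip(source_map["sources"], source_map["sourcesContent"]))

for path, code in recovered.items():
    print(f"--- {path} ---")
    print(code)
```

This is why build pipelines normally strip or withhold `.map` files from production artifacts: once one ships, anyone who downloads the package can reconstruct the original source with a few lines of code.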
The leaked package contained 1,906 files and 513,000 lines of TypeScript code — the programming language used to build Claude Code (Source 58). A security researcher named Chaofan Shou spotted it immediately and posted about it on X (formerly Twitter), where the post got 29 million views (Source 55). Within hours, thousands of developers had downloaded the code from Anthropic's own cloud storage and copied it to GitHub (a platform where programmers share code). By the next day, one GitHub copy had 84,000 stars and 82,000 forks — meaning 82,000 people had made their own copies (Source 58).
Anthropic tried to stop the spread by sending DMCA takedown notices — legal demands to remove copyrighted material — but it was too late. The code had already spread to hundreds of repositories across the internet (Source 58). The company called it "a release packaging issue caused by human error, not a security breach," and emphasized that no customer data was exposed (Source 55, Source 59).
Developers who examined the leaked code found unreleased features Anthropic was working on. These included KAIROS — an always-on background agent that would run continuously (Source 59). They also found plans for a Tamagotchi-style pet that would sit next to the input box and react to your coding, like the 1990s digital pets that needed care and attention (Source 59). The code also revealed the exact instructions Anthropic gives Claude to make it behave in specific ways, plus details about how it stores and recalls information (Source 59).
This is the second leak for Anthropic in recent weeks. In an earlier incident, thousands of the company's internal files were left on publicly accessible systems, including draft blog posts about upcoming AI models (Source 55).
So What?
The real story here is that leaked code is a blueprint for attackers. Security researchers at Zscaler immediately warned developers not to download or run any GitHub repositories claiming to contain Claude Code (Source 58). Why? Because examining source code reveals every security weakness, every authentication check, every place the software connects to the internet. Bad actors can now study exactly how Claude Code works and find ways to bypass its safety guardrails or trick it into doing things Anthropic never intended.
Within a day of the leak, Zscaler's threat team discovered malware campaigns using "Claude Code leak" as bait — fake repositories that promised the leaked code but actually delivered Vidar and GhostSocks malware to anyone who downloaded them (Source 58). Think of it like this: if someone accidentally published the complete blueprints for your home security system, criminals would study them to find the best way to break in. The leaked code is Claude Code's security blueprint.
For a company that built its reputation on AI safety, this is deeply embarrassing. Anthropic's CEO Dario Amodei recently made headlines refusing to let the Pentagon use Claude for mass surveillance or autonomous weapons (Source 55). The company positions itself as the responsible alternative to OpenAI. But two major leaks in two months suggest serious problems with how Anthropic handles its own internal security (Source 55). As Gartner analyst Arun Chandrasekaran told The Verge: this should be "a call for action for Anthropic to invest more in processes and tools for better operational maturity" (Source 59).
Sources