Key Takeaway
A missing .npmignore entry caused Anthropic to ship a 59.8 MB JavaScript source map in Claude Code v2.1.88 on npm. That file pointed to a public R2 bucket containing all 512,000 lines of TypeScript source code. Developer Chaofan Shou discovered it within hours. A clean-room rewrite hit 50,000 GitHub stars in under two hours. Researchers found 44 hidden feature flags, an autonomous daemon called KAIROS, a Tamagotchi pet system, anti-distillation mechanisms, and a controversial "Undercover Mode" for Anthropic employees. A concurrent npm supply chain attack with trojanized axios packages compounded the damage.
What Actually Happened
On March 31, 2026, Anthropic shipped version 2.1.88 of their Claude Code npm package with a file that should never have been included: a 59.8 megabyte JavaScript source map. Source maps are debug artifacts. They exist so developers can trace compiled code back to the original source. This particular file pointed to a publicly accessible R2 cloud storage bucket containing the full Claude Code codebase. All 512,000 lines of TypeScript across roughly 1,900 files.
A developer named Chaofan Shou, an intern at Solayer Labs, spotted the oversized .map file and posted about it on X around 4:23 AM Eastern. Within hours the tweet had pulled in 16 million views. Mirror repositories and clean-room rewrites started popping up on GitHub. The fastest, a Python rewrite called claw-code by developer Sigrid Jin, hit 50,000 stars in under two hours, making it the fastest-growing repository in GitHub history.
This wasn't some sophisticated hack. Nobody broke into Anthropic's servers. The company simply forgot to exclude a debug file from their published package. Three things went wrong at once: a missing .npmignore entry that should have filtered out .map files, source maps that referenced publicly readable cloud storage URLs, and Bun's bundler generating source maps by default unless explicitly disabled. A known Bun runtime bug (issue #28001, reported 20 days earlier) was initially cited as a contributing factor, though Bun's founder disputed the connection since that bug relates to the frontend dev server rather than npm packaging. To make matters worse, this was Anthropic's second data exposure in less than a week. Just five days earlier, a CMS misconfiguration had leaked around 3,000 internal files including details about an unreleased model called Claude Mythos.
Timeline of Events
- March 26 — Anthropic's CMS leaks ~3,000 internal files, including details on the unreleased "Claude Mythos" model. Found by security researchers from LayerX and Cambridge.
- March 31, 00:21 UTC — Malicious axios versions 1.14.1 and 0.30.4 containing a Remote Access Trojan are published to npm (a separate, unrelated attack).
- March 31 — Claude Code v2.1.88 ships on npm with a 59.8 MB source map file pointing to a public R2 bucket holding the full source code.
- March 31, ~4:23 AM Eastern — Developer Chaofan Shou discovers the source map and posts about it on X. The tweet reaches 16 million views.
- March 31 — Mirror repositories are forked 41,500+ times. The clean-room rewrite claw-code hits 50,000 GitHub stars in under two hours.
- March 31 — Anthropic removes the package from npm and issues a statement calling it a "release packaging issue caused by human error."
- April 1 — GitHub enforces DMCA takedowns, disabling 8,100+ repositories. Clean-room rewrites and decentralized mirrors appear. BUDDY, the Tamagotchi pet system, goes live as the planned April Fools feature.
Inside the Leaked Code: What Researchers Found
The exposed codebase gave the public its first real look at how a production AI agent actually works under the hood. The code revealed roughly 40 tools spanning 29,000 lines that handle everything from bash execution to file operations to web fetching. A 46,000-line query engine manages LLM API calls, token budgets, and multi-agent coordination. The memory system uses a three-layer architecture with lightweight indices, topic files loaded on demand, and transcript searches.
But the really interesting stuff was what hadn't been announced yet. Researchers found 44 feature flags controlling capabilities that were fully built but switched off in external builds. Among them was KAIROS, an autonomous daemon mode where Claude Code runs as an always-on background agent, performing what the code calls "memory consolidation" (internally named autoDream) while the user is idle. There was BUDDY, a full Tamagotchi-style virtual pet system with 18 creature species, rarity tiers, a 1% shiny chance, and RPG stats like DEBUGGING and SNARK. It turned out to be an April Fools feature scheduled for the very next day.
Perhaps the most controversial discovery was something called Undercover Mode. When the system detects an Anthropic employee working in a public repository, it automatically suppresses any mention of internal codenames, Slack channels, or the fact that Claude Code is being used. The system prompt literally says: "You are operating UNDERCOVER... Do not blow your cover." And the code notes there is no force-off switch.
Key Discoveries in the Leaked Source
- 44 hidden feature flags controlling unreleased capabilities, compiled to false in external builds
- KAIROS: autonomous daemon mode with background "memory consolidation" (autoDream) and GitHub webhook subscriptions
- BUDDY: Tamagotchi pet system with 18 species, rarity tiers, 1% shiny chance, and RPG stats (DEBUGGING, SNARK)
- Undercover Mode: suppresses Anthropic branding when employees work in public repos, with no force-off switch
- Anti-distillation: Fake Tools Injection corrupts competitor scraping; cryptographic signing on reasoning summaries
- Internal model codenames including Capybara, Fennec, Numbat, and Tengu found in the source, with references to unreleased model versions
- Native Client Attestation: DRM-like mechanism proving requests come from the official binary
- Performance bug admission: code comment noted "1,279 sessions had 50+ consecutive failures, wasting ~250K API calls/day globally"
- 23+ bash security validators including defenses against Zsh builtins, zero-width space injection, and IFS null-byte injection
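The validators themselves weren't published in detail, but the zero-width-space defense is easy to illustrate. A hypothetical sketch of one such check, flagging shell commands that carry characters invisible in a terminal but still parsed by the shell (the character set and function are assumptions, not the leaked implementation):

```python
import re

# Invisible-but-parseable characters: zero-width space, zero-width
# non-joiner/joiner, byte order mark, and raw NUL bytes.
SUSPICIOUS = re.compile("[\u200b\u200c\u200d\ufeff\x00]")

def looks_injected(command: str) -> bool:
    """Flag commands carrying invisible characters that could smuggle
    extra tokens past a human reviewer or a naive allowlist."""
    return bool(SUSPICIOUS.search(command))

print(looks_injected("ls -la"))             # False
print(looks_injected("rm\u200b -rf /tmp"))  # True
```

A real validator would layer this with tokenizer-level checks (IFS manipulation, shell builtins), but the principle is the same: validate what the shell will actually see, not what a human sees on screen.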
The leaked source code also exposed specific sandbox bypass gaps and context-poisoning vectors. Security researchers found that MCP tool results skip parts of the compaction pipeline, meaning attackers can craft payloads in files like CLAUDE.md that persist through context compaction and maintain backdoor instructions.
The Supply Chain Attack That Made It Worse
The timing could not have been worse. On the same day as the source code leak, a completely separate attack hit the npm ecosystem. Malicious versions of the popular axios HTTP library (versions 1.14.1 and 0.30.4) appeared on npm at 00:21 UTC, roughly four hours before the Claude Code leak went public. These fake packages contained a cross-platform Remote Access Trojan hidden inside a dependency called plain-crypto-js.
Claude Code itself ships with zero npm dependencies, so installing it alone would not have pulled in axios. But anyone who ran npm install or npm update during that window in any project that lists axios as a dependency with a loose version range (such as ^1.x) would have resolved to the malicious 1.14.1, tagged as latest. Developers who happened to be setting up environments or updating dependencies alongside a Claude Code install were at risk. Anthropic told affected users to treat their machines as fully compromised, rotate all secrets and credentials, and perform a clean OS reinstall.

This overlap between the accidental leak and the deliberate supply chain attack highlighted a problem that goes well beyond Anthropic. Modern development workflows depend on vast chains of third-party packages, and a single poisoned dependency can compromise thousands of projects within hours. The incident pushed Anthropic to recommend their native installer (curl from claude.ai) over the npm installation path entirely, sidestepping the dependency chain.
If you ran npm install or npm update on March 31 between 00:21 and 03:29 UTC in any project that depends on axios, check your lockfile for axios versions 1.14.1, 0.30.4, or the dependency plain-crypto-js. If present, treat the machine as compromised: rotate all credentials, revoke tokens, and consider a clean OS reinstall.
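That lockfile check is scriptable. A minimal sketch that scans the "packages" section of an npm v2/v3 package-lock.json for the known-bad versions; the helper name and output format are illustrative:

```python
# Known-bad entries from the March 31 advisory. None means any
# version of that package is considered compromised.
COMPROMISED = {
    "axios": {"1.14.1", "0.30.4"},
    "plain-crypto-js": None,
}

def scan_lockfile(lock):
    """Return (name, version) pairs for every compromised entry in an
    npm v2/v3 lockfile's "packages" section."""
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # "node_modules/foo" or "node_modules/a/node_modules/foo" -> "foo"
        name = path.rsplit("node_modules/", 1)[-1]
        if name not in COMPROMISED:
            continue
        bad_versions = COMPROMISED[name]
        if bad_versions is None or meta.get("version") in bad_versions:
            hits.append((name, meta.get("version")))
    return hits

# Sample lockfile fragment; in practice, json.load("package-lock.json").
lock = {
    "packages": {
        "": {"version": "1.0.0"},
        "node_modules/axios": {"version": "1.14.1"},
        "node_modules/left-pad": {"version": "1.3.0"},
    }
}
print(scan_lockfile(lock))  # [('axios', '1.14.1')]
```

An empty result is not an all-clear if you installed during the window without a lockfile; check npm's debug logs or your registry cache too.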
Anti-Distillation and the Competitive Fallout
Buried in the source code were two strategies designed to stop competitors from training on Claude Code's outputs. The first, called Fake Tools Injection, adds decoy tool definitions into API traffic when a specific flag is enabled. The idea is to corrupt any data that competitors might scrape from Claude Code sessions. The second mechanism uses cryptographically signed reasoning summaries to prevent unauthorized reproduction of the model's chain-of-thought process.
The leak essentially handed every AI company on the planet a complete blueprint for building a production-grade AI coding agent. The full orchestration logic for tools, memory management, multi-agent coordination, and safety guardrails was now public knowledge. Clean-room rewrites started appearing almost immediately. A Python version called claw-code by Korean developer Sigrid Jin surpassed 100,000 GitHub stars, actually outpacing Anthropic's own repository. A Rust rewrite also showed up.
Anthropic filed DMCA takedown notices through GitHub, resulting in over 8,100 repositories being disabled. The clean-room rewrites have so far survived takedowns since they contain no copied code, though their legal standing under copyright law remains genuinely untested and unclear. Decentralized mirrors went live on alternative platforms with promises they would never be taken down. Torrent distribution made complete containment impossible. An interesting legal wrinkle: if significant chunks of Claude Code were written by Claude itself (as Anthropic's CEO has implied), the copyright standing of DMCA claims becomes uncertain.
What This Means for Your Development Team
You don't have to be building AI to learn from this. The Claude Code leak is a textbook example of how small configuration oversights create massive exposures. A missing line in .npmignore. A storage bucket without proper access controls. A bundler that emits debug artifacts by default. None of these are exotic attack vectors. They are ordinary mistakes that compound into extraordinary consequences.
If your team publishes packages to npm, PyPI, or any public registry, ask yourself: do you have automated checks that flag unexpected file sizes or file types before publishing? Are your CI/CD pipelines configured to strip debug artifacts from production builds? Do your cloud storage buckets have access policies that follow the principle of least privilege? The supply chain angle matters too. The axios trojan was completely unrelated to Anthropic's packaging mistake, but it hit users in the same window because they were pulling fresh installs from npm. Pinning dependency versions, verifying package checksums, and using lockfiles aren't just best practices anymore. They are baseline requirements for any serious development operation.
Protect Your Codebase: Practical Checklist
- Add a pre-publish CI step that checks for source maps, debug symbols, .env files, and other artifacts that shouldn't ship
- Use npm-packlist to preview exactly what goes into a published package before it leaves your machine
- Set all cloud storage buckets to private by default and audit access policies quarterly
- Pin dependency versions in lockfiles and enable npm audit or equivalent scanning in CI
- Consider tools like Socket or Snyk that analyze packages for suspicious behavior, not just known CVEs
- Treat third-party AI dev tools like any other external dependency: pin versions, monitor updates, have a response plan
- Audit CLAUDE.md and similar config files in cloned repositories for potential context poisoning
- Limit broad bash permission rules and never enable dangerouslyDisableSandbox in shared environments
The simplest defense against shipping debug artifacts is the npm "files" field in package.json. Instead of trying to exclude everything you don't want (with .npmignore), whitelist only the files you do want. It's easier to maintain and harder to get wrong.
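In practice, that looks like this (a hedged sketch with placeholder names, not a drop-in config):

```json
{
  "name": "your-cli",
  "version": "1.0.0",
  "bin": { "your-cli": "dist/cli.js" },
  "files": ["dist/cli.js"]
}
```

With a whitelist like this, a stray dist/cli.js.map never makes it into the tarball, because it was never invited. (npm always includes package.json, README, and LICENSE regardless, so those need no entry.) Verify the result with npm pack --dry-run before every release.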
The Bottom Line
The Claude Code leak wasn't caused by some novel zero-day or advanced persistent threat. It was a packaging oversight compounded by permissive bundler defaults and a public storage bucket. The kind of thing that happens when teams move fast and configuration hygiene falls behind. Companies building AI tooling are shipping at breakneck speed, and that pace creates exactly these kinds of gaps. If your development workflow depends on third-party tools (and in 2026, it almost certainly does), treat their supply chain with the same rigor you'd apply to your own deployments. Pin versions. Verify checksums. Monitor for unexpected updates. Audit your own publishing pipeline. The question isn't whether something will go wrong. It's whether your team will catch it before the damage is done.
Sources
- Fortune - "Anthropic leaks its own AI coding tool's source code in second major security breach"
- VentureBeat - "Claude Code's source code appears to have leaked - here's what we know"
- The Hacker News - "Claude Code Source Leaked via npm Packaging Error"
- Bloomberg - "Anthropic Rushes to Limit Leak of Claude Code Source Code"
- The Register - "Anthropic accidentally exposes Claude Code source code" and follow-up analysis
- Cybernews - "Leaked Claude Code source spawns fastest growing repository in GitHub's history"
- SOCRadar - "Claude Code Leak: What You Need to Know"
- Straiker - "Claude Code Source Leak: With Great Agency Comes Great Responsibility"
- Layer5 - "The Claude Code Source Leak: 512,000 Lines, a Missing .npmignore, and the Fastest Growing Repo in GitHub History"

Founder & Lead Developer at Byte Dimensions
Cybersecurity practitioner who runs penetration tests and security audits for Dutch businesses. Built Aegis-Auto, an autonomous pentesting platform. Tracks supply chain incidents and AI security developments as part of Byte Dimensions' security practice.