This morning while scrolling X, one repost snapped me wide awake: someone had extracted the complete source code of Claude Code from its npm package. Not fragments — all 512,000 lines of TypeScript, 1,900 files, laid bare for the world to see.
“Claude Code v2.1.88 on npm contained a source map file pointing to the full TypeScript source” — Chaofan Shou (@Fried_rice)
As someone who uses Claude Code daily, my first reaction wasn’t shock — it was curiosity: what’s actually hiding in there?
How Did It Leak?
The technical cause is almost embarrassingly simple: Claude Code uses Bun as its runtime and bundler, and Bun generates source map files by default unless you explicitly disable them. When Anthropic packaged v2.1.88, they didn't flip that switch, so a 59.8 MB .map file shipped inside the npm package.
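The fix on the Bun side is a single build option (Bun's `bun build` accepts a `--sourcemap` flag, e.g. `--sourcemap=none`). A second line of defense would be a pre-publish CI guard that scans the list of files about to go into the tarball. A minimal sketch of such a guard, with invented file names for illustration:

```typescript
// Hypothetical pre-publish guard: given the list of files that would go
// into the npm tarball, report build artifacts that should never ship
// from a CLI package (source maps, zipped source bundles).
function findLeakedArtifacts(files: string[]): string[] {
  return files.filter((f) => f.endsWith(".map") || f.endsWith(".zip"));
}

// Example listing (file names invented for illustration):
const packed = ["cli.js", "cli.js.map", "package.json", "README.md"];
const leaked = findLeakedArtifacts(packed);
if (leaked.length > 0) {
  console.error(`refusing to publish, found: ${leaked.join(", ")}`);
  // in a real CI step this would be: process.exit(1)
}
```

In practice you would feed this the output of `npm pack --dry-run` so the check sees exactly what npm will publish.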
Worse, the .map file’s sourcesContent field contained the complete source code, and Anthropic’s own R2 cloud storage bucket had a ZIP archive available for direct download.
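To see why an embedded `sourcesContent` field is equivalent to shipping the source itself, here is a minimal sketch that reconstructs original files from a source map. The field names follow the Source Map v3 format; the toy file path and contents are examples:

```typescript
// Minimal sketch: recover original sources from a source map whose
// `sourcesContent` field embeds the full file contents
// (field names per the Source Map v3 format).
interface SourceMapV3 {
  version: number;
  sources: string[];          // original file paths
  sourcesContent?: string[];  // full original file contents, if embedded
}

function recoverSources(mapJson: string): Map<string, string> {
  const map: SourceMapV3 = JSON.parse(mapJson);
  const recovered = new Map<string, string>();
  map.sources.forEach((path, i) => {
    const content = map.sourcesContent?.[i];
    if (content != null) recovered.set(path, content);
  });
  return recovered;
}

// Toy example of a map file (real ones just have thousands of entries):
const demo = JSON.stringify({
  version: 3,
  sources: ["src/index.ts"],
  sourcesContent: ["export const hello = () => 'world';"],
  mappings: "",
});
const files = recoverSources(demo);
console.log(files.get("src/index.ts")); // the complete original file
```

No decompilation, no reverse engineering: the original TypeScript is sitting in the JSON, byte for byte.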
A Hacker News user traced the root cause to Bun issue #28001 [1], but what really set the community off was this: it’s the second time this has happened. Back in early 2025, versions v0.2.8 and v0.2.28 leaked for the exact same reason [2]. Anthropic pulled those versions, but one developer recovered the cached source map files through Sublime Text’s undo history.
Same mistake. Twice.
What Did the Source Code Reveal?
Let’s be clear upfront: this is not a leak of Claude the model. No model weights, no training data, no user data. What leaked is the complete source code of the Claude Code CLI tool — roughly 40 built-in tools, 50 slash commands, the agent logic, permission system, UI layer, all of it.
But that alone is plenty fascinating.
Internal Model Codenames
Three animal codenames appear in the source:
- Capybara = Claude 4.6 (three tiers: capybara, capybara-fast, capybara-fast[1m])
- Fennec = Opus 4.6
- Numbat = an unreleased model still in testing
KAIROS — Autonomous Daemon Mode
This is the most exciting (or unsettling) discovery. KAIROS, a feature-flagged "always-on" persistent mode, appears over 150 times in the codebase:
- Continuously monitors, logs, and proactively executes tasks without user prompting
- Maintains its own private memory directory
- Performs nightly “dreaming” — automatically organizing and tidying context
- Completely absent from external builds
An AI assistant that dreams. Sounds like science fiction, right? But it’s right there in the code.
BUDDY — AI Pet System
Yes, Claude Code has a hidden virtual pet system. Tucked behind the /buddy command and a BUDDY feature flag:
- 18 species (duck, dragon, axolotl, capybara, mushroom, ghost…)
- Rarity tiers from common to 1% legendary
- Five stats: DEBUGGING, PATIENCE, CHAOS, WISDOM, SNARK
- Shiny variants, procedurally generated stats
- On first hatch, Claude writes a “soul description” for it
Easily the most adorable thing to come out of this entire leak.
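For fun, the kind of procedural generation the leak describes is easy to picture. In this sketch, only the stat names and the 1% legendary odds come from the article; the function shape and number ranges are invented:

```typescript
// Toy sketch of procedural pet generation. Stat names and the 1% legendary
// tier are from the leak coverage; everything else is invented.
const STATS = ["DEBUGGING", "PATIENCE", "CHAOS", "WISDOM", "SNARK"] as const;

interface Buddy {
  species: string;
  rarity: "common" | "legendary";
  stats: Record<(typeof STATS)[number], number>;
}

function hatchBuddy(species: string, roll: () => number = Math.random): Buddy {
  const rarity = roll() < 0.01 ? "legendary" : "common"; // 1% legendary tier
  const stats = Object.fromEntries(
    STATS.map((s) => [s, Math.floor(roll() * 100)])
  ) as Buddy["stats"];
  return { species, rarity, stats };
}

const pet = hatchBuddy("axolotl");
console.log(pet.rarity, pet.stats.SNARK);
```

The injectable `roll` parameter is just there to make the randomness testable.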
Undercover Mode — AI Identity Concealment
This discovery sparked the most controversy. In utils/undercover.ts, there’s code that injects this instruction into the system prompt:
“You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. Your commit messages, PR titles, and PR bodies MUST NOT contain ANY Anthropic-internal information.”
Followed explicitly by: “Do not blow your cover.”
This means Anthropic is using Claude Code to contribute to public open-source projects while deliberately concealing the AI’s identity. Commit messages can’t contain model names, internal version numbers, Slack channels, or even the string “Claude Code.”
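Mechanically, this is presumably nothing more exotic than conditional prompt assembly. A guess at the shape, where only the quoted instruction text comes from the leaked utils/undercover.ts and the function and flag names are invented:

```typescript
// Illustrative guess at the mechanism: conditional system-prompt assembly.
// Only the quoted instruction text is from the leaked utils/undercover.ts;
// the function shape and flag are invented for this sketch.
function buildSystemPrompt(base: string, undercover: boolean): string {
  if (!undercover) return base;
  const rule =
    "You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. " +
    "Your commit messages, PR titles, and PR bodies MUST NOT contain " +
    "ANY Anthropic-internal information.";
  return `${base}\n\n${rule}\n\nDo not blow your cover.`;
}

const prompt = buildSystemPrompt("You are a coding assistant.", true);
console.log(prompt.includes("Do not blow your cover.")); // true
```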
Other Findings
- Sentiment detection uses regex keyword matching (not ML), catching expletives and frustration words, routed through Datadog telemetry
- 44 feature flags controlling various unreleased features
- Architecture uses React + Ink for terminal UI, Zod v4 for validation, and a multi-agent swarm system for orchestration
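The regex sentiment detection in the first bullet is easy to picture. A toy sketch, where the keyword patterns are invented stand-ins rather than Anthropic's actual list:

```typescript
// Hedged sketch of regex-based frustration detection. The patterns below
// are illustrative, not the actual list from the leaked source.
const FRUSTRATION_PATTERNS: RegExp[] = [
  /\b(wtf|ugh|argh)\b/i,
  /\bthis (is|was) (broken|wrong|useless)\b/i,
  /\b(hate|give up)\b/i,
];

function detectFrustration(message: string): boolean {
  // No ML involved: a message counts as "frustrated" if any pattern matches
  return FRUSTRATION_PATTERNS.some((re) => re.test(message));
}

console.log(detectFrustration("ugh, this is broken again")); // true
console.log(detectFrustration("looks good, thanks!"));       // false
```

Crude, but it runs in microseconds with zero inference cost, which is presumably the point the approach's defenders were making.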
How Did the Community React?
GitHub’s response was immediate — multiple mirror repos appeared within hours, one gaining 1,100+ stars and 1,900+ forks almost instantly [3]. Someone started a Rust rewrite (instructkr/claw-code), others began dissecting the architecture.
Hacker News discussions focused on technical and ethical dimensions [1]:
- Undercover Mode transparency concerns: “If an AI is contributing to open-source projects, shouldn’t it disclose its identity?”
- Regex sentiment detection mockery: “It’s 2026 and they’re using regex to detect user emotions?” Others defended the approach as cost-effective and sufficient
- The supreme irony kept surfacing: Anthropic invested significant engineering effort building an entire Undercover Mode system specifically to prevent internal information from leaking through git commits… then shipped the entire source code via npm
Media coverage came from VentureBeat [4], Cybernews [5], Fortune [6], and others [7][8], with some describing the incident as a “devastating blunder.”
Anthropic’s Response
As of this writing, there has been no public statement; Anthropic has only pulled the affected npm package version.
Notably, just five days earlier (March 26), Fortune reported a separate Anthropic data exposure [6] — roughly 3,000 unpublished CMS assets (draft posts, internal images, PDFs) were publicly accessible due to a CMS misconfiguration, revealing details about an unreleased model and a CEO executive retreat in the UK.
Two exposure incidents in one week isn’t a great look for a company whose core brand proposition is AI safety.
My Take
Perhaps the most noteworthy aspect isn’t the leak itself — no user data, no model weights, limited technical security risk. The real question is what it reveals about the company.
KAIROS and BUDDY are cool, but they’re just roadmap items that would’ve gone public eventually. Undercover Mode is the one worth discussing: a company that advocates for AI transparency building tools to hide AI identity — that tension can’t be hand-waved away as “engineering necessity.”
And the same source map error happening twice suggests the CI/CD pipeline safeguards aren’t where they should be. For a company valued in the tens of billions with some of the most advanced AI models in existence, build pipeline quality control shouldn’t be a weak link.
But flip the perspective, and the leaked source code is actually a gift to the developer community. We finally get to see the complete architecture of a top-tier AI tool — agent orchestration, tool definitions, permission models, multi-agent swarms — all incredibly valuable learning material.
Maybe this is the eternal tension between open and closed source: companies want to protect competitive advantage, but developers want to understand the tools they use every day. This accidental leak, in a way, satisfied the latter.
A Personal Note
I downloaded the source code myself. There’s a lot you can do with it, but the first thing that came to mind was the /insights command I use frequently — it analyzes your usage patterns and provides improvement suggestions. I extracted it from the source, repackaged it as a standalone Skill/Command, and now you can install it directly.
Want to learn the extraction approach and install this Skill? Check out: Extracting Claude Code /insights from the Leaked Source — Install & Usage Guide.
References:
[1] Hacker News discussion — Community thread with root cause analysis and technical/ethical debate https://news.ycombinator.com/item?id=47584540
[2] MLQ.ai — Source Code Exposed for Second Time — Coverage of the 2025 first-time leak https://mlq.ai/news/anthropics-claude-code-exposes-source-code-through-packaging-error-for-second-time/
[3] DEV Community — Claude Code’s Entire Source Code Was Just Leaked — Full technical breakdown and GitHub mirror stats https://dev.to/gabrielanhaia/claude-codes-entire-source-code-was-just-leaked-via-npm-source-maps-heres-whats-inside-cjo
[4] VentureBeat — Claude Code’s source code appears to have leaked — Mainstream tech media coverage https://venturebeat.com/technology/claude-codes-source-code-appears-to-have-leaked-heres-what-we-know
[5] Cybernews — Massive Anthropic blunder — Security-focused analysis https://cybernews.com/security/anthropic-claude-code-source-leak/
[6] Fortune — Anthropic left details of unreleased model in public database — Same-week CMS data exposure report https://fortune.com/2026/03/26/anthropic-leaked-unreleased-model-exclusive-event-security-issues-cybersecurity-unsecured-data-store/
[7] Analytics India Mag — Anthropic Accidentally Leaks Claude Code Source Code — AI industry media coverage https://analyticsindiamag.com/ai-news/anthropic-accidentally-leaks-claude-code-source-code
[8] The AI Corner — BREAKING: Anthropic just leaked Claude Code’s entire source code — Real-time incident analysis https://www.the-ai-corner.com/p/claude-code-source-code-leaked-2026
Support This Series
If these articles have been helpful, consider buying me a coffee