Claude Code leak explained: how one file exposed an entire AI tool

A fresh AI security incident has caught the attention of developers and researchers. On March 31, 2026, security researcher Chaofan Shou posted on X, “Claude code source code has been leaked via a map file in their npm registry.” Within hours, what looked like a small oversight turned into a full-blown exposure of a major AI tool’s internal code.

The leak involved Claude Code, the command-line tool Anthropic built around its Claude models. The company was not hacked. Instead, a simple configuration mistake exposed its entire TypeScript codebase online, raising fresh questions about software supply chain risks.

How a small file exposed 5 lakh lines of code

The issue started with something called a source map file. These files exist for debugging: they help developers trace errors in minified production code back to the original, readable source.
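A source map is just a JSON file, and its optional `sourcesContent` field can embed the original files verbatim. A minimal sketch (using toy data, not the actual leaked file) shows why shipping a .map file can hand out the full source:

```typescript
// A source map is plain JSON. When the optional "sourcesContent"
// field is present, it embeds the original files verbatim — which is
// why publishing a .map file can expose the full source.
// (Toy data below; not the actual leaked file.)
import * as fs from "fs";

const map = {
  version: 3,
  file: "cli.js",
  sources: ["src/cli.ts"],
  sourcesContent: ['console.log("hello from the original TypeScript");'],
  mappings: "AAAA",
};
fs.writeFileSync("cli.js.map", JSON.stringify(map));

// "Recovering" the source is one JSON.parse away.
const leaked = JSON.parse(fs.readFileSync("cli.js.map", "utf8"));
leaked.sources.forEach((name: string, i: number) => {
  console.log(`--- ${name} ---`);
  console.log(leaked.sourcesContent[i]);
});
```

No decompilation or reverse engineering is needed; the original code sits in the file as plain text.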

In this case, the source map was accidentally included in the published npm package because it was not excluded via the .npmignore file. The result: anyone could download the full source code from Anthropic's own storage.
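The standard safeguard is simple. A blocklist entry in .npmignore keeps map files out of the published package (an allowlist via the `files` field in package.json is the stricter alternative). A minimal example:

```
# .npmignore — keep source maps out of the published npm package
*.map
```

Running `npm publish --dry-run` before publishing lists exactly which files will ship, which is the easiest way to catch this class of mistake.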

The exposed data includes:

  • Around 5.12 lakh lines of code
  • Nearly 1,900 files
  • Written in strict TypeScript

This was not a partial leak. It was the entire CLI system behind Claude Code.

What the leaked code reveals about Claude

The code shows that the system is much more complex than it looks on the surface. One file called QueryEngine handles API calls, streaming, caching, and multi-step interactions. Another module defines tools and permissions for different actions.

There are also around 85 commands built into the system. The tool layer supports operations like file reads, bash execution, web access, and even spawning sub-agents.
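Purely as illustration of the pattern described (this is not Anthropic's actual code; all names here are invented), a tool layer with per-tool permissions is often a registry that maps tool names to handlers and checks granted permissions before running anything:

```typescript
// Hypothetical sketch of a tool/permission registry — illustrative only,
// not derived from the leaked code. All names are invented.
type Permission = "read" | "execute" | "network";

interface Tool {
  name: string;
  requires: Permission[];
  run(input: string): string;
}

class ToolRegistry {
  private tools = new Map<string, Tool>();
  constructor(private granted: Set<Permission>) {}

  register(tool: Tool): void {
    this.tools.set(tool.name, tool);
  }

  invoke(name: string, input: string): string {
    const tool = this.tools.get(name);
    if (!tool) throw new Error(`unknown tool: ${name}`);
    // Refuse to run any tool whose permissions were not granted.
    for (const p of tool.requires) {
      if (!this.granted.has(p)) throw new Error(`permission denied: ${p}`);
    }
    return tool.run(input);
  }
}

// A registry granted only "read" can run a file-read tool
// but refuses a bash-style tool that needs "execute".
const registry = new ToolRegistry(new Set<Permission>(["read"]));
registry.register({ name: "FileRead", requires: ["read"], run: (p) => `contents of ${p}` });
registry.register({ name: "Bash", requires: ["execute"], run: (c) => `ran ${c}` });

console.log(registry.invoke("FileRead", "README.md"));
```

The design point is that the permission check lives in one place, the registry, rather than inside each tool.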

One interesting detail is the multi agent setup. The system can create parallel agents using different models. Some share the same context. Others run in separate workspaces.
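The shared-versus-isolated context split can be sketched as concurrent async tasks (again purely illustrative, not the leaked implementation; names are invented):

```typescript
// Hypothetical sketch of parallel sub-agents — not Anthropic's code.
// Each "agent" is an async task; two share one context object,
// a third runs on an isolated copy (a separate workspace).
interface Context {
  notes: string[];
}

async function runAgent(name: string, ctx: Context): Promise<string> {
  ctx.notes.push(`${name} was here`);
  return `${name} done`;
}

async function main(): Promise<void> {
  const shared: Context = { notes: [] };
  const isolated: Context = { notes: [...shared.notes] }; // fresh copy
  const results = await Promise.all([
    runAgent("agent-a", shared),
    runAgent("agent-b", shared),
    runAgent("agent-c", isolated),
  ]);
  console.log(results.join(", "));
  // shared ends with 2 notes; the isolated copy sees only its own.
  console.log(`shared: ${shared.notes.length}, isolated: ${isolated.notes.length}`);
}
main();
```

Agents sharing a context see each other's writes; the isolated agent does not, which is the basic trade-off between coordination and interference.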

The code also shows a hidden feature called “Buddy”. It is described as a Tamagotchi-style system with traits and generated personalities. It sits behind a feature flag and is not visible to users.

Second exposure in a week

This is the second issue linked to Anthropic in just a few days. Earlier, a CMS error exposed details about an unreleased model called Claude Mythos.

Neither incident was an attack; both were configuration mistakes. Still, they highlight how even advanced AI companies can stumble on basic operational hygiene.