Anthropic confirmed on April 1, 2026, that the full source code of its Claude Code command-line interface was inadvertently exposed through a source map file shipped in an official npm package.
The exposure occurred in version 2.1.88 of the @anthropic-ai/claude-code package, released on March 31, 2026. The package included a 60-megabyte file named cli.js.map containing unminified TypeScript code spanning about 1,900 files and more than 512,000 lines, according to technical analyses of the leaked material.
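A source map explains why a single .map file can expose an entire codebase: under the widely used Source Map v3 format, a map may embed the complete original source files in its sourcesContent array, not just line mappings. The sketch below illustrates this with invented example content; the field names follow the v3 format, but nothing here is taken from the leaked file itself.

```typescript
// Minimal sketch of the Source Map v3 shape relevant to the leak.
// A map with a populated `sourcesContent` array effectively ships the
// original, unminified source alongside the bundled output.
interface SourceMapV3 {
  version: number;
  sources: string[];
  sourcesContent?: (string | null)[];
  mappings: string;
}

// Invented example content for illustration only.
const exampleMap: SourceMapV3 = {
  version: 3,
  sources: ["src/example.ts"],
  sourcesContent: ["// full unminified TypeScript would appear here"],
  mappings: "AAAA",
};

// Recover original sources from an embedded map: pair each source name
// with its corresponding entry in `sourcesContent`.
function recoverSources(map: SourceMapV3): Record<string, string> {
  const recovered: Record<string, string> = {};
  map.sources.forEach((name, i) => {
    const content = map.sourcesContent?.[i];
    if (content != null) recovered[name] = content;
  });
  return recovered;
}

const recovered = recoverSources(exampleMap);
```

Tooling that reads source maps performs essentially this pairing, which is why researchers could reconstruct the file tree from cli.js.map alone.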
Security researcher Chaofan Shou first identified the issue, which stemmed from a packaging error: the build, which used the Bun bundler, failed to exclude the generated map file from the published package, the analyses said. Anthropic described the incident as a release packaging issue caused by human error, not a security breach. The company pulled the affected package version shortly after the discovery.
Analyses of the leaked code showed it contained internal features such as a regex pattern in a file called userPromptKeywords.ts designed to detect profanity and expressions of frustration in user prompts for telemetry logging. The code also included provisions for an Undercover Mode that allows Anthropic employees to contribute to open-source repositories without AI attribution, the analyses said.
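The actual pattern from userPromptKeywords.ts is not reproduced here; the following is a hypothetical sketch of how keyword-based frustration detection for telemetry might look, with an invented word list and invented function names.

```typescript
// Hypothetical illustration only: a keyword regex flags prompts containing
// profanity or frustration markers so a telemetry event can be tagged.
// The word list and names are invented, not taken from the leaked code.
const FRUSTRATION_PATTERN = /\b(wtf|ffs|damn|ugh|stupid|broken|hate)\b/i;

function promptSignals(prompt: string): { frustrated: boolean } {
  return { frustrated: FRUSTRATION_PATTERN.test(prompt) };
}
```

A scheme like this records only a boolean signal per prompt rather than the prompt text itself, which is one plausible reason such a pattern would live client-side.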
The code was mirrored on public GitHub repositories shortly after the leak became known, including one maintained by instructkr and another by Alex Kim. A discussion thread on Reddit’s r/ClaudeAI forum first drew widespread attention to the exposure on March 31, 2026.
Prediction market platform Polymarket referenced the leak in a post on X, highlighting the profanity detection and telemetry elements among the revealed internals.
Anthropic has not commented further on the specific features disclosed in the code. No customer data or credentials were exposed in the incident, according to the company’s statement.
The leak has prompted discussions among developers and AI researchers about software packaging practices for tools that incorporate large language models. It also raised questions about transparency in how AI companies handle internal telemetry and operational safeguards in publicly distributed codebases.
Mainstream outlets including Ars Technica, The Register and VentureBeat reported on the episode, citing the same technical breakdowns and Anthropic’s confirmation. No legal or regulatory actions related to the leak have been reported as of April 1, 2026.