April 01, 2026 ChainGPT

Anthropic's Claude Code Leak Sparks Clean-Room Rewrite as Decentralized Mirrors Make It Permanent

Anthropic accidentally pushed a huge chunk of Claude Code into the wild this week, and, as the crypto world knows well, once something hits distributed infrastructure it is effectively impossible to erase.

What happened
- Early Tuesday, Anthropic released Claude Code v2.1.88 to npm. The package included a 59.8 MB JavaScript source map, a debug file that can reconstruct the original, human-readable source from minified code. Source maps are meant to stay private; a single oversight in the package's ignore settings let this one slip out.
- Researcher/intern Chaofan Shou appears to have been among the first to spot it, posting a download link to X at around 4:23 a.m. ET. The thread exploded, reportedly drawing millions of views, and although Anthropic quickly removed the npm package, copies had already spread. Observers archived roughly 512,000 lines of code across about 1,900 files.

Anthropic's response
- Anthropic told Decrypt the incident was a release-packaging mistake caused by human error and that no sensitive customer data or credentials were exposed. The company says it is implementing measures to prevent a recurrence.

What the leak revealed
- The files exposed a large portion of Claude Code's internal architecture: LLM API orchestration, multi-agent coordination, permission logic, OAuth flows, and 44 hidden feature flags for unreleased features.
- Notable finds included Kairos, an always-on background daemon that stores memory logs and performs nightly "dreaming" to consolidate knowledge, and Buddy, a Tamagotchi-like AI companion with species, rarities, and stats such as debugging, patience, chaos, and wisdom (a teaser rollout was apparently planned for April 1–7).
- Security researchers also uncovered an "Undercover Mode" subsystem designed to stop the model from leaking Anthropic's internal codenames; one injected system prompt bluntly reads, "Do not blow your cover."

The GitHub backlash, and a clean-room twist
- Anthropic began issuing DMCA takedowns for GitHub mirrors of the leaked code.
Those takedowns work on centralized platforms, which are legally compelled to comply.
- Within hours, however, a Korean developer named Sigrid Jin reportedly ported the core architecture from the leaked TypeScript into Python using an AI orchestration tool called oh-my-codex and published "claw-code." That repo reportedly rocketed to 30,000 stars faster than any GitHub repository before it. Its maintainers argue it is a clean-room rewrite, a new, independently authored work that sidesteps direct copyright-infringement claims.

Legal fog
- The copyright picture is messy. Recent court rulings have already muddied how copyright applies to AI-generated work; the DC Circuit's March 2025 decision, and the Supreme Court's refusal to hear the related appeal, left unresolved questions about ownership when large parts of a codebase may have been authored by models themselves. Anthropic's CEO has implied that parts of Claude Code were generated by Claude, which complicates any takedown or copyright strategy.

Decentralization = permanence
- Decentralized mirrors made the leak effectively permanent. An account called @gitlawb mirrored the original code to Gitlawb, a decentralized git platform, posting "Will never be taken down." Others compiled and shared Claude's internal system prompts, which are of high interest to prompt engineers and jailbreak researchers.
- The takeaway for the crypto community is clear: DMCA notices can force removals from centralized services like GitHub, but decentralized infrastructure (distributed git platforms, torrents, IPFS-style mirrors) has no single choke point. Once copies exist across those systems, the only practical questions are how many mirrors exist and how resilient the hosting is.

Why it matters
- Beyond the immediate drama, the incident shows how fragile access controls can be, how valuable internal LLM orchestration code is, and how decentralization changes the calculus around corporate control of software.
- For crypto developers and infrastructure builders, it is another reminder that censorship resistance is not just a philosophical stance; it is an operational reality that alters the legal and technical strategies for protecting or exposing code.

Bottom line
- Anthropic's accidental leak pulled back the curtain on one of the most advanced AI coding agents, sparked a rapid clean-room rewrite that tested the limits of the DMCA, and underscored how decentralized platforms can make digital "takebacks" impossible. In the age of torrents, distributed git, and immutable mirrors, once code is out, it is effectively out for good.
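A technical footnote on why one stray .map file was so damaging: a JavaScript source map is plain JSON, and when a bundler populates its optional "sourcesContent" array, every original file is embedded verbatim, so no de-minification is needed. The sketch below shows, under that assumption, how readable source could be recovered from such a file; the function and all file paths are illustrative, not taken from the actual leaked package.

```python
import json
from pathlib import Path

def extract_sources(map_path: str, out_dir: str) -> list[str]:
    """Dump the original files embedded in a JavaScript source map.

    A .map file is JSON; when it carries "sourcesContent", each entry is
    the full human-readable source for the matching path in "sources".
    """
    source_map = json.loads(Path(map_path).read_text())
    out = Path(out_dir)
    written = []
    for name, content in zip(source_map.get("sources", []),
                             source_map.get("sourcesContent") or []):
        if content is None:  # entries may omit the embedded source
            continue
        # Strip bundler prefixes like "webpack://pkg/" to a relative path.
        rel = name.split("://", 1)[-1].lstrip("/")
        dest = out / rel
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(content)
        written.append(str(dest))
    return written
```

On the prevention side, the "oversight in the package ignore settings" comes down to npm packaging hygiene: either an `.npmignore` entry such as `*.map` or an explicit `files` whitelist in `package.json` keeps debug artifacts like source maps from ever reaching the registry.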