
It's extremely good that Claude's source-code leaked

Cory Doctorow makes a startling claim: the leak of an AI company's source code isn't a disaster, but a vital public service. While Anthropic scrambles to scrub the leaked code from the web, Doctorow argues that this moment exposes a dangerous legal weapon used to hide corporate malfeasance. For busy readers tracking the intersection of technology and democracy, this piece offers a crucial warning about how copyright law is being weaponized to suppress truth.

The Weaponization of Takedown Notices

Doctorow begins by dissecting the immediate reaction to the leak of Claude Code, Anthropic's flagship coding assistant. He notes that the company is "flooding the internet with 'takedown notices,'" a tactic enabled by a specific provision of the 1998 Digital Millennium Copyright Act. The author explains that this law creates a system where intermediaries like web hosts and search engines face massive financial penalties—up to $150,000 per infringement—if they do not instantly remove content upon request.
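To see why intermediaries comply instantly rather than scrutinize notices, it helps to run the liability arithmetic. A minimal sketch, using the $150,000-per-infringement ceiling cited in the piece (the allegation counts below are made up for illustration):

```python
# Hypothetical illustration of why a platform removes first and asks later.
# $150,000 is the statutory-damages ceiling per infringement cited in the
# article; the number of alleged works is invented for this example.
MAX_STATUTORY_DAMAGES = 150_000  # USD, per infringement

def worst_case_exposure(works_alleged: int) -> int:
    """Upper bound on damages if every allegation in a notice succeeds."""
    return works_alleged * MAX_STATUTORY_DAMAGES

# Even one modest takedown batch dwarfs most platforms' legal budgets.
print(worst_case_exposure(200))  # -> 30000000, i.e. $30 million
```

Against exposure like that, the rational move for any host is to remove the material immediately, which is exactly the asymmetry Doctorow identifies.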


The core of the argument is that this legal framework forces platforms to act as censors without any judicial oversight. As Doctorow puts it, "In practice, that means that anyone can send a notice to any intermediary and have anything removed from the internet." This dynamic creates a chilling effect where the mere accusation of copyright infringement is enough to erase information permanently. The author's analysis is particularly sharp here because it moves beyond the technical glitch to the structural vulnerability it reveals: a system designed to protect intellectual property has been inverted to protect corporate secrets.

Critics might argue that without these strict liability rules, platforms would be overwhelmed by genuine infringement, making the internet unusable for creators. However, Doctorow counters that the current system is so tilted that it allows bad actors to scrub the internet of damaging truths with little consequence.

A History of Corporate Censorship

To illustrate the stakes, Doctorow reaches back to 2003, drawing a parallel to the Diebold voting machine scandal. He reminds readers that when leaked memos revealed the company knew its machines were insecure, "Diebold sent thousands of DMCA 512 takedown notices in an attempt to suppress the leaked memos." This historical reference is not just an anecdote; it establishes a pattern where the same legal tool used to hide voting machine flaws is now being used to hide AI code. The connection to the Brooks Brothers riot and the 2000 election debacle adds weight to the argument, showing that the consequences of such censorship can alter democratic outcomes.

The author expands this timeline to include the 2007 AACS encryption key controversy, where an entire industry consortium tried to ban a 16-byte number from the internet. "The position of the industry consortium that created the key was that this was an illegal integer," Doctorow writes, highlighting the absurdity of treating a mathematical sequence as a copyright violation. He argues that it was only the "determined action of an army of users" that kept the information alive. This historical context strengthens his claim that the current takedown blitz against the Claude leak is not a new phenomenon, but a recurring strategy for the powerful to control the narrative.
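The "illegal integer" framing is easier to appreciate when you see how little there is to the secret. A minimal Python illustration, using the AACS processing key as it circulated publicly in 2007:

```python
# The "secret" at the center of the 2007 AACS controversy was just a number.
# This is the widely circulated 16-byte processing key, written in hex.
KEY_HEX = "09F911029D74E35BD84156C5635688C0"

key_bytes = bytes.fromhex(KEY_HEX)  # the key as raw bytes
key_int = int(KEY_HEX, 16)          # the same key as an ordinary integer

print(len(key_bytes))  # -> 16 (a 128-bit value)
print(key_int)         # the number the consortium tried to ban
```

The consortium's legal position amounted to claiming ownership of `key_int`, an ordinary integer anyone can write down, which is why the takedown campaign collapsed under mass republication.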

The takedown system is so tilted in favor of censorship that it takes a massive effort to keep even the smallest piece of information online in the face of a determined adversary.

The Economics of Suppression

Doctorow then pivots to the modern economy of reputation management, describing how this legal mechanism has become a profitable industry for scrubbing the internet of evidence regarding war crimes, fraud, and abuse. He points out that "there's a whole industry of shady 'reputation management' companies that collect large sums in exchange for scrubbing the internet of information their clients want removed from the public eye." The author cites the case of Jeffrey Epstein, who spent tens of thousands to clean up his profile, and the tactics of firms like Eliminalia, which create fake articles to generate takedown targets.

This section is particularly effective because it connects abstract legal theory to tangible human harm. The author argues that the system is not just flawed; it is actively predatory. "My favorite is the one employed by Eliminalia... They set up WordPress sites and copy press articles that cast their clients in an unfavorable light to these sites, backdating them so they appear to have been published before the originals." This description of "reputation laundry" reveals a dark underbelly of the internet where truth is negotiable for those with enough money.

The Trap of Corporate-Led AI Regulation

The commentary culminates in a critique of how media companies are approaching AI regulation. Doctorow observes that major studios are demanding new copyright rules to control AI training, framing it as a defense of artists. However, he argues this is a ruse. "Here's a good rule of thumb: any time your boss demands a new rule, you should be very skeptical about whether that rule will benefit you," he writes. The author suggests that these companies are not trying to stop AI from replacing workers; they simply want to monopolize the technology.

He contrasts this corporate strategy with the successful Hollywood writers' strike, which focused on labor rights rather than copyright expansion. "The writers weren't demanding a new copyright that would allow them to control whether their work could be used to train an AI. They struck for the right not to have their wages eroded by AI," Doctorow explains. This distinction is vital for understanding the current political landscape. The author warns that if media companies succeed in expanding copyright to block AI analysis, they will use those powers to enrich themselves, not the workers.

Just because you're on their side, it doesn't mean they're on your side.

The piece concludes by returning to the immediate leak, noting that the code contains information about real-world harms, including the potential involvement of AI in military actions. The author implies that the public has a right to know these details, regardless of the corporate desire to keep them hidden.

Bottom Line

Doctorow's strongest asset is his ability to weave historical precedents like the Diebold scandal into a coherent argument about current events, demonstrating that the abuse of copyright law is a systemic feature, not a bug. The piece's biggest vulnerability lies in its assumption that public access to leaked code will inevitably lead to better outcomes, ignoring the potential for malicious actors to weaponize the same information. Readers should watch for how the administration and tech giants respond to this leak, as the outcome will likely set the precedent for future battles over transparency and corporate secrecy.

Deep Dives

Explore these related deep dives:

  • The Public Domain: Enclosing the Commons of the Mind by James Boyle

  • Brooks Brothers riot

    This 2000 incident serves as a historical precedent for how coordinated political actors can weaponize legal and logistical mechanisms to disrupt democratic processes, mirroring the article's concern about using DMCA takedowns to suppress information.

  • Chilling effect

    The article describes how the threat of massive statutory damages forces intermediaries to remove content without scrutiny, a phenomenon this legal concept defines as the suppression of lawful speech due to fear of litigation.

Sources

It's extremely good that Claude's source-code leaked

by Cory Doctorow · Pluralistic



Anthropic's developers made an extremely basic configuration error, and as a result, the source-code for Claude Code – the company's flagship coding assistant product – has leaked and is being eagerly analyzed by many parties:

https://news.ycombinator.com/item?id=47586778

In response, Anthropic is flooding the internet with "takedown notices." These are a special kind of copyright-based censorship demand established by section 512 of the 1998 Digital Millennium Copyright Act (DMCA 512), allowing for the removal of material without any kind of evidence, let alone a judicial order:

https://www.removepaywall.com/search?url=https://www.wsj.com/tech/ai/anthropic-races-to-contain-leak-of-code-behind-claude-ai-agent-4bc5acc7

Copyright is a "strict liability" statute, meaning that you can be punished for violating copyright even if you weren't aware that you had done so. What's more, "intermediaries" – like web hosts, social media platforms, search engines, and even caching servers – can be held liable for the copyright violations their users engage in. The liability is tremendous: the DMCA provides for $150,000 per infringement.

DMCA 512 is meant to offset this strict liability. After all, there's no way for a platform to know whether one of its users is infringing copyright – even if a user uploads a popular song or video, the provider can't know whether they've licensed the work for distribution (or even if they are the creator of that work). A cumbersome system in which users would upload proof that they have such a license wouldn't just be onerous – it would still permit copyright infringement, because there's no way for an intermediary to know whether the distribution license the user provided was genuine.

As a compromise, DMCA 512 absolves intermediaries from liability, if they "expeditiously remove" material upon notice that it infringes someone's copyright. In practice, that means that anyone can send a notice to any intermediary and have anything removed from the internet. The intermediary who receives ...