Cory Doctorow makes a startling claim: the leak of an AI company's source code isn't a disaster, but a vital public service. While the tech industry scrambles to contain the fallout, Doctorow argues that this moment exposes a dangerous legal weapon used to hide corporate malfeasance. For busy readers tracking the intersection of technology and democracy, this piece offers a crucial warning about how copyright law is being weaponized to suppress truth.
The Weaponization of Takedown Notices
Doctorow begins by dissecting the immediate reaction to the leak of Claude Code, Anthropic's flagship coding assistant. He notes that the company is "flooding the internet with 'takedown notices,'" a tactic enabled by Section 512 of the 1998 Digital Millennium Copyright Act. The author explains that this provision shields intermediaries like web hosts and search engines from copyright liability, which can run to $150,000 in statutory damages per work infringed, but only if they remove content promptly upon receiving a notice. In practice, the safe harbor functions as a massive financial penalty for any intermediary that hesitates.
The core of the argument is that this legal framework forces platforms to act as censors without any judicial oversight. As Doctorow puts it, "In practice, that means that anyone can send a notice to any intermediary and have anything removed from the internet." This dynamic creates a chilling effect where the mere accusation of copyright infringement is enough to erase information permanently. The author's analysis is particularly sharp here because it moves beyond the technical glitch to the structural vulnerability it reveals: a system designed to protect intellectual property has been inverted to protect corporate secrets.
Critics might argue that without these strict liability rules, platforms would be overwhelmed by genuine infringement, making the internet unusable for creators. However, Doctorow counters that the current system is so tilted that it allows bad actors to scrub the internet of damaging truths with little consequence.
A History of Corporate Censorship
To illustrate the stakes, Doctorow reaches back to 2003, drawing a parallel to the Diebold voting machine scandal. He reminds readers that when leaked memos revealed the company knew its machines were insecure, "Diebold sent thousands of DMCA 512 takedown notices in an attempt to suppress the leaked memos." This historical reference is not just an anecdote; it establishes a pattern where the same legal tool used to hide voting machine flaws is now being used to hide AI code. The connection to the Brooks Brothers riot and the 2000 election debacle adds weight to the argument, showing that the consequences of such censorship can alter democratic outcomes.
The author expands this timeline to include the 2007 AACS encryption key controversy, in which an entire industry consortium tried to ban a 16-byte number (32 hexadecimal digits) from the internet. "The position of the industry consortium that created the key was that this was an illegal integer," Doctorow writes, highlighting the absurdity of treating a number as contraband. He argues that it was only the "determined action of an army of users" that kept the information alive. This historical context strengthens his claim that the current takedown blitz against the Claude leak is not a new phenomenon, but a recurring strategy for the powerful to control the narrative.
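To make the "illegal integer" absurdity concrete, here is a minimal Python sketch. The hex string is the widely republished 2007 AACS processing key; the point is simply that the banned artifact is an ordinary 128-bit number, indistinguishable in kind from a UUID, an IPv6 address, or any other 16-byte value:

```python
# The widely republished 2007 AACS processing key, as a hex string.
key_hex = "09F911029D74E35BD84156C5635688C0"

# Parse it as a plain integer -- the thing the consortium
# claimed was unlawful to publish.
key_int = int(key_hex, 16)

# It round-trips to an ordinary 16-byte (128-bit) value.
key_bytes = key_int.to_bytes(16, "big")

print(len(key_hex))    # 32 hexadecimal digits
print(len(key_bytes))  # 16 bytes
```

Nothing distinguishes this value mathematically from any other 128-bit integer, which is exactly why attempts to suppress it failed once users began reposting it.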
The takedown system is so tilted in favor of censorship that it takes a massive effort to keep even the smallest piece of information online in the face of a determined adversary.
The Economics of Suppression
Doctorow then pivots to the modern economy of reputation management, describing how this legal mechanism underpins a profitable industry devoted to scrubbing the internet of evidence of war crimes, fraud, and abuse. He points out that "there's a whole industry of shady 'reputation management' companies that collect large sums in exchange for scrubbing the internet of information their clients want removed from the public eye." The author cites the case of Jeffrey Epstein, who spent tens of thousands to clean up his online profile, and the tactics of firms like Eliminalia, which create fake articles to generate takedown targets.
This section is particularly effective because it connects abstract legal theory to tangible human harm. The author argues that the system is not just flawed; it is actively predatory. "My favorite is the one employed by Eliminalia... They set up WordPress sites and copy press articles that cast their clients in an unfavorable light to these sites, backdating them so they appear to have been published before the originals." This description of "reputation laundry" reveals a dark underbelly of the internet where truth is negotiable for those with enough money.
The Trap of Corporate-Led AI Regulation
The commentary culminates in a critique of how media companies are approaching AI regulation. Doctorow observes that major studios are demanding new copyright rules to control AI training, framing it as a defense of artists. However, he argues this is a ruse. "Here's a good rule of thumb: any time your boss demands a new rule, you should be very skeptical about whether that rule will benefit you," he writes. The author suggests that these companies are not trying to stop AI from replacing workers; they simply want to monopolize the technology.
He contrasts this corporate strategy with the successful Hollywood writers' strike, which focused on labor rights rather than copyright expansion. "The writers weren't demanding a new copyright that would allow them to control whether their work could be used to train an AI. They struck for the right not to have their wages eroded by AI," Doctorow explains. This distinction is vital for understanding the current political landscape. The author warns that if media companies succeed in expanding copyright to block AI analysis, they will use those powers to enrich themselves, not the workers.
Just because you're on their side, it doesn't mean they're on your side.
The piece concludes by returning to the immediate leak, noting that the code contains information about real-world harms, including the potential involvement of AI in military actions. The author implies that the public has a right to know these details, regardless of the corporate desire to keep them hidden.
Bottom Line
Doctorow's strongest asset is his ability to weave historical precedents like the Diebold scandal into a coherent argument about current events, demonstrating that the abuse of copyright law is a systemic feature, not a bug. The piece's biggest vulnerability lies in its assumption that public access to leaked code will inevitably lead to better outcomes, ignoring the potential for malicious actors to weaponize the same information. Readers should watch for how the administration and tech giants respond to this leak, as the outcome will likely set the precedent for future battles over transparency and corporate secrecy.