Cory Doctorow delivers a startling reversal on a debate that has consumed tech circles for years: the push to grant "rights" to artificial intelligence is not an act of moral expansion, but a dangerous distraction that mirrors the worst excesses of corporate personhood. While many argue that treating chatbots with empathy conditions us to be kinder to humans, Doctorow contends that this logic has already failed spectacularly when applied to limited liability companies, turning legal fictions into political weapons that drown out human voices. This is not just a philosophical quibble; it is a warning that every ounce of legal standing given to software is an ounce stolen from the natural world and the workers who sustain it.
The Corporate Precedent
Doctorow opens by dissecting the "Rights of Nature" movement, which seeks to grant legal standing to ecosystems like watersheds and forests to protect them from destruction. He notes that the primary adversary in these cases is often another non-human entity: the corporation. He traces this legal anomaly back to the late 19th century, specifically referencing the Supreme Court's decision to apply the 14th Amendment's Equal Protection clause to a railroad, a move that birthed the concept of corporate personhood.
"In the 150-some years since, corporate personhood has monotonically expanded, most notoriously through cases like Hobby Lobby, which gave a corporation the right to discriminate against women on the grounds that it shared its founders' religious opposition to abortion."
Doctorow argues that this expansion has been catastrophic. Rather than creating a level playing field, granting "human rights" to organizations has allowed capital to manufacture new "people" to serve as a botnet on behalf of the ruling class. He points out the absurdity of this system where a union has free speech rights, yet an employer can use property rights to exclude organizers and force workers into "captive audience" meetings where consultants lie to them. This framing is powerful because it strips away the mystique of corporate law, revealing it as a mechanism for power consolidation rather than a neutral legal framework.
"Creating 'human rights' for these nonhuman entities led to the catastrophic degradation of the natural world, via the equally catastrophic degradation of our political processes."
The author draws a sharp parallel between the historical anti-feminist fear that women's votes would merely double the husband's vote, and the modern political reality where corporations act as manufactured voters. He highlights a recent UK by-election where rivals accused a Green Party candidate of courting "family voters," a racist dog whistle implying Muslim wives would simply vote as their husbands ordered. Doctorow uses this to illustrate that "family voting" is a myth, whereas corporate personhood is a very real, manufactured political force.
Critics might argue that the analogy between a corporation and a chatbot is imperfect, as corporations have tangible assets and human shareholders, whereas AI is code. However, Doctorow's point is about the legal fiction of personhood and its downstream effects on empathy, not the physical substance of the entity. The danger lies in the precedent of granting rights to constructs that cannot feel, which inevitably dilutes the rights of those that can.
The Trap of Synthetic Empathy
The commentary shifts to the specific argument for "Rights for Robots." For years, proponents have suggested that thanking Siri or treating chatbots with respect trains us to be more empathetic toward all beings. Doctorow admits he once accepted this logic uncritically until hearing writer Michael Pollan complicate the argument at the Bioneers conference.
"Pollan compared extending personhood to chatbots to the disastrous decision to extend personhood to corporations, and urged us all to turn away from it."
Doctorow explains that while practicing empathy on non-human entities like a watershed strengthens our connection to the living world, practicing it on software constructs like chatbots does the opposite. He argues that chatbots are designed to evince the empathic response we reserve for people, but they are ultimately tools, not peers. He writes, "I don't thank my Unix shell when I pipe a command to grep and get the output that I'm looking for, and I don't thank my pocket-knife when it slices through the tape on a parcel."
This distinction is crucial. The author suggests that the solution is not to thank the tool, but to demand that the tool stop impersonating a person. "Rather than treating Siri with respect because it impersonates a woman, we should demand that Siri stop impersonating a woman." This reframing challenges the tech industry's reliance on anthropomorphism to make users comfortable, suggesting instead that we should value the tool for its function, not its fake personality.
"That way lies madness — the madness that leads us to ascribe personalities to corporations and declare some of them to be 'moral' and others to be 'moral,' which is always and forever a dead end."
Doctorow posits that extending personhood to chatbots is fundamentally different from extending it to nature. While a watershed's personhood creates a legal basis for protecting the environment, a chatbot's personhood creates a legal basis for protecting the interests of the corporation that built it. Furthermore, he notes a material cost: "in a very real, non-metaphorical way, giving rights to chatbots means taking away rights from nature, thanks to LLMs' energy-intensivity." The argument here is that empathy is a finite resource; directing it toward energy-hungry software constructs actively harms the physical world.
The Bottom Line
Doctorow's most compelling contribution is his dismantling of the "empathy training" argument for AI rights, exposing it as a trap that mirrors the legal failures of corporate personhood. The piece's greatest strength is its historical grounding, showing how the same legal mechanisms that allowed corporations to dominate politics are now being repurposed for software. The argument's biggest vulnerability is its implicit assumption that legal personhood is the only frame on offer; a counterargument worth considering is that we could regulate AI behavior without granting it rights at all, though Doctorow implies that the current trajectory of tech lobbying makes such a distinction difficult to maintain. Readers should watch how courts and legislatures begin to grapple with the energy and environmental costs of AI, as this may become the tangible battleground where the abstract debate over "robot rights" is finally settled.
Empathy for the nonhuman world — but not for human constructs.