Why Substack is at a crossroads

Casey Newton forces an uncomfortable reckoning for the digital age: the era of treating online platforms as neutral infrastructure is over, and Substack's refusal to adapt is now a direct financial liability. While the company clings to a libertarian ideal of absolute free speech, Newton dismantles the logic that "sunlight is the best disinfectant" when that sunlight is actively monetizing hate. This is not a theoretical debate about civil liberties; it is a practical analysis of how recommendation algorithms and subscription models have fundamentally altered the stakes of content moderation.

The Shift from Tool to Publisher

Newton begins by exposing a critical evolution in Substack's business model that many users have missed. The platform started as simple software for email newsletters, a role that allowed it to claim immunity from content moderation responsibilities. "If you wrote something truly awful in Word, after all, no one would blame Microsoft," Newton observes, noting how Substack initially benefited from this distance. However, the company has aggressively pivoted. It now employs algorithmic digests, mutual recommendation systems, and a Twitter-like feed called Notes to surface content.

This shift changes the nature of the platform from a passive tool to an active curator. Newton argues that "the moment a platform begins to recommend content is the moment it can no longer claim to be simple software." This distinction is vital because it strips away the legal and moral shield Substack leadership has relied upon. By actively promoting content, the platform becomes complicit in what it amplifies. The argument lands with particular force because it highlights a specific vulnerability: unlike other social networks where extremists post for clout, Substack's model allows them to post for money. "Extremists on Facebook, Twitter, and YouTube for the most part had been posting for clout," Newton writes, contrasting this with the reality that "on Substack, on the other hand, extremists can post for money." This creates a perverse incentive structure where the platform's revenue share directly funds the spread of dangerous ideologies.

The Failure of the "Sunlight" Defense

The core of Substack's defense rests on a techno-libertarian belief that censorship only empowers bad ideas. Co-founder Hamish McKenzie wrote, "We don't think that censorship (including through demonetizing publications) makes the problem go away — in fact, it makes it worse." Newton acknowledges that this perspective is "reasonable for many or even most circumstances," yet he demonstrates why it fails when applied to the specific context of Holocaust denial and Nazi advocacy.

The piece draws a sharp parallel to Facebook's history. For years, Facebook permitted Holocaust denial, citing free speech principles, until the company reversed course in 2020. Newton notes that Mark Zuckerberg eventually admitted, "I've struggled with the tension between standing for free expression and the harm caused by minimizing or denying the horror of the Holocaust." The shift occurred not because of a change in philosophy, but because the social cost became too high as memories faded and hate crimes surged. Newton points out that "until Substack, I was not aware of any major US consumer internet platform that stated it would not remove or even demonetize Nazi accounts." This isolation suggests that Substack is not leading a new frontier of free speech, but rather clinging to an outdated stance that the rest of the industry has already abandoned.

Critics might argue that drawing a line at Nazi ideology sets a slippery slope where platforms could eventually ban other unpopular political views. However, Newton counters that the Holocaust represents a unique historical atrocity where "there remains broad agreement that the slaughter of 6 million Jews during the Holocaust was an atrocity." The argument is that while platforms must navigate complex gray areas, there are bright red lines where the harm of allowing speech outweighs the principle of free expression.

If it won't remove the Nazis, why should we expect the platform to remove any other harm?

The Human Cost of Algorithmic Neutrality

The commentary takes a darker turn when examining the real-world consequences of Substack's inaction. Newton reminds readers that "turning a blind eye to recommended content almost always comes back to bite a platform." He cites the historical trajectory of Alex Jones, QAnon, and the anti-vaccine movement, all of which were amplified by recommendation engines on other platforms before being curbed. The danger is not just the existence of these voices, but the algorithmic acceleration of their reach.

The financial implication is immediate and personal. Newton reports that "dozens of paid subscribers to Platformer have canceled their memberships" in direct response to Substack's policy. One reader's sentiment captures the moral calculus: "I don't want to fund Nazis. I'm disturbed by a Substack leadership that looks at openly pro-Nazi content and says, 'We won't de-platform you. In fact, we'll monetize you.'" This is a powerful indictment of the platform's business model. It suggests that the "hands-off approach" is not just ethically questionable but economically unsustainable when the user base includes professionals in trust and safety who view the platform's stance as a spurning of their life's work.

Newton's analysis of the industry landscape reveals that other platforms have developed defenses against this exploitation, such as banning dangerous organizations or restricting their monetization. Substack's refusal to adopt similar measures leaves it uniquely exposed. "Every platform hosts its share of racists, white nationalists, and other noxious personalities," Newton concedes, but argues that "there ought to be ways to see them less; to recommend them less; to fund them less." The failure to implement these basic safeguards is framed not as a principled stand, but as a failure of governance.

Bottom Line

Newton's strongest argument is the dismantling of the "infrastructure" defense; once a platform curates and recommends content, it bears responsibility for the ecosystem it builds. The piece's greatest vulnerability is the potential for this stance to be co-opted by bad-faith actors seeking to ban any dissenting political view, though Newton carefully limits the scope to universally condemned hate speech. The reader should watch to see if Substack's financial losses from subscriber cancellations force a policy reversal, or if the company doubles down on its libertarian principles at the cost of its reputation and viability.

Sources

Why Substack is at a crossroads

by Casey Newton · Platformer

Most days, this column looks at controversies unfolding on other tech platforms. Today, let’s take a look at the one that hosts this publication: Substack.

On Tuesday, I told subscribers that we are considering leaving the platform based on the company’s recent statement that it would not demonetize or remove openly Nazi accounts. After Jonathan M. Katz’s November article investigating extremism on the platform in The Atlantic, 247 Substack writers published an open letter asking the company to clarify its policies.

A few days later, Substack co-founder Hamish McKenzie responded in a blog post. While the platform would remove publications that are found to make credible threats of violence — a high bar — Substack would otherwise leave them alone, he said. “We don't think that censorship (including through demonetizing publications) makes the problem go away — in fact, it makes it worse,” he wrote. “We believe that supporting individual rights and civil liberties while subjecting ideas to open discourse is the best way to strip bad ideas of their power.”

McKenzie’s perspective — that sunlight is the best disinfectant, and that censorship backfires by making dangerous ideas seem more appealing — is reasonable for many or even most circumstances. It is a point of view that informs policies at many younger, smaller tech platforms, owing both to the techno-libertarian streak that runs through many founders in Silicon Valley and the fact that a hands-off approach to content moderation is easier and less expensive than the alternatives.

There was a time when even Facebook, which has more restrictive policies than Substack does across the board, permitted users to deny the Holocaust. CEO Mark Zuckerberg occasionally cited this policy as evidence of the company’s commitment to free speech, even though it occasionally got him into trouble.

Then, in 2020, Facebook reversed course: going forward, it said, it would remove Holocaust denial from the platform. In doing so, Zuckerberg said, Facebook was seeking to keep pace with the changing times.

Here’s Sheera Frenkel in the New York Times:

In announcing the change, Facebook cited a recent survey that found that nearly a quarter of American adults ages 18 to 39 said they believed the Holocaust either was a myth or was exaggerated, or they weren’t sure whether it happened.

“I’ve struggled with the tension between standing for free expression and the harm caused by minimizing or denying the horror of the Holocaust,” Mr. Zuckerberg wrote in ...