
Substack says it will remove Nazi publications from the platform

Substack's sudden pivot from defending Nazi publications to removing them isn't just a policy tweak; it's a crack in the foundation of the platform's entire identity. Casey Newton exposes how a company built on a promise of near-absolute free speech is now forced to confront the reality that its "decentralized" model has become a pipeline for extremism. This isn't about one newsletter; it's about whether a business model can survive when its core virtue becomes its greatest liability.

The Illusion of Neutrality

Newton begins by dismantling Substack's official narrative. The company claims this isn't a reversal of principles, merely the result of "reconsidering how it interprets its existing policies." As Newton points out, this semantic dance ignores the stark contradiction between the company's actions and its previous rhetoric. Just weeks ago, co-founder Hamish McKenzie argued that censorship, "including through demonetizing publications," only makes the problem worse, insisting that open discourse was the best way to "strip bad ideas of their power."


The shift is abrupt, yet the underlying tension was always there. Newton notes that while Substack won't proactively remove neo-Nazi and far-right content, it will continue to take down material containing "credible threats of physical harm." This creates a confusing standard: the platform says it doesn't like Nazis, yet it won't go looking for them and draws its enforcement line at explicit threats of violence. The company's statement admits, "We sincerely regret how this controversy has affected writers on Substack," a rare moment of contrition that highlights the depth of the internal fracture.

"We don't think that censorship... makes the problem go away. In fact, it makes it worse. We believe that supporting individual rights and civil liberties while subjecting ideas to open discourse is the best way to strip bad ideas of their power."

This quote from McKenzie, cited by Newton, now reads like a relic from a different era of the company. The argument that letting bad ideas fester in the open is safer than removing them has collapsed under the weight of writer departures and public backlash. Critics might argue that the line between "bad ideas" and "credible threats" is often blurry, and that waiting for explicit threats of violence is a dangerous game. Newton suggests the platform is trying to have it both ways: claiming the moral high ground while scrambling to fix a reputation problem.

The Algorithmic Trap

The most damning part of Newton's analysis is how Substack's own infrastructure undermines its free-speech branding. The platform was sold as a haven for writers escaping the engagement-hungry algorithms of social media. Yet Newton reveals that Substack now sends users a "personalized, algorithmically ranked digest" and has launched "Notes," a feature that functions much like Twitter. These tools don't just host fringe content; they actively promote it.

Newton writes, "The site's recommendations and social networking infrastructure is designed to enable individual publications to grow quickly." This is the crux of the problem. A laissez-faire approach works only if users are truly in control. But when an algorithm surfaces extremist content to new audiences, the platform is no longer neutral; it is an amplifier. Newton observes that the company's "outspoken embrace of fringe viewpoints all but ensures that the number of extremist publications on the platform will grow."

The irony is palpable. Substack argued that its subscription model made it different from social networks, yet it adopted the very mechanisms that make social networks toxic. As Newton puts it, "Each week, Substack sends users a personalized, algorithmically ranked digest of posts from writers they don't yet follow — a feature that can help fringe publications build larger audiences and make more money than they would otherwise." This structural flaw means that even if they ban the worst offenders, the system is rigged to create new ones.

"Substack has defended its approach by arguing that it is built differently from social networks, which optimize for engagement rather than subscription revenue."

Newton's framing here is sharp: the defense is no longer credible. The platform has become what it claimed to oppose. A counterargument worth considering is that any recommendation engine will inevitably surface controversial content, and that banning it entirely would stifle legitimate debate. However, Newton's evidence suggests the issue isn't just controversy, but the specific promotion of violent ideologies like the "great replacement theory" and anti-Semitism. The platform's tools are doing the heavy lifting for extremists, regardless of the company's stated intent.

The Unresolved Divide

The piece concludes with a sobering assessment of the future. Substack is removing the flagged Nazi publications, but the fundamental conflict remains. Newton notes that the company is now in a "difficult position" where any move to moderate content risks alienating the writers who joined for its hands-off approach. Conversely, doing nothing risks losing the writers who refuse to share a platform that monetizes extremism.

Newton writes, "In coming days, explicitly Nazi publications on Substack are slated to disappear. But the greater divide within its user base over content moderation will remain." This is the true story: a platform trying to reconcile two incompatible visions of free speech. The company's attempt to appease everyone has left it vulnerable to both sides. As Newton observes, "The next time the company has a content moderation controversy — and it will — expect these tensions to surface again."

The human cost of this policy debate is real. Writers have left, subscribers have cancelled, and the platform's standing has been lastingly damaged by its reputation as a safe harbor for fringe thinkers. Newton's reporting makes it clear that Substack's experiment in radical free speech has hit a wall, and the company is now trying to rebuild its house while it's still on fire.

"The next time the company has a content moderation controversy — and it will — expect these tensions to surface again."

Bottom Line

Newton's piece is a masterclass in connecting corporate policy to structural failure, showing how Substack's algorithmic choices betrayed its free-speech ideals. The strongest part of the argument is the exposure of the platform's own tools as the primary engine of extremism, a nuance often missed in the shouting match over censorship. Its biggest vulnerability is the lack of a clear path forward; the company has removed the symptom but not the disease, leaving the fundamental tension between profit, promotion, and principle completely unresolved.

Sources

Substack says it will remove Nazi publications from the platform

by Casey Newton · Platformer

Substack is removing some publications that express support for Nazis, the company said today. The company said this did not represent a reversal of its previous stance, but rather the result of reconsidering how it interprets its existing policies.

As part of the move, the company is also terminating the accounts of several publications that endorse Nazi ideology and that Platformer flagged to the company for review last week.

The company will not change the text of its content policy, it says, and its new policy interpretation will not include proactively removing content related to neo-Nazis and far-right extremism. But Substack will continue to remove any material that includes “credible threats of physical harm,” it said.

In a statement, Substack’s co-founders told Platformer:

If and when we become aware of other content that violates our guidelines, we will take appropriate action. 

Relatedly, we’ve heard your feedback about Substack’s content moderation approach, and we understand your concerns and those of some other writers on the platform. We sincerely regret how this controversy has affected writers on Substack. 

We appreciate the input from everyone. Writers are the backbone of Substack and we take this feedback very seriously. We are actively working on more reporting tools that can be used to flag content that potentially violates our guidelines, and we will continue working on tools for user moderation so Substack users can set and refine the terms of their own experience on the platform. 

Substack’s statement comes after weeks of controversy related to the company’s mostly laissez-faire approach to content moderation. 

In November, Jonathan M. Katz published an article in The Atlantic titled “Substack Has a Nazi Problem.” In it, he reported that he had identified at least 16 newsletters that depicted overt Nazi symbols, and dozens more devoted to far-right extremism. 

Last month, 247 Substack writers issued an open letter asking the company to clarify its policies. The company responded on December 21, when Substack co-founder Hamish McKenzie published a blog post arguing that "censorship" of Nazi publications would only make extremism worse.

McKenzie also wrote that “we don’t like Nazis either” and said Substack wished “no-one held those views.” But “we don't think that censorship (including through demonetizing publications) makes the problem go away,” he wrote. “In fact, it makes it worse. We believe that supporting individual rights and civil liberties while subjecting ideas to open discourse is the best way to strip ...