Noah Smith flips the script on the usual AI doomsday narrative, arguing that artificial intelligence might be the only force capable of silencing the toxic "Shouting Class" that has hijacked our public discourse. While others fear machines will amplify our worst impulses, Smith presents a startling case that Large Language Models could act as the modern Walter Cronkite—delivering reason, moderation, and fact-checking at a scale no human journalist ever could.
The Rise of the Shouting Class
Smith begins by admitting his own past as a "snarky" internet provocateur, noting that in the old media era, such a career path would have been impossible. "In the media world of 1971, forget about it — I would have zero chance of breaking in to a discourse dominated by broadcast TV and big newspapers," he writes. This admission sets the stage for his core thesis: the barrier to entry has collapsed, and the result is a flood of divisive voices. He argues that social media has turned hostility into a business model. "Basically, spreading hate and divisiveness on social media is a form of entrepreneurship," Smith observes, citing research that shows status-seeking individuals are drawn to politics specifically to spread fear.
The author marshals heavy artillery in the form of academic studies to prove that negativity is not just a byproduct of the internet, but its engine. He points out that biased news sources produce more "high arousal negative affective content" than balanced ones, and that this content gets reposted more often. "Together, these findings reveal that high arousal negative affective content may promote the spread of news from biased sources," he notes. The algorithms, he argues, are not neutral; they actively reward outrage. "Twitter's engagement-based ranking algorithm amplifies emotionally charged, out-group hostile content that users say makes them feel worse about their political out-group," Smith writes, highlighting how the technology itself is designed to deepen polarization.
"The old-school monopoly of big newspapers and TV stations — already under strain from the Web and from increased entry and competition — was overthrown by a giant mob of wannabe influencers, using divisiveness, partisanship, ideology, tribalism and negative emotions to get attention and status."
Smith draws a sharp historical parallel to the 1930s radio host Charles Coughlin, who used the new technology of his day to call for an end to democracy and label Hitler a "hero." Just as Coughlin was eventually silenced by the gatekeepers of the mid-20th century, Smith suggests we are due for a similar reset. However, he warns that the "Shouting Class" includes figures like Nicholas Fuentes and Candace Owens, whose influence is now amplified by platforms with no natural ceiling on reach. A counterargument worth considering is whether any centralized authority, even an AI, can effectively moderate without becoming a tool for censorship or political bias, a risk Smith acknowledges but hopes to mitigate through the nature of the technology itself.
The AI as Digital Cronkite
Having established the problem, Smith pivots to his optimistic solution: AI as the great moderator. He dismisses the idea that platform owners like Elon Musk can fix this, noting that even Bluesky has struggled to halt its descent into madness. Instead, he looks to Large Language Models (LLMs) to reintroduce expertise. "First, unlike human experts, [LLMs] can rapidly deploy encyclopaedic knowledge to answer people's idiosyncratic questions," Smith writes. He argues that these bots can patiently walk users through evidence without the condescension or fatigue that often plagues human experts.
The evidence Smith cites is surprisingly robust. He references a study by Renault et al. (2026), which found that LLMs like Grok and Perplexity can shift belief accuracy with effect sizes comparable to professional fact-checkers. "In fact, although Elon has tirelessly worked to make Grok less 'woke', Renault et al. find that the AI is more likely to correct Republican posts than Democratic ones," Smith points out, suggesting that the technology resists political capture better than humans do. He also notes that talking to AI reduces belief in conspiracy theories. "Because of the way they're trained, LLMs will be a force for homogenization and moderation," he concludes, implying that the very architecture of these models favors reason over rage.
Critics might argue that relying on algorithms to define "reasonableness" is a dangerous gamble, potentially creating a new kind of algorithmic tyranny. Yet, Smith's argument gains traction by contrasting the chaotic, status-driven nature of human influencers with the calm, data-driven responses of a machine. The comparison to the "Fairness Doctrine" era is implicit here; just as that policy attempted to balance viewpoints through regulation, Smith hopes AI can achieve balance through superior information processing.
"Social media is all about getting social status. 10,000 followers on X may not sound like a media empire to rival CBS News, but for most people it's more attention than they would otherwise get in their entire life."
Bottom Line
Smith's most compelling contribution is reframing AI not as a disruptor of truth, but as the only viable guardian of it in an era of infinite content. The argument's strength lies in its synthesis of behavioral psychology and technical capability, though it hinges on the assumption that AI can remain neutral and that users will actually listen to a machine over a charismatic human shouter. The biggest vulnerability remains the human element: if the audience prefers the adrenaline of outrage, even the most reasonable AI may struggle to compete.