The internet has revolutionized how outsiders break into public discourse. A generation ago, an irreverent economist could never have gained influence through traditional gatekeepers. The media worlds of 1991 and 1971 would have made such a path nearly impossible—long slogs through editors who embodied conventional wisdom, with talk radio as the only alternative for independent voices.
But social media changed everything. And not entirely for the better.
The phenomenon of outsiders breaking into discourse with aggression and attention-seeking has gone too far. Social media, far more than the traditional media it replaced, has elevated divisive voices and bad actors. Research by Bor and Petersen (2021) demonstrates that social media draws malignant, status-seeking individuals who use hostility to get attention and power. Across eight studies leveraging cross-national surveys and behavioral experiments with 8,434 participants, they found that hostile political discussions are driven by status-seeking individuals who are drawn to politics and who are equally hostile both online and offline. Online discussions feel more hostile in part because the behavior of such individuals is more visible online than offline.
Spreading hate and divisiveness on social media has become a form of entrepreneurship. As Eugene Wei has written, social media is all about getting social status. Ten thousand followers on X may not sound like a media empire to rival CBS News, but for most people it's more attention than they would otherwise get in their entire life. For malignant individuals who crave status and attention and enjoy spreading fear and hate, social media is a natural platform for their dark dreams.
Viral content tends to spread negativity more than positivity. Research by Knutson et al. (2024) analyzed the sentiment of approximately thirty million posts from 182 U.S. news sources over a decade (2011-2020). Biased news sources on both left and right produced more high-arousal negative affective content than balanced sources, and that content also drew more reposts for biased than for balanced sources. Over the decade, the virality of high-arousal negative affective content increased, particularly in posts about politics.
Brady et al. (2021) found that social media outrage is a self-reinforcing process: social learning amplifies expressions of online moral outrage over time. In two observational studies on Twitter with 7,331 users and 12.7 million total tweets, and two behavioral experiments with 240 participants, positive social feedback for outrage expressions increased the likelihood of future outrage expressions, consistent with principles of reinforcement learning.
Watson et al. (2024) demonstrated that news-related social media posts using negative language are reposted more, rewarding users who produce negative content. Data from four U.S. and UK news sites encompassing 95,282 articles and two social media platforms with 579 million posts show that social media users are 1.91 times more likely to share links to negative news articles, and that users are especially inclined to share negative articles referring to opposing political groups. Negativity's amplifying effect on news dissemination is even larger once the resharing of user posts containing article links is accounted for.
Milli et al. (2024) found that relative to a reverse-chronological baseline, Twitter's engagement-based ranking algorithm amplifies emotionally charged out-group hostile content that users say makes them feel worse about their political out-group. Algorithmic feeds tend to increase political polarization.
The rise of social media created a revolution in political discourse. The old-school monopoly of big newspapers and TV stations—already under strain from the web and increased entry—was overthrown by wannabe influencers using divisiveness, partisanship, tribalism, and negative emotions to get attention and status. These individuals form what we might call the Shouting Class.
The most successful among them include people like Nicholas Fuentes, a literal Hitler supporter who has called for women to be sent to gulags; Candace Owens, a conspiracy theorist and antisemite; and Hasan Piker, who said America deserved the 9/11 attacks. The real damage is done by the vast legions of smaller-time shouters, all dreaming of becoming the next Fuentes or Owens.
Regular people know that social media is ruled by monsters great and small. Polling from 2020 found that Americans think social media has a negative effect on the country, and a more recent poll shows that Americans trust social media less than just about any other institution.
Americans are increasingly getting off social media. But because normal, moderate Americans are leaving first, this cedes the field of influence to the extremists. Research by Törnberg (2025) shows that overall platform use has declined, with the youngest and oldest Americans increasingly abstaining from social media altogether. Facebook, YouTube, and Twitter/X have lost ground while TikTok and Reddit have grown modestly. Across platforms, political posting remains tightly linked to affective polarization as the most partisan users are also the most active. As casual users disengage and polarized partisans remain vocal, the online public sphere grows smaller, sharper, and more ideologically extreme.
This is not the first time new media technologies have opened up opportunities for divisive entrepreneurs to use hate and fear to boost their careers. Consider Charles Coughlin, a right-wing radio host in the 1930s who called for an end to democracy and labeled Hitler a hero. Coughlin's ideas are recognizably similar to those of Fuentes or Tucker Carlson today—he used a new media technology (radio) and constant negativity to break into public consciousness.
Why did the Charles Coughlins give way to the staid, centrist Big Media of the mid-20th century? Monopoly power. Big newspapers gradually built local monopolies that made it hard for upstarts to break in using sensationalism. Limited spectrum availability insulated broadcast TV stations and radio stations from competition. Those gatekeepers inevitably lost power as new technologies allowed new entrants inside the walls.
Cable TV led to talk show hosts like Sean Hannity, Tucker Carlson, and Rachel Maddow. Talk radio led to Rush Limbaugh and Michael Savage. The web led to blogs like the Drudge Report. All these new entrants used divisiveness and negative emotion to break in. Social media just supercharged the process.
Arguably, American society hasn't recovered from the blow that the rise of social media dealt it. Other societies seem to be a little more insulated from social media's deleterious effects due to their greater homogeneity and centralization—but only a bit. The problem is global.
The question now is what can save us from the tyranny of the Shouting Class. Who can be the next Walter Cronkite?
AI offers one potential solution. Anyone who has used X has noticed the "call Grok" feature. If you're a premium subscriber, you can always just tag Elon Musk's favorite LLM and get it to answer questions and deliver relevant facts.
Dan Williams writes that this type of LLM fact-checking will reintroduce expertise and technocratic fact-based analysis back into public discussions. First, unlike human experts, LLMs can rapidly deploy encyclopedic knowledge to answer people's idiosyncratic questions. Their responses can be probed, scrutinized, and questioned without them ever getting tired or frustrated. They won't just tell you that there is no persuasive evidence for a link between vaccines and autism—they can carefully walk you through the kinds of evidence we have and address your specific sources of skepticism. This partly explains why they can be highly persuasive even in correcting conspiratorial beliefs that many assumed were beyond the reach of rational persuasion.
Second, LLMs typically share information politely and respectfully. This not only differs from the performative, gladiatorial character of much debate and discussion on social media platforms but also improves on much communication by human experts. Being human, experts are often biased, partisan, and simply annoying, and when they seek to educate the public, it can be perceived—and sometimes intended—as condescending and rude. In contrast, LLMs deliver expert opinion without such status threats.
There is evidence that this works despite widespread worry that AI will become a machine for confirmation bias—simply telling people what they want to hear. Renault et al. (2026) found that Grok is actually a decent fact-checker using an exhaustive dataset of 1.7 million English-language fact-checking requests made to Grok and Perplexity on X between February and September 2025—the first large-scale empirical analysis of how LLM-based fact-checking operates in the wild.
Across posts rated by both LLM bots, evaluations from Grok and Perplexity agree 52.6% of the time and strongly disagree (one rates a claim as true and the other as false) 13.6% of the time. For a sample of 100 fact-checked posts, 54.5% of Grok bot ratings and 57.7% of Perplexity bot ratings agreed with ratings of human fact-checkers, which is significantly lower than the inter-fact-checker agreement rate of 64.0%. But API-access versions of Grok agreed with human fact-checkers at rates that did not significantly differ from the inter-fact-checker agreement rate.
In a preregistered survey experiment with 1,592 U.S. participants, exposure to LLM fact-checks meaningfully shifts belief accuracy with effect sizes comparable to those observed in studies of professional fact-checking. Although Elon Musk has tirelessly worked to make Grok less woke, Renault et al. find that the AI is more likely to correct Republican posts than Democratic ones. While that doesn't necessarily mean that reality has a liberal bias, it does show that the people who create LLMs have difficulty imparting their political bias to their creations.
Costello et al. (2024) likewise find that conversations with an AI durably reduce belief in conspiracy theories.
Smith is hopeful that LLMs will become fact-checking machines and dispensers of expertise-on-demand. But he thinks there's a far more important reason why they could recapture our political discourse from the Shouting Class—because of the way they're trained, LLMs will be a force for homogenization and moderation of public discourse.