Restricting speech by purportedly protecting children

This piece delivers a startling reality check: the most potent arguments for censoring speech today come not from authoritarians but from well-meaning democracies desperate to protect children. While the moral panic feels urgent, the article exposes a recurring pattern in which vague fears of online harm are invoked to justify sweeping speech restrictions, a strategy courts have repeatedly rejected, from Des Moines in 1969 to Utah today.

The Historical Pattern of Moral Panic

The article opens by dismantling the assumption that child-safety rhetoric is a uniquely modern or American phenomenon. It correctly identifies that governments have long used the protection of minors as a pretext for broader censorship. Reason reports, "Censoring speech in the name of protecting children is not a terribly new phenomenon, especially in authoritarian countries." The piece cites the 2012 Russian law that allowed the media censorship agency to blacklist websites without court approval, noting that civil liberties groups correctly predicted these powers would be used to curb far more speech than just content harmful to children.

This historical context is crucial because it reframes current US efforts not as isolated policy experiments, but as part of a global drift toward authoritarian overreach. The argument gains weight when it traces the Supreme Court's consistent resistance to this logic over decades. The editors highlight the 1997 ruling on the Communications Decency Act, where the court invalidated criminal penalties for "indecent" content, writing that "the interest in encouraging freedom of expression in a democratic society outweighs any theoretical but unproven benefit of censorship." This precedent is particularly relevant given the companion deep dive on the Communications Decency Act, reminding us that the legal system has repeatedly rejected the idea that the government can act as a universal filter for children.

Critics might argue that the digital landscape has changed fundamentally since 1997, making old precedents less applicable to algorithmic feeds. However, the piece effectively counters this by showing that the core constitutional principle remains unchanged: the government cannot restrict ideas simply because it fears they might be harmful.

The Utah Case Study and the Illusion of Precision

The commentary then pivots to the cutting edge of this conflict: Utah's Minor Protection in Social Media Act. The piece details how the state legislature, citing "addictive design features," mandated age assurance systems and forced platforms to disable features like autoplay for minors. The law's definition of a "social media company" is described as a "public website or application" that allows users to interact socially and create public accounts.

Reason argues that the law's fatal flaw is its vagueness and its content-based nature. NetChoice, the trade group suing the state, contends that the statute creates a "fundamental mismatch between the State's putative goals... and the Act's haphazard regulation of certain websites." The piece emphasizes that the law singles out specific platforms while ignoring others that use the exact same means of disseminating speech.

The federal judge who blocked the law, Robert J. Shelby, is central to the article's analysis. His reasoning is quoted directly: "While Defendants present evidence suggesting parental controls are not in widespread use, their evidence does not establish parental tools are deficient. It only demonstrates parents are unaware of parental controls, do not know how to use parental controls, or simply do not care to use parental controls." This is a devastating critique of the state's logic. It shifts the burden from the government to the family unit, suggesting that the state's intervention is not a necessary safety net but a clumsy overreach that ignores the reality of parental agency.

The article also notes a critical weakness in the state's argument: the law "ultimately preserves minors' ability to spend as much time as they want on social media platforms." If the goal is to reduce screen time, the legislation fails its own test. Instead, it burdens speech without solving the underlying behavioral issue. This aligns with the 2011 Supreme Court ruling against banning violent video games, which stated the First Amendment does not give the government "a free-floating power to restrict the ideas to which children may be exposed."

The Global Drift and the Encryption Crisis

The scope of the commentary expands beyond Utah to the federal level and the United Kingdom, illustrating a coordinated global shift. The piece discusses the Kids Online Safety Act (KOSA) in Congress, which would impose a "duty of care" on platforms. Senator Richard Blumenthal is quoted defending the bill as a standard requirement: "Companies in every other industry in America are required to take meaningful steps to prevent users of their products from being hurt, and this simply extends that same kind of responsibility to social media companies, too."

However, the article immediately pivots to the free-speech consequences of this approach. Civil liberties groups, including the ACLU and the Electronic Frontier Foundation (EFF), warned in a July 2024 letter that these requirements would lead to "aggressive filtering of content by companies preventing access to important, First Amendment–protected, educational and even lifesaving content." This is the crux of the problem: when platforms face liability for any potential harm, they will inevitably over-censor to protect themselves, silencing legitimate discourse on sensitive topics like eating disorders or suicide.

The situation is even more dire in the UK with the Online Safety Act. The piece highlights the threat to end-to-end encryption, noting that the law allows the regulator, Ofcom, to compel platforms to search for illegal content. The EFF is quoted stating, "Such a backdoor scanning system can and will be exploited by bad actors. It will also produce false positives, leading to false accusations of child abuse that will have to be resolved." This connects directly to the companion topic of age verification systems, where the requirement to verify age via government documents or biometric data poses a "serious threat to the privacy of UK internet users."

The article concludes by pointing to the UK Secretary of State Peter Kyle's November 2024 policy paper, which calls for robust countermeasures against "disinformation." The piece notes the dangerous vagueness here: Kyle did not define disinformation or explain who determines it. This lack of specificity is the ultimate danger. As the editors note, these laws "empower large bureaucracies to claim sweeping mandates to decide what sorts of content are too harmful to be on the internet."

Bottom Line

The strongest part of this argument is its relentless focus on the mechanism of censorship: how vague definitions and unproven harms are used to justify sweeping restrictions that inevitably target protected speech. The piece's biggest vulnerability is its reliance on the assumption that parents are always the best arbiters of content, which may overlook the reality of digital addiction in children who lack parental oversight. Readers should watch for the upcoming Tenth Circuit ruling on the Utah law, which will likely set the tone for how federal courts handle the wave of similar state and federal legislation currently in motion.

Deep Dives

Explore these related deep dives:

  • The Age of Surveillance Capitalism by Shoshana Zuboff

    How tech companies turned human experience into raw material for prediction and control.

  • Age verification system

    This technical mechanism is the specific enforcement tool mandated by the Utah law, illustrating how vague "protection" mandates force platforms to deploy invasive identity verification systems that often fail to distinguish between minors and adults.

  • Communications Decency Act

    The article cites this 1996 law as a historical precedent where the Supreme Court struck down broad censorship under the guise of protecting children, providing the legal blueprint for why current social media regulations face similar constitutional hurdles.

  • Online Safety Act 2023

    Referenced as a Western democratic counterpart to the Utah legislation, this law demonstrates how the 'protect children' justification is being used globally to impose a duty of care that effectively mandates proactive content monitoring and algorithmic censorship.

Sources

Restricting speech by purportedly protecting children

by Various · Reason

While governments around the world have imposed speech restrictions to fight misinformation and hate speech, they also have attempted to curb free speech for a less controversial reason: protecting children. But many of these restrictions stem from vague, unspecified, or speculative harms and corral wide swaths of speech that do not harm children. Censoring speech in the name of protecting children is not a terribly new phenomenon, especially in authoritarian countries. In 2012, for instance, Russia's parliament passed a law allowing the country's media censorship agency to unilaterally blacklist websites and take them offline, without any court approval. The lawmakers' justification was protecting children from online harm, but civil liberties groups correctly predicted that the government would use these powers to curb far more speech. In recent years, such efforts have moved beyond authoritarian countries and taken hold in Western democracies.

The United States has seen repeated attempts to curb speech in the name of saving the children. Although they have failed, governments have continued to try over many decades. In 1969, the US Supreme Court struck down the Des Moines, Iowa, school district's ban on black armbands worn to protest the Vietnam War, writing that "state-operated schools may not be enclaves of totalitarianism." In 1997, the Supreme Court invalidated much of the Communications Decency Act, which criminalized the online transmission of "indecent" content to minors, writing that the "interest in encouraging freedom of expression in a democratic society outweighs any theoretical but unproven benefit of censorship." And in 2011, the court struck down a California law that banned sales of "violent video games" to minors, writing that the First Amendment does not give the government "a free-floating power to restrict the ideas to which children may be exposed."

The moral panic did not stop with those cases. Across the country, states are scrambling to address the harms associated with minors' use of social media. Many high-profile commentators and politicians have criticized social media for harming the mental health of teenagers, though there is substantial debate as to whether they have presented sufficient evidence of causation. In May 2023, then-Surgeon General Vivek Murthy issued an advisory on social media and youths' mental health: "The most common question parents ask me is, 'Is social media safe for my kids?' The answer is that we don't have enough evidence to say it's safe, and in fact, there is growing evidence that social ...