I love AI. Why doesn't everyone?

Noah Smith challenges a pervasive cultural anxiety: why does the United States, a nation historically defined by its embrace of disruptive innovation, stand almost alone in its terror of artificial intelligence? While global peers in Asia and Europe see a tool for advancement, the American public sees a threat, a divergence Smith argues is rooted less in factual risk and more in a unique cocktail of political polarization and media-fueled paranoia.

The American Anomaly

Smith opens by dismantling the notion that technological fear is a rational response to novelty. He notes that history is littered with technologies that caused genuine harm—farming led to overpopulation, industry to pollution, and nuclear power to existential weapons—yet humanity rarely wishes them undone. "Some people romanticize hunter-gatherers and medieval peasants, but I don't see many of them rushing to go live those lifestyles," Smith writes. This historical perspective is crucial; it frames the current backlash not as a prudent caution, but as a departure from the long-term trajectory of human progress. The author suggests that while new technologies are inherently risky, the specific intensity of American hostility toward AI is an outlier.

The data supports this observation of a unique American sentiment. Smith points to a 2024 Ipsos poll revealing that "no country surveyed was both more nervous and less excited about AI than the United States." This stands in stark contrast to nations like South Korea, Singapore, and India, where the technology is viewed with optimism. The author posits a provocative question: "Do we know something they don't? Or are we just biased by some combination of political unrest, social division, wealthy entitlement, and disconnection from physical industry?" This framing shifts the debate from the technical capabilities of the machine to the psychological and sociological state of the observer. It suggests that the fear is a symptom of domestic instability rather than a rational assessment of the technology itself.

"I always wanted a little robot friend, and now that it's here, I (mostly) love it."

Smith's personal testimony serves as a counter-narrative to the doom-laden headlines. He draws on decades of science fiction tropes, noting that while villains like Skynet exist in the cultural imagination, the dominant narrative has been one of helpful companionship. From C-3PO to Commander Data, "intelligent robots and computers are consistently portrayed as helpful assistants, allies, and even friends." Smith argues that this cultural conditioning should have prepared the public for a positive reception, yet the reality is a sharp rejection. He describes his own daily interactions with generative AI as a realization of that childhood dream, a tool that helps him navigate everything from water filter maintenance to complex sociological queries. "This is just the beginning of what AI can do, of course. It's possibly the most general-purpose technology ever invented," he asserts. The disconnect between this lived utility and the public's visceral fear is the central tension of the piece.

Debunking the Water Myth

One of the most significant portions of Smith's commentary is a forensic dismantling of a specific, widely circulated fear: that AI data centers are causing a water crisis. He identifies this as a prime example of "nonsense" in the anti-AI canon, citing a recent Rolling Stone article that claimed data centers were "supercharging a water crisis" in the American West. Smith argues that this narrative has become "standard canon" among progressives, yet it lacks empirical grounding.

To counter this, Smith leans heavily on the research of Andy Masley, who conducted a rigorous analysis of water usage data. The argument is precise: most water associated with AI is not consumed but withdrawn and returned to the source. Smith paraphrases Masley's findings, noting that "the vast majority (maybe 90%) is withdrawn, freshwater (not potable) that is indirectly (offsite) used non-consumptively in power plants." The actual consumption is negligible. Smith highlights a striking comparison: "Only 0.04% of America's freshwater in 2023 was consumed inside data centers themselves. This is 3% of the water consumed by the American golf industry." This comparison is effective because it contextualizes the abstract fear of "AI water usage" against a tangible, non-essential human activity that consumes vastly more resources.

Smith goes further, criticizing the sourcing behind the misinformation: Karen Hao's book Empire of AI, he notes, contained "huge math errors" regarding water consumption. Within just 20 pages, the book claims a data center uses 1,000 times as much water as a city of 88,000 people, when by Masley's accounting it actually uses about 0.22 times as much. This level of error, Smith suggests, reveals a deeper issue: the anti-AI movement is often driven by a desire to find a villain rather than a commitment to factual accuracy. When critics like Timnit Gebru attacked Masley's debunking, they offered no substantive rebuttal, instead suggesting he "speak to activists." Smith interprets this as an admission that the data simply does not support the alarmist narrative.

Critics might argue that focusing on the current water statistics ignores the potential for future scaling issues or the environmental cost of the electricity generation itself, which Smith acknowledges is a related but distinct problem. However, Smith's point remains that the specific narrative of data centers draining local reservoirs is currently a myth used to stoke fear.

"The instinctive negativity with which AI is being met by a large segment of the American public feels like an unreasonable reaction to me."

Critical Assessment

Smith's most compelling contribution is his insistence that the American fear of AI is a cultural and political phenomenon rather than a rational response to technical risk. By grounding his argument in historical precedent and rigorous data debunking, he exposes the fragility of the current anti-AI consensus. The piece's greatest strength is its refusal to dismiss concerns as irrational without first showing they are factually baseless, particularly regarding the water crisis. The argument's vulnerability, however, lies in treating long-run optimism as a sufficient answer to immediate distributional harms: the water myth may be debunked, but legitimate fears of job displacement and the erosion of critical thinking remain unresolved. As policymakers and the private sector push forward, the gap between the technology's utility and the public's trust will likely remain the defining challenge of the next decade.

Bottom Line

Noah Smith effectively argues that American hostility toward AI is a unique cultural anomaly driven by misinformation and political polarization rather than objective risk. While his debunking of the water crisis narrative is robust and necessary, the piece leaves open the critical question of how to address the very real economic disruptions that the technology will inevitably cause.

Deep Dives

Explore these related deep dives:

  • Technology acceptance model

    The article centrally explores why Americans are more skeptical of AI than other nations. This psychological framework explains how people form attitudes toward new technologies based on perceived usefulness and ease of use, providing scientific context for the cross-cultural differences the author observes.

  • Luddite

    The article discusses historical patterns of technology resistance and fear of job displacement. The Luddite movement provides crucial historical context for understanding anti-technology sentiment and how such movements have played out in the past.

  • Three Laws of Robotics

    The author references Isaac Asimov's Robot series and the cultural portrayal of friendly AI assistants like Commander Data. Asimov's Three Laws fundamentally shaped how Western culture imagines AI safety and the human-robot relationship, directly relevant to the article's discussion of AI in media.

Sources

I love AI. Why doesn't everyone?

by Noah Smith · Noahpinion

New technologies almost always create lots of problems and challenges for our society. The invention of farming caused local overpopulation. Industrial technology caused pollution. Nuclear technology enabled superweapons capable of destroying civilization. New media technologies arguably cause social unrest and turmoil whenever they’re introduced.

And yet how many of these technologies can you honestly say you wish were never invented? Some people romanticize hunter-gatherers and medieval peasants, but I don’t see many of them rushing to go live those lifestyles. I myself buy into the argument that smartphone-enabled social media is largely responsible for a variety of modern social ills, but I’ve always maintained that eventually, our social institutions will evolve in ways that minimize the harms and enhance the benefits. In general, when we look at the past, we understand that technology has almost always made things better for humanity, especially over the long haul.

But when we think about the technologies now being invented, we often forget this lesson — or at least, many of us do. In the U.S., there have recently been movements against mRNA vaccines, electric cars, self-driving cars, smartphones, social media, nuclear power, and solar and wind power, with varying degrees of success.

The difference between our views of old and new technologies isn’t necessarily irrational. Old technologies present less risk — we basically know what effect they’ll have on society as a whole, and on our own personal economic opportunities. New technologies are disruptive in ways we can’t predict, and it makes sense to be worried about the risk that we might personally end up on the losing end of the upcoming social and economic changes.

But that still doesn’t explain changes in our attitudes toward technology over time. Americans largely embraced the internet, the computer, the TV, air travel, the automobile, and industrial automation. And risk doesn’t explain all of the differences in attitudes among countries.

In the U.S., few technologies have been on the receiving end of as much popular fear and hatred as generative AI. Although policymakers have remained staunchly in favor of the technology — probably because it’s supporting the stock market and the economy — regular Americans of both parties tend to say they’re more concerned than excited, with an especially rapid increase in negative sentiment among progressives.

There is plenty of trepidation about AI around the world, but America stands out. A 2024 Ipsos poll found that ...