While headlines screamed about an AI-driven election apocalypse, the data tells a far more mundane yet critical story. Arvind Narayanan dismantles the prevailing panic by showing that the vast majority of AI use in the 2024 elections was either non-deceptive or easily replicable with cheap, old-school editing tools. This analysis is essential for busy leaders who need to distinguish technological hype from the actual structural rot in our information ecosystems.
The Myth of the AI Apocalypse
Narayanan begins by challenging the alarmist narrative that dominated the news cycle. He notes that the World Economic Forum claimed "misinformation and disinformation is the most severe short-term risk the world faces" and that "AI is amplifying manipulated and distorted information that could destabilize societies." Yet when Narayanan and his team analyzed every known instance of AI use in the 2024 global elections, the predicted tidal wave never materialized. Instead of a new era of deception, they found that half of the documented AI-generated content involved no intent to deceive at all.
The author categorizes these instances with surgical precision. In many cases, AI was used transparently to translate speeches for non-native speakers or to help candidates with laryngitis communicate with voters. In Venezuela, journalists used AI avatars to avoid government retribution. Narayanan writes, "We even found examples of deepfakes that we think helped improve the information environment." This reframing is powerful because it forces us to see AI not as a monolithic villain, but as a tool whose impact depends entirely on the intent of the user. The panic, he suggests, is a distraction from the real, long-standing issues of media literacy and institutional trust.
"The alarm about AI might be comforting because it positions concerns about the information environment as a discrete problem with a discrete solution."
This observation cuts deep. It suggests that our fixation on AI is a psychological coping mechanism, allowing us to believe that if we just regulate the technology, the problem of lies will vanish. Narayanan argues that this is a fundamental error: repairing a broken information environment requires structural and institutional change, not just curbs on a specific type of software.
The Rise of the "Cheap Fake"
Perhaps the most striking finding is that sophisticated AI is not required to manipulate voters. For every case involving deceptive intent, Narayanan's team estimated what it would have cost to produce similar content without AI. The result? "In each case, the cost of creating similar content without AI was modest—no more than a few hundred dollars." AI has not lowered the barrier to entry for high-quality disinformation; it was already low.
The article highlights the prevalence of "cheap fakes": slowed-down videos, jump cuts that change meaning, and photoshopped images. In the U.S., a video of Vice President Kamala Harris was altered to make her appear to slur her words, a trick achievable with basic editing software, as the sketch below illustrates. Similarly, in India, a jump cut removed the word "not" from a candidate's statement, reversing the meaning of his sentence. Narayanan points out that the News Literacy Project found "cheap fakes were used seven times more often than AI-generated content" in the U.S. election.
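To make concrete just how low that barrier sits, here is a minimal sketch of the slowdown trick, assuming ffmpeg is installed; the filenames and the exact speed factor are hypothetical illustrations, not details from Narayanan's analysis. Stretching the video timestamps while slowing the audio by the matching tempo is all it takes to make speech sound slurred.

```python
import subprocess

# Minimal sketch of the "cheap fake" slowdown described above.
# Assumes ffmpeg is installed; filenames and speed factor are hypothetical.
subprocess.run([
    "ffmpeg", "-i", "speech.mp4",
    # Play video at 80% speed (timestamps stretched by 1.25x) and slow the
    # audio to the matching tempo, which makes speech sound slurred.
    "-filter_complex", "[0:v]setpts=1.25*PTS[v];[0:a]atempo=0.8[a]",
    "-map", "[v]", "-map", "[a]",
    "slowed.mp4",
], check=True)
```

No generative model, no budget, no expertise: a one-line filter does the kind of work that headlines attributed to frontier AI.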
This evidence holds up because it aligns with historical patterns of propaganda: the technology changes, but the human psychology of manipulation remains constant. A counterargument worth considering is that while cheap fakes are common, AI could eventually lower the cost of producing disinformation at scale so drastically that it overwhelms fact-checkers. However, Narayanan counters that even the most high-profile AI incidents, like the Biden robocall in New Hampshire, could have been executed by hiring voice actors. The FCC fined the perpetrator $6 million, evidence that existing legal frameworks, not new AI bans, are the real deterrent.
The Demand Side of Lies
The core of Narayanan's argument shifts from supply to demand. He posits that misinformation spreads not because it is technologically advanced, but because it feeds into pre-existing worldviews. "Looking at the demand for misinformation tells us that as long as people have certain worldviews, they will seek out and find information consistent with those views," he writes. This is a crucial distinction. Successful misinformation operations target in-group members who are already predisposed to believe the message.
The author argues that "sophisticated tools aren't needed for misinformation to be effective in this context." Whether it is a deepfake or a grainy video-game clip, the content only needs to be good enough to confirm a bias. This explains why low-quality content often spreads faster than high-quality fact-checks: demand is already saturated, and new supply merely competes for the same eyeballs. Narayanan observes that in polarized countries, "AI misinformation had much less impact than feared" because voters were already deeply entrenched in their respective information bubbles.
"Increasing the supply of misinformation does not meaningfully change the dynamics of the demand for misinformation since the increased supply is competing for the same eyeballs."
This perspective challenges the industry's obsession with content moderation and algorithmic tweaks. If the problem is the audience's demand for confirmation, then simply removing the supply of AI-generated lies will not stop the flow of deception. Critics might argue that this view underestimates the potential for AI to create highly personalized, persuasive content that could break through these bubbles. Yet, the 2024 data suggests that even with the tools available, the barrier to changing a voter's mind remains high, regardless of the medium.
A Century of Panic
Narayanan closes by placing the current AI panic in a long historical context. He notes that "concerns about using new technology to create false information go back over a century," citing 19th-century fears about photo retouching and a 1912 bill that would have criminalized editing photos without consent. The pattern repeats with every new tool: GPT-2 in 2019, LLaMA in 2023, and now everyday smartphone editing apps. In each case, the predicted wave of mass persuasion failed to materialize.
The author writes, "Thinking of political misinformation as a technological (or AI) problem is appealing because it..." The sentence is truncated in the excerpt, but the implication is clear: framing misinformation as a technology problem makes a complex societal issue feel simple. This historical lens is vital. It reminds us that fear of new technology is often a proxy for deeper anxieties about social change. The real danger isn't the tool; it's our refusal to address the underlying polarization and institutional decay that make misinformation effective.
Bottom Line
Narayanan's most compelling contribution is the shift from a supply-side to a demand-side analysis of misinformation, arguing persuasively that AI is a symptom, not the cause, of our fractured information environment. The argument's greatest vulnerability is the assumption that human psychology will hold steady against increasingly sophisticated, hyper-personalized AI agents. Even so, the evidence from 2024 strongly suggests that our focus on technology distracts from the harder, necessary work of rebuilding trust and addressing the root causes of polarization.