
We looked at 78 election deepfakes. Political misinformation is not an AI problem

While headlines screamed about an AI-driven election apocalypse, the data tells a far more mundane, yet critical story. Arvind Narayanan dismantles the prevailing panic by revealing that the vast majority of AI use in 2024 elections was either non-deceptive or easily replicable with cheap, old-school editing tools. This analysis is essential for busy leaders who need to distinguish between technological hype and the actual, structural rot in our information ecosystems.

The Myth of the AI Apocalypse

Narayanan begins by challenging the alarmist narrative that dominated the news cycle. He notes that the World Economic Forum claimed "misinformation and disinformation is the most severe short-term risk the world faces" and that "AI is amplifying manipulated and distorted information that could destabilize societies." Yet, when Narayanan and his team analyzed every known instance of AI use in the 2024 global elections, the predicted tidal wave never materialized. Instead of a new era of deception, they found a landscape where half of the AI-generated content had no intent to deceive at all.

The author categorizes these instances with surgical precision. In many cases, AI was used transparently to translate speeches for non-native speakers or to help candidates with laryngitis communicate with voters. In Venezuela, journalists utilized AI avatars to avoid government retribution. Narayanan writes, "We even found examples of deepfakes that we think helped improve the information environment." This reframing is powerful because it forces us to see AI not as a monolithic villain, but as a tool whose impact depends entirely on the intent of the user. The panic, he suggests, is a distraction from the real, long-standing issues of media literacy and institutional trust.

"The alarm about AI might be comforting because it positions concerns about the information environment as a discrete problem with a discrete solution."

This observation cuts deep. It suggests that our fixation on AI is a psychological coping mechanism, allowing us to believe that if we just regulate the technology, the problem of lies will vanish. Narayanan argues that this is a fundamental error. The solution to a broken information environment requires structural and institutional changes, not just curbing a specific type of software.

The Rise of the "Cheap Fake"

Perhaps the most striking finding is that sophisticated AI is not required to manipulate voters. For every case involving deceptive intent, Narayanan's team estimated what it would have cost to produce similar content without AI. The result? "In each case, the cost of creating similar content without AI was modest—no more than a few hundred dollars." AI has not lowered the barrier to entry for high-quality disinformation; it was already low.

The article highlights the prevalence of "cheap fakes"—videos slowed down, jump cuts that change meaning, or photoshopped images. In the U.S., a video of Vice President Kamala Harris was altered to make her appear slurring her words, a trick achievable with basic editing software. Similarly, in India, a jump cut removed the word "not" from a candidate's statement, flipping the entire meaning of his sentence. Narayanan points out that the News Literacy Project found "cheap fakes were used seven times more often than AI-generated content" in the U.S. election.

This evidence holds up because it aligns with historical patterns of propaganda. The technology changes, but the human psychology of manipulation remains constant. A counterargument worth considering is that while cheap fakes are common, AI could eventually lower the cost of scale so drastically that it overwhelms fact-checkers. However, Narayanan counters this by noting that even the most high-profile AI incidents, like the Biden robocall in New Hampshire, could have been executed by hiring voice actors. The FCC fined the perpetrator $6 million, a reminder that existing legal frameworks, not new AI bans, already supply the deterrent.

The Demand Side of Lies

The core of Narayanan's argument shifts from supply to demand. He posits that misinformation spreads not because it is technologically advanced, but because it feeds into pre-existing worldviews. "Looking at the demand for misinformation tells us that as long as people have certain worldviews, they will seek out and find information consistent with those views," he writes. This is a crucial distinction. Successful misinformation operations target in-group members who are already predisposed to believe the message.

The author argues that "sophisticated tools aren't needed for misinformation to be effective in this context." Whether it is a deepfake or a grainy video game clip, the content only needs to be good enough to confirm a bias. This explains why low-quality content often spreads faster than high-quality fact-checks. The demand is saturated; the supply is just competing for the same eyeballs. Narayanan observes that in polarized countries, "AI misinformation had much less impact than feared" because electorates were already deeply entrenched in their respective information bubbles.

"Increasing the supply of misinformation does not meaningfully change the dynamics of the demand for misinformation since the increased supply is competing for the same eyeballs."

This perspective challenges the industry's obsession with content moderation and algorithmic tweaks. If the problem is the audience's demand for confirmation, then simply removing the supply of AI-generated lies will not stop the flow of deception. Critics might argue that this view underestimates the potential for AI to create highly personalized, persuasive content that could break through these bubbles. Yet, the 2024 data suggests that even with the tools available, the barrier to changing a voter's mind remains high, regardless of the medium.

A Century of Panic

Narayanan closes by placing the current AI panic in a long historical context. He notes that "concerns about using new technology to create false information go back over a century," citing 19th-century fears about photo retouching and a 1912 bill that would have criminalized editing photos without consent. The pattern repeats with every new tool: GPT-2 in 2019, LLaMA in 2023, and now smartphone editing tools. In each case, the predicted deluge of voter persuasion failed to materialize.

The author observes that thinking of political misinformation as a technological (or AI) problem is appealing because it reduces a complex societal issue to one with an apparent technical fix. This historical lens is vital. It reminds us that the fear of technology is often a proxy for deeper anxieties about social change. The real danger isn't the tool; it's our refusal to address the underlying polarization and institutional decay that make misinformation effective.

Bottom Line

Narayanan's most compelling contribution is the shift from a supply-side to a demand-side analysis of misinformation, which recasts AI as a symptom, not the cause, of our fractured information environment. The argument's greatest vulnerability is its assumption that human psychology will remain static against increasingly sophisticated, hyper-personalized AI agents. Still, the evidence from 2024 strongly suggests that our focus on technology is a distraction from the harder, necessary work of rebuilding trust and addressing the root causes of polarization.

Sources

We looked at 78 election deepfakes. Political misinformation is not an AI problem

by Arvind Narayanan · AI Snake Oil
