
AI Slopaganda in the KMT Election

This piece cuts through the noise of the 2025 Kuomintang chairmanship election to reveal a disturbing new reality: the weaponization of artificial intelligence not just to lie, but to flood the zone with low-effort, high-volume propaganda. Jordan Schneider argues that the true threat isn't the quality of deepfakes, but their sheer scale and the strategic patience of their creators, who build audiences on apolitical content before pivoting to election interference. For anyone tracking the future of Indo-Pacific stability, this analysis exposes how easily the digital public square can be hijacked by actors who face zero consequences for their actions.

The Strategy of Scale Over Quality

Schneider dismantles the assumption that sophisticated disinformation requires Hollywood-level production values. Instead, the campaign behind the victory of Cheng Li-wun relied on what the author terms "slopaganda"—mass-produced, low-fidelity content designed to game algorithms rather than convince critical thinkers. "False flag operations, in which an actor attempts to actually convince the public that a fake event is real, are a much smaller piece of the pie," Schneider writes. "The main function of AI generation for propaganda has not been to increase quality or to convince viewers that the videos are real, but to increase scale by speeding up the creation process."


This distinction is crucial. The operation didn't try to fool experts; it tried to drown out nuance. The author details how channels spent months posting about Taiwanese natural beauty or heartwarming stories of strangers helping foreigners, only to abruptly switch to political advocacy. One channel, named Firefly, spent three months building trust before deploying a deepfaked woman to praise Cheng Li-wun. "The accounts build up viewership and favor with the algorithm before eventually posting content about Taiwanese politics," Schneider notes. This tactic exploits the very mechanisms social media platforms use to recommend content, turning engagement metrics into a weapon.

Critics might argue that attributing the entire outcome to these videos ignores the genuine political appeal of Cheng Li-wun, who campaigned effectively and secured support from established figures like Ma Ying-jeou. However, Schneider counters this by highlighting the timing: the surge of AI-generated support began precisely when Cheng was considered an underdog against the favored Hau Lung-pin. "She was not popular in the KMT," analyst Jerry Yu is quoted as saying, arguing that if the videos were not making a "big difference," Cheng Li-wun would not have won.

"Influence campaigns target social media users who keep eating the slop no matter what it's filled with — like pigs to the slaughter."

The Mechanics of Coordinated Inauthentic Behavior

The evidence of coordination is stark. Schneider points out that the uniformity of these channels—many using simplified Chinese characters and identical deepfaked avatars—suggests a centralized operation originating from the mainland. The timing was too precise to be organic; at least one account released its first deepfake of Cheng on the exact day she announced her candidacy. "At least one account with the same essential style as all of the rest released its first deepfake model Cheng Li-wun video on September 17th, the day that Cheng Li-wun announced her candidacy," Schneider observes. "Just one channel mentioned her before she announced her candidacy... The new mentions of Cheng reached their peak just the week after her announcement."

This level of synchronization transforms the election from a democratic contest into a test of digital infrastructure resilience. The author notes that even after the election, these channels didn't disappear; they simply reverted to posting apolitical content, waiting for the next cycle. "We have an army of accounts that were once Cheng Li-wun keyboard warriors and will almost definitely return to influence an election when needed." This creates a permanent, latent threat to Taiwan's democratic processes, one that can be activated with a few clicks.

The article also touches on the broader context of Taiwan's identity politics. The KMT, historically the party of "Chinese identity," faces a public that increasingly identifies as Taiwanese. Cheng Li-wun's campaign tried to bridge this by advocating a stance of "not kneeling to America" while embracing Chinese heritage, a message that resonated with a specific demographic but alienated others. The deepfake campaign amplified this message to a reach that could not have been achieved organically. As Schneider puts it, "That Cheng's supporters have taken this small threshold and run with it as a mandate for change demonstrates the impact that a psyop can have, even if it only moves the result by a fraction of a percent."

The New Frontier of Digital Manipulation

Beyond video, the interference extended to betting markets and meme culture. Schneider describes a gambling website that appeared just before the election, featuring a slick, AI-generated interface and promoting bets on the KMT chairmanship. The site's existence raises a profound question: "Why would a mainland Chinese prediction market startup pick a Taiwanese political party's internal election as one of its first actual bets to run?" The author suggests the site was designed to create a false sense of momentum, allowing operators to claim, "I have momentum. Here's proof. Someone's betting on me. My price is going up."

Furthermore, the article highlights how generative AI is lowering the barrier to entry for creating viral political content. A remix of a legislator singing "Good For Nothing" went viral, and the account behind it switched to AI-generated versions shortly after the release of Sora 2. "LLMs now let you make dopamine-inducing political content like never before," Schneider writes. The danger lies in the ambiguity: "It's easy to provide probabilistic evidence but impossible to provide conclusive proof." This ambiguity lets the operators act with impunity, knowing that even if they are exposed, the damage is already done.

"More than half of the vote – that's a key number. It's not a close call; it's overwhelming support. It represents the will of the grassroots."

Bottom Line

Schneider's most compelling argument is that the era of high-stakes, high-quality disinformation is over; we have entered the age of the "slop" flood, where volume and speed matter more than truth or production value. The piece's greatest strength is its detailed forensic breakdown of how these campaigns operate, moving beyond abstract fears to concrete examples of algorithmic manipulation. Its vulnerability lies in the inherent difficulty of proving causality in a complex political environment, but the circumstantial evidence is overwhelming. Readers should watch for how these same tactics are deployed in upcoming elections globally, as the playbook for AI-driven interference is now fully written and open-source.

Deep Dives

Explore these related deep dives:

  • Kuomintang

The article centers on the KMT chairmanship election but assumes reader familiarity with the party's complex history: its role in the Chinese Civil War, its retreat to Taiwan, and its evolution from authoritarian ruler to an opposition party struggling with cross-strait identity politics.

  • Deepfake

The article discusses AI-generated deepfake videos as a key disinformation tool, but readers would benefit from understanding the underlying technology, its history, detection methods, and its broader implications for democracy beyond this specific Taiwan case.

  • Taiwan independence movement

The article references Taiwan's shifting identity conceptions and the KMT's struggle with being "the party that regards Taiwan as Chinese"; the historical context of Taiwan independence sentiment provides crucial background for the political dynamics described.

Sources

AI Slopaganda in the KMT Election

by Jordan Schneider · ChinaTalk

Mandarin Peel is an International Relations graduate student based in Taiwan. His work focuses on U.S.–China tech competition, Indo-Pacific geopolitics, and Taiwan. You can find more of his writing on X and on Substack, where he also publishes work from fellow researchers.

The October 2025 KMT chairmanship election came at a time of reckoning for the party. The KMT faces a difficult challenge — being the party that regards Taiwan as Chinese — trying to get elected by a public whose own identity conceptions trend the opposite way.

At first, 73-year-old Hau Lung-pin 郝龍斌, a central figure of the deep blue wing of the KMT’s old guard, was the favored candidate for the position, having built a strong network within the party over his long career. Though he has historically leaned pro-China even by KMT standards, he was running to help the KMT win elections. Toward that end, his vision for the party’s new core message would fall more in line with current trends in Taiwanese identity conception: “Pro-America, not kneeling to America; peaceful with China, not sucking up to the CCP” (“親美不跪美,和中不添共”).

His main competition, and the eventual victor, was Cheng Li-wun 鄭麗文, a 55-year-old candidate with fewer connections but not without charm or vision. A brash and charismatic campaigner, she instead sought to harness growing mistrust of America, brewing since the beginning of Trump’s second term, and to convince voters to be unapologetically pro-China and Chinese while refusing to be a “piece in the chess game of two great powers” — a favorite metaphor of Taiwanese America-skeptics (疑美論). “This is my promise, and not just as party chair: in the future, all Taiwanese will proudly and confidently say ‘I am Chinese.’ This is what the KMT needs to do!”

Perhaps as controversial as Cheng’s remarks was a deepfake video that emerged of Hau Lung-pin and city councilmember Liu Caiwei 柳采葳 kissing at a press conference. Hau said that posts like this were coming from “overseas” accounts seeking to influence the election. His close ally Jaw Shaw-kong 趙少康 even directly accused China of election interference. This marked a turning point, as officials of a party that had previously disputed such claims were now making them too. Hau cited a National Security Bureau report that found over 1,000 videos about the election on Chinese TikTok and over 20 YouTube accounts — at least half of which were posting from ...