This piece cuts through the noise of the 2025 Kuomintang chairmanship election to reveal a disturbing new reality: the weaponization of artificial intelligence not just to lie, but to flood the zone with low-effort, high-volume propaganda. Jordan Schneider argues that the true threat isn't the quality of deepfakes, but their sheer scale and the strategic patience of their creators, who build audiences on apolitical content before pivoting to election interference. For anyone tracking the future of Indo-Pacific stability, this analysis exposes how easily the digital public square can be hijacked by actors who face zero consequences for their actions.
The Strategy of Scale Over Quality
Schneider dismantles the assumption that sophisticated disinformation requires Hollywood-level production values. Instead, the influence operation that backed Cheng Li-wun's winning campaign relied on what the author terms "slopaganda"—mass-produced, low-fidelity content designed to game algorithms rather than convince critical thinkers. "False flag operations, in which an actor attempts to actually convince the public that a fake event is real, are a much smaller piece of the pie," Schneider writes. "The main function of AI generation for propaganda has not been to increase quality or to convince viewers that the videos are real, but to increase scale by speeding up the creation process."
This distinction is crucial. The operation didn't try to fool experts; it tried to drown out nuance. The author details how channels spent months posting about Taiwanese natural beauty or heartwarming stories of strangers helping foreigners, only to abruptly switch to political advocacy. One channel, named Firefly, spent three months building trust before deploying a deepfaked woman to praise Cheng Li-wun. "The accounts build up viewership and favor with the algorithm before eventually posting content about Taiwanese politics," Schneider notes. This tactic exploits the very mechanisms social media platforms use to recommend content, turning engagement metrics into a weapon.
Critics might argue that attributing the entire outcome to these videos ignores the genuine political appeal of Cheng Li-wun, who campaigned effectively and secured support from established figures like Ma Ying-jeou. However, Schneider counters this by highlighting the timing: the surge of AI-generated support began precisely when Cheng was considered an underdog against the favored Hau Lung-pin. "She was not popular in the KMT," analyst Jerry Yu is quoted as saying, with Schneider adding that, in Yu's view, if the videos were not making a "big difference," Cheng Li-wun would not have won.
"Influence campaigns target social media users who keep eating the slop no matter what it's filled with — like pigs to the slaughter."
The Mechanics of Coordinated Inauthentic Behavior
The evidence of coordination is stark. Schneider points out that the uniformity of these channels—many using simplified Chinese characters and identical deepfaked avatars—suggests a centralized operation originating from the mainland. The timing was too precise to be organic. "At least one account with the same essential style as all of the rest released its first deepfake model Cheng Li-wun video on September 17th, the day that Cheng Li-wun announced her candidacy," Schneider observes. "Just one channel mentioned her before she announced her candidacy... The new mentions of Cheng reached their peak just the week after her announcement."
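To see what this kind of timing forensics looks like in practice, here is a minimal sketch of the analysis the claim implies: given a list of channels and the date each first posted about the candidate, measure how tightly those first mentions cluster around the September 17th announcement. The channel names, dates, seven-day window, and the coordination_score helper below are illustrative assumptions for the sketch, not data or code from Schneider's article.

```python
from datetime import date

# Hypothetical observations: (channel name, date of its first Cheng Li-wun post).
# Placeholder records for illustration only, not data from the article.
first_mentions = [
    ("channel_a", date(2025, 9, 17)),
    ("channel_b", date(2025, 9, 17)),
    ("channel_c", date(2025, 9, 18)),
    ("channel_d", date(2025, 9, 21)),
    ("channel_e", date(2025, 8, 30)),  # a lone pre-announcement mention
]

ANNOUNCEMENT = date(2025, 9, 17)
WINDOW_DAYS = 7  # flag first mentions within a week after the announcement

def coordination_score(mentions, announcement, window_days):
    """Return the share of channels whose first mention falls inside
    [announcement, announcement + window_days], plus the flagged names."""
    flagged = [
        name for name, first in mentions
        if 0 <= (first - announcement).days <= window_days
    ]
    return len(flagged) / len(mentions), flagged

score, flagged = coordination_score(first_mentions, ANNOUNCEMENT, WINDOW_DAYS)
print(f"{score:.0%} of channels first mentioned the candidate within "
      f"{WINDOW_DAYS} days of the announcement: {flagged}")
```

A high score of this sort is exactly the kind of probabilistic evidence the article describes: it shows clustering consistent with coordination, not attribution to any particular actor.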
This level of synchronization transforms the election from a democratic contest into a test of digital infrastructure resilience. The author notes that even after the election, these channels didn't disappear; they simply reverted to posting apolitical content, waiting for the next cycle. "We have an army of accounts that were once Cheng Li-wun keyboard warriors and will almost definitely return to influence an election when needed." This creates a permanent, latent threat to Taiwan's democratic processes, one that can be activated with a few clicks.
The article also touches on the broader context of Taiwan's identity politics. The KMT, historically the party of "Chinese identity," faces a public that increasingly identifies as Taiwanese. Cheng Li-wun's campaign tried to bridge this by advocating for a stance of "not kneeling to America" while embracing Chinese heritage, a message that resonated with a specific demographic but alienated others. The deepfake campaign amplified this message to a degree that organic support alone could not have achieved. As Schneider puts it, "That Cheng's supporters have taken this small threshold and run with it as a mandate for change demonstrates the impact that a psyop can have, even if it only moves the result by a fraction of a percent."
The New Frontier of Digital Manipulation
Beyond video, the interference extended to betting markets and meme culture. Schneider describes a gambling website that appeared just before the election, featuring a slick, AI-generated interface and promoting bets on the KMT chairmanship. The site's existence raises a profound question: "Why would a mainland Chinese prediction market startup pick a Taiwanese political party's internal election as one of its first actual bets to run?" The author suggests the site was designed to manufacture a false sense of momentum, letting a candidate point to the odds and claim, "I have momentum. Here's proof. Someone's betting on me. My price is going up."
Furthermore, the article highlights how generative AI is lowering the barrier to entry for creating viral political content. A remix of a legislator singing "Good For Nothing" went viral, with the account behind it switching to AI-generated versions shortly after the release of Sora 2. "LLMs now let you make dopamine-inducing political content like never before," Schneider writes. The danger lies in the ambiguity: "It's easy to provide probabilistic evidence but impossible to provide conclusive proof." This lack of definitive proof lets the operators act with impunity, knowing that even if they are exposed, the damage is already done.
"More than half of the vote – that's a key number. It's not a close call; it's overwhelming support. It represents the will of the grassroots."
Bottom Line
Schneider's most compelling argument is that the era of bespoke, high-quality disinformation is over; we have entered the age of the "slop" flood, where volume and speed matter more than truth or production value. The piece's greatest strength is its detailed forensic breakdown of how these campaigns operate, moving beyond abstract fears to concrete examples of algorithmic manipulation. Its vulnerability lies in the inherent difficulty of proving causality in a complex political environment, but the circumstantial evidence is overwhelming. Readers should watch for how these same tactics are deployed in upcoming elections globally, as the playbook for AI-driven interference is now fully written and open-source.