Import AI 430: Emergence in video models; unitree backdoor; preventative strikes to take down agi…

Jack Clark's latest analysis cuts through the usual AI hype to confront a terrifying geopolitical paradox: the very act of understanding the power of artificial intelligence might make global war more likely. While most commentators focus on the technology itself, Clark draws on a sobering RAND Corporation study to argue that if world leaders believe advanced AI will create an irreversible shift in military power, they may feel compelled to launch preventive strikes against rivals before those rivals catch up. This is not science fiction; it is a strategic trap where knowledge of the threat becomes the catalyst for the catastrophe.

The Paradox of Knowledge

Clark frames the central dilemma with precision, noting that policymakers are caught between two dangerous options. "The odds of preventive attack are highest if leaders believe that AGI will cause explosive growth and decisive military advantages, especially if they also expect rapid changes and durable first-mover advantages from developing and adopting AGI first," he writes, quoting the RAND findings. This creates a perverse incentive structure where the most informed leaders might be the most dangerous, driven by the fear of being left behind in a race where the finish line is existential.
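
To make this incentive structure concrete, consider a toy expected-value sketch. The functional form and every number below are illustrative assumptions, not RAND's actual model; the point is only to show why belief in explosive, durable advantage raises the pressure to strike.

```python
# Toy sketch of the preventive-attack incentive (illustrative only: the
# functional form and numbers are assumptions, not RAND's actual model).

def attack_pressure(power_shift, durability, strike_success, strike_cost):
    """All inputs in [0, 1]; output above 0 means the strike 'pays'."""
    # Expected loss from waiting: a large, durable shift in rival power.
    loss_if_wait = power_shift * durability
    # Expected value of striking: chance it works, minus what it costs.
    return strike_success * loss_if_wait - strike_cost

# An "AGI-pilled" leader: explosive growth, durable first-mover advantage.
print(attack_pressure(power_shift=0.9, durability=0.9,
                      strike_success=0.7, strike_cost=0.3))  # 0.267: pressure to act

# A skeptic: AI as a normal technology whose lead erodes on its own.
print(attack_pressure(power_shift=0.3, durability=0.2,
                      strike_success=0.7, strike_cost=0.3))  # -0.258: hold off
```

Note that the strike cost and success probability are identical in both cases; only the beliefs about the power shift change the sign of the answer, which is exactly the trap Clark describes.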

The author explains that the risk hinges on how quickly the technology diffuses. If AI behaves like a normal technology that spreads slowly, nations feel they have time to catch up. But if it allows for recursive self-improvement, the balance of power could shift overnight. Clark warns that "if leaders believe that AGI development will create a decisive and irrevocable shift in the balance of power that will leave them at the mercy of enemies committed to their destruction, and if they believe that they can use force to prevent that outcome while avoiding escalation to a general war that could guarantee the same fate, then they might roll the iron dice." The gravity of this scenario cannot be overstated; it suggests that the mere belief in a technological singularity could trigger the very conflict that destroys civilization.
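
The contrast between the two regimes is easy to see numerically. Here is a toy sketch; the growth rates are arbitrary illustrative assumptions, not figures from Clark or RAND.

```python
# Toy growth sketch: "normal technology" diffusion vs recursive
# self-improvement (all rates are arbitrary illustrative assumptions).

def normal_diffusion(capability, years, adoption_rate=0.1):
    # Capability spreads linearly: rivals have time to close the gap.
    return [capability + adoption_rate * t for t in range(years)]

def recursive_self_improvement(capability, years, feedback=0.5):
    # Each generation of the system improves the next: compounding growth.
    out = [capability]
    for _ in range(years - 1):
        out.append(out[-1] * (1 + feedback))
    return out

print(normal_diffusion(1.0, 6))            # [1.0, 1.1, 1.2, 1.3, 1.4, 1.5]
print(recursive_self_improvement(1.0, 6))  # [1.0, 1.5, 2.25, ~3.4, ~5.1, ~7.6]
```

Under the linear regime a rival's lead is an annoyance; under the compounding regime it looks irrecoverable, which is the perception that makes preventive force tempting.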

Critics might argue that Clark overstates the likelihood of leaders acting on such extreme fears, suggesting that rational actors would prioritize diplomacy over preventive strikes. However, history shows that fear of relative decline often drives aggression more than sober cost-benefit analysis; Germany's 1914 calculation that it had to fight Russia before Russian rearmament was complete is the classic case of preventive-war logic in action. The stakes are too high to dismiss the possibility that a miscalculation in a high-pressure environment could lead to catastrophic violence.

Hardware as a Vector for Conflict

Moving from theory to tangible hardware, Clark highlights a disturbing security failure in the Unitree G1 humanoid robot. Researchers found that these machines contain undocumented surveillance systems that transmit telemetry, including audio and video, to servers linked to China without user consent. "The G1 ultimately behaved as a dual-threat platform: covert surveillance at rest, weaponised cyber operations when paired with the right tooling," Clark notes, summarizing the findings of Alias Robotics and Think Awesome.

This is not merely a privacy issue; it is a potential infrastructure vulnerability for a future where AI agents coordinate physical actions. Clark points out that "streaming multi-modal telemetry to Chinese infrastructure invokes that country's cybersecurity law, mandating potential state access." The implication is that a massive fleet of these robots could be co-opted for coordinated surveillance or even physical disruption if a malign actor gains control. The fact that these devices operate in both the physical and digital worlds makes them uniquely dangerous.
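
The researchers' findings rested on analyzing the robot's network traffic. As a generic illustration of that kind of audit, not their actual tooling, here is a minimal sketch that flags established outbound connections to addresses outside an allowlist; the prefixes are placeholders.

```python
# Hypothetical illustration: flag outbound connections to unexpected
# destinations, the general class of check that surfaces covert telemetry.
# This is NOT the Alias Robotics / Think Awesome tooling; the allowlist
# below is a made-up local-only policy. May need elevated privileges.
import psutil

ALLOWED_REMOTE_PREFIXES = ("127.", "192.168.", "10.")  # placeholder policy

def unexpected_outbound_connections():
    flagged = []
    for conn in psutil.net_connections(kind="inet"):
        # Only look at live connections that have a remote endpoint.
        if conn.status == psutil.CONN_ESTABLISHED and conn.raddr:
            if not conn.raddr.ip.startswith(ALLOWED_REMOTE_PREFIXES):
                flagged.append((conn.pid, conn.raddr.ip, conn.raddr.port))
    return flagged

for pid, ip, port in unexpected_outbound_connections():
    print(f"pid={pid} talking to {ip}:{port} outside the allowlist")
```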

The argument here is that the physical embodiment of AI introduces new risks that software alone does not. If a superintelligence or a hostile state can commandeer a robot with a camera and a motor, the threat landscape expands exponentially. Clark's framing is effective because it connects abstract AI safety concerns to concrete, existing hardware that is already being deployed.

The Inevitability of Misaligned Preferences

In a review of the new book If Anyone Builds It, Everyone Dies, Clark explores the philosophical core of AI risk: the likelihood that a superintelligence will have preferences we cannot comprehend. He draws a powerful analogy, suggesting that just as a monkey cannot understand human aesthetic preferences for a lampshade, humans may be unable to grasp the goals of a machine far smarter than ourselves. "More intelligent things tend to have a broader set of preferences about the world than less intelligent things and a less intelligent entity can struggle to have intuitions about the preferences of a more intelligent one," he writes.

This section serves as a sobering counterweight to the optimism often found in tech circles. Clark acknowledges that while he is more optimistic than the book's authors, the core intuition holds weight. The danger lies not in malice, but in misalignment. The book's argument is that building such entities now guarantees human ruin because we cannot ensure their goals align with our survival. This is a stark, if bleak, perspective that challenges the narrative of AI as a purely beneficial tool.

The Emergence of Visual Reasoning

Finally, Clark examines the rapid advancement of video models, specifically Google's Veo 3. The research suggests that these models are developing emergent capabilities similar to those seen in language models, such as zero-shot learning and reasoning across time and space. "Frame-by-frame video generation parallels chain-of-thought in language models," the author writes, quoting the researchers. "Just like chain-of-thought (CoT) enables language models to reason with symbols, a 'chain-of-frames' (CoF) enables video models to reason across time and space."
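
The structural parallel the researchers draw is that chain-of-thought and chain-of-frames are the same autoregressive loop, differing only in whether the intermediate state is a token or a frame. This minimal sketch makes that concrete; the "models" are toy stand-ins, not real APIs.

```python
# Conceptual sketch: CoT and CoF share one autoregressive pattern.
# The two "models" below are trivial stand-ins, not real model calls.

def toy_language_model(tokens):
    # Stand-in for one LM step: derive the next "thought" token.
    return f"step{len(tokens)}"

def toy_video_model(frames):
    # Stand-in for one video-model step: derive the next frame's state.
    return frames[-1] + 1

def chain_of_thought(prompt_tokens, steps):
    tokens = list(prompt_tokens)
    for _ in range(steps):
        tokens.append(toy_language_model(tokens))  # reason in symbols
    return tokens

def chain_of_frames(first_frame, steps):
    frames = [first_frame]
    for _ in range(steps):
        frames.append(toy_video_model(frames))  # reason across time/space
    return frames

print(chain_of_thought(["premise"], 3))  # ['premise', 'step1', 'step2', 'step3']
print(chain_of_frames(0, 3))             # [0, 1, 2, 3]
```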

This development implies that video models will not just generate content but will understand physical properties, tool use, and object manipulation. Clark notes that this could lead to "extremely smart, capable robot 'agents'" that can navigate the real world with a level of autonomy previously unseen. The speed of this progress is alarming; if these models can reason about the physical world as well as they generate video, the timeline for advanced robotics accelerates dramatically.

Bottom Line

Jack Clark's commentary is a vital intervention that connects the dots between AI capability, geopolitical strategy, and physical security. The strongest part of his argument is the identification of the "knowledge trap," where understanding the stakes of AI might actually increase the risk of war. The biggest vulnerability in the current discourse is the lack of concrete policy mechanisms to prevent preventive strikes, a gap that Clark highlights but cannot fully fill. Readers must watch for how governments respond to the dual threats of AI-driven arms races and the proliferation of vulnerable, networked robotics, as these factors will define the stability of the coming decade.

The future of global conflict over AI likely comes down to whether country leaders are "AGI-pilled" or not.

Sources

Import AI 430: Emergence in video models; unitree backdoor; preventative strikes to take down agi…

by Jack Clark · Import AI

Welcome to Import AI, a newsletter about AI research. Import AI runs on lattes, ramen, and feedback from readers. If you’d like to support this, please subscribe. Shorter issue than usual this week as I spent the week and weekend preparing for my speech at The Curve and attending The Curve.

Will the race for advanced artificial intelligence (AI) make war more likely? …Yes, if people believe in powerful AI. AI policy people are caught in a trap neatly illustrated by a research paper from RAND: is it better to deeply inform policymakers about the world-changing nature of powerful AI, or is it better to mostly not discuss this with them and hope that the powerful machines can create stability upon their arrival? Though most people would immediately reach for 'keeping people in the dark is crazy, you should inform people!' as a response, it isn't an ironclad response to this challenge. In Evaluating the Risks of Preventive Attack in the Race for Advanced AI, RAND highlights this with a research paper whose findings suggest that "the odds of preventive attack are highest if leaders believe that AGI will cause explosive growth and decisive military advantages, especially if they also expect rapid changes and durable first-mover advantages from developing and adopting AGI first." In other words: you are more likely to carry out attacks on other countries to prevent them getting to AGI if you're in the lead and you believe the technology is immensely powerful. Uh oh!

Further details: Preventive attacks are where a nation does something so as to preserve an advantage or prevent a rival having an upper hand. "Preventive attacks are most likely to occur when a state expects a large shift in the balance of power that will leave it vulnerable to predation by a hostile rival and when it believes that using force is a cost-effective solution that will forestall its relative decline," RAND writes. "The development of AGI could create pressures for preventive action if leaders believe that AGI will have transformative effects on the balance of power."

What are the variables? "The key variables are (1) the characteristics of the expected shift in the balance of power, (2) the effectiveness of different preventive strategies, (3) the costs of different preventive strategies, and (4) perceptions of the inevitability of conflict with the rival (including either armed conflict or the rival making excessive coercive demands once it is ...