
Autonomous Weapons 101 + Dario v Hegseth

Deep Dives

Explore related topics with these Wikipedia articles, rewritten for enjoyable reading:

  • Phalanx CIWS 18 min read

    The Phalanx CIWS, mentioned in the article, is a concrete example of an autonomous weapon deployed on US ships since the 1980s that automatically targets incoming threats

  • Department of Defense Directive 3000.09 1 min read

    This DoD directive is the Pentagon's framework governing autonomous weapon systems; Michael Horowitz helped rewrite it, and it is directly referenced in the article

  • Loitering munition 1 min read

    The article discusses semi-autonomous fire-and-forget munitions like radar-guided missiles; loitering munitions, used in Ukraine and by Iran, represent a similar category of weapon with more advanced autonomy

The Anthropic–Pentagon blowup generated enormous heat and almost no light.

Michael Horowitz has thought as much about autonomous weapons policy as anyone. He’s a professor at Penn who spent time in Biden’s DoD overseeing the office that rewrote DoD Directive 3000.09, the Pentagon's overarching framework for autonomous weapons. He joined me to do a proper 101: what autonomous weapons actually are, how the relevant law works, what Ukraine has taught us, and where the genuine risks lie — less in killer drones than in generals over-trusting their dashboards.

Listen now on your favorite podcast app.

What Does an “Autonomous Weapon” Actually Mean?

Jordan Schneider: How would you characterize the fear of the well-meaning researcher or AI lab head who thinks that using their technology for certain types of autonomy would be a bad direction to go? Maybe contrast that with how this stuff is used today in Ukraine and Iran.

Michael Horowitz: The average Silicon Valley AI safety researcher who’s worried about autonomous war bots is probably worried about AI making the decision about who lives and who dies. They think that’s some dystopia they don’t want any part of.

They get worried about the incorporation of AI into the pointy end of the spear for militaries, especially when it comes to potentially selecting and engaging targets. What sometimes gets lost in the conversation is the substantial degree of autonomy that already exists in modern weapon systems.

The US military and basically 40 militaries around the world have deployed autonomous weapons systems since the early 1980s. These are often automated systems using essentially deterministic, good old-fashioned AI. They’re on ships — like these enormous Gatling guns called the Phalanx — that can operate by algorithm. If there are too many threats coming in, say too many missiles about to hit a ship, an operator can basically flip on the algorithm, which can automatically target and hit those incoming threats.

You also have semi-autonomous weapon systems that fall into the category of fire-and-forget munitions. Think about how a radar-guided missile works. A pilot believes there’s an adversary radar that’s a legitimate target. They press the launch button, the radar-guided missile fires. After going a certain distance, it turns on a seeker, detects a radar, goes in and destroys the radar. There’s no human supervision or control of any kind after that ...


The full article by Jordan Schneider is available on ChinaTalk.