
Lethal autonomous weapon

Based on Wikipedia: Lethal autonomous weapon

The First Time a Killer Drone Chose Its Own Target

In March 2021, a report emerged from an unlikely source: the United Nations Security Council's Panel of Experts on Libya. The document detailed an incident that had occurred in 2020, when a Turkish-made Kargu-2 drone veered off its preprogrammed route and autonomously hunted down a human target in the chaos of Libya's civil war. It was, according to analysts, likely the first time an autonomous robot armed with lethal weaponry attacked a human being without any operator pulling the trigger.

The incident went largely unnoticed outside defense circles. But it represented something far more significant than a single battlefield anecdote—it marked a tipping point in the decades-long debate over whether machines should be allowed to decide who lives and who dies.

Lethal autonomous weapons—LAWs—are not science fiction. They are real, they are multiplying, and they are reshaping the foundations of warfare itself. The question is no longer whether machines can kill without humans. The question is what kind of world we want to live in when the decision to take human life becomes a calculation performed by silicon and software.

What Makes a Weapon "Autonomous"

The word "autonomy" in weapons development is among the most contested terms in modern military terminology. Different nations, different researchers, and even different branches of government define it differently—often in ways that obscure more than they clarify.

The United States Department of Defense Policy on Autonomy in Weapon Systems provides a definition that has become a de facto standard: an Autonomous Weapons System is one that "once activated, can select and engage targets without further intervention by a human operator." This definition emphasizes the critical threshold—the moment a machine makes the final decision to kill.

Heather Roff, a legal scholar at Case Western Reserve University School of Law, pushes further. She describes autonomous weapon systems as machines capable of "learning and adapting their functioning in response to changing circumstances in the environment in which they are deployed" and, crucially, capable of making firing decisions on their own. The distinction matters enormously. A drone that follows a pre-programmed flight path is automation in one sense; a drone that adjusts its targeting based on environmental data it collects in real time is something else entirely.
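To make the distinction concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the waypoints, the detection format, the function names); it is not drawn from any real drone software. The automated system replays a plan fixed before launch, while the autonomous one derives its own target from whatever it happens to perceive.

    # Illustrative contrast between automation and autonomy.
    # All names and data below are invented assumptions.
    WAYPOINTS = [(32.1, 20.1), (32.2, 20.3), (32.4, 20.6)]  # fixed in advance

    def automated_next_position(step: int) -> tuple:
        # Behavior fully determined before launch: just replay the plan.
        return WAYPOINTS[step]

    def autonomous_next_position(detections: list[dict]) -> tuple:
        # Behavior depends on runtime perception: head toward the
        # highest-confidence detection. No human sees this choice.
        best = max(detections, key=lambda d: d["confidence"])
        return best["position"]

    print(automated_next_position(0))  # (32.1, 20.1), known in advance
    print(autonomous_next_position(
        [{"position": (32.5, 20.7), "confidence": 0.91},
         {"position": (32.6, 20.2), "confidence": 0.40}]))  # unknowable in advance

The second function is only a few lines longer than the first, but its output cannot be predicted from the code alone; it depends on the world the machine encounters.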

The British Ministry of Defence offers yet another definition, one that hinges on intent rather than mechanics. The UK describes autonomous weapon systems as those "capable of understanding higher level intent and direction," then taking appropriate action to bring about a desired state without depending on human oversight. The key caveat: "such human engagement with the system may still be present, though." Meaning: a machine can be designed to ask permission, but it can also be designed to act first and report later.

Peter Asaro and Mark Gubrud, scholars who have studied these systems for years, offer perhaps the bluntest definition of all: any weapon system capable of releasing lethal force without the operation, decision, or confirmation of a human supervisor can be deemed autonomous. The bar is low, so low that nearly every major power has already crossed it.

The Prehistory of Killer Robots

Autonomous weapons are not new. The oldest automatically triggered lethal weapon is the land mine, an explosive device rigged to detonate when pressure is applied, in use since at least the 1600s. Naval mines, capable of destroying ships that trigger them, date from at least the 1700s.

These technologies were primitive by modern standards: simple springs and triggers, mechanical devices that required no human oversight after deployment. But they established a fundamental principle: that weapons could be armed once and let loose to kill without anyone standing nearby.

The modern equivalent is far more sophisticated. The United States Phalanx system, a close-in weapon system designed to defend ships from missiles and rockets, has been in use since the 1970s. It operates autonomously, identifying incoming projectiles, calculating trajectories, and firing without any human decision-maker in the loop. Similar autonomous systems exist to protect tanks: Russia's Arena, Israel's Trophy, and Germany's AMAP-ADS all detect and intercept incoming threats without waiting for a human decision.

South Korea and Israel have deployed stationary sentry guns along their borders—machines capable of identifying vehicles and humans and firing at them based purely on sensor data. These are not science fiction prototypes; they are active systems in daily operation.

The Iron Dome, perhaps the most famous missile defense system in the world, also possesses autonomous targeting capabilities—it decides which incoming rockets to intercept without human input in the milliseconds required for such decisions. Each of these systems represents a step along a path that leads directly toward fully autonomous killing machines.

The Future Is Uncrewed

The Economist, in a 2019 special report on emerging technologies, laid out what it called the future applications of uncrewed undersea vehicles: mine clearance, mine-laying, anti-submarine sensor networking in contested waters, patrolling with active sonar, resupplying manned submarines, and becoming low-cost missile platforms. The vision is one of total automation—machines that do not require human operators at all.

In 2018, the U.S. Nuclear Posture Review alleged that Russia was developing a new intercontinental, nuclear-armed, nuclear-powered, undersea autonomous torpedo named "Status 6"—a weapon designed to cross oceans and deliver nuclear payloads without any human involvement in targeting decisions. The claim remains disputed, but it reflects the anxiety among Western powers about autonomous weapons development.

Russian President Vladimir Putin has stated that artificial intelligence represents "the future," and Russian defense contractors have developed increasingly sophisticated unmanned vehicles—including underwater drones capable of long-range autonomous patrol.

In October 2018, Zeng Yi, a senior executive at Norinco, one of China's largest defense firms, gave a speech in Beijing that made waves across the global defense community. His message was stark: "In future battlegrounds, there will be no people fighting," he said, and the use of lethal autonomous weapons in warfare is "inevitable."

Chinese industry appears to be betting heavily on this prediction. Israel, meanwhile, is pursuing miniaturization: Minister Ayoob Kara has stated that Israel is developing military robots, including some as small as flies, tiny autonomous machines capable of entering buildings undetected.

In 2019, U.S. Defense Secretary Mark Esper lashed out at Chinese defense firms for selling drones capable of taking human life with no human oversight—calling them out by name in a speech that marked an unusual public confrontation between two major powers over the ethics of autonomous killing.

The British Army deployed new uncrewed vehicles and military robots in 2019, and the U.S. Navy is developing so-called "ghost" fleets of unmanned ships—vessels designed to operate without any crew on board, capable of patrolling contested waters indefinitely.

The Swarm Is Coming

In May 2021, Israel conducted an AI-guided combat drone swarm attack in Gaza—the first major deployment of autonomous swarming systems in a real conflict. Since then, numerous reports have surfaced of drone swarms and other autonomous weapons being used on battlefields around the world.

DARPA, the Pentagon's advanced research arm, is working on making swarms of 250 autonomous lethal drones available to the American military—a force multiplier that could theoretically saturate an area with kill capability without any human decision-maker involved in targeting.

The proliferation is accelerating. The question is no longer whether such systems will exist but how they will be regulated—and who gets to decide when a machine decides to kill.

Who Controls the Kill Chain?

In 2012, Bonnie Docherty published a landmark report for Human Rights Watch that laid out three classifications of autonomous weapon systems based on the degree of human control (a schematic sketch in code follows the list):

Human-in-the-loop: a human must initiate or authorize each individual action—the weapon cannot fire without permission. This is what most people imagine when they think about automated weapons—a machine asks, human answers.

Human-on-the-loop: a human may abort an action if something goes wrong; the machine can fire, but humans can override in real time. The decision comes with the possibility of interruption.

Human-out-of-the-loop: no human intervention is required—the weapon decides and fires without any person involved. This is what the Kargu-2 did in Libya, and what systems like Phalanx have done for decades.
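The difference between the categories is easiest to see as a gating rule. The sketch below is purely illustrative, assuming invented names rather than any real weapon-control interface; what it shows is how the default flips from "silence means no" to "silence means yes" to no gate at all.

    # Docherty's three classifications expressed as a gating policy.
    # All names here are illustrative assumptions, not a real API.
    from enum import Enum

    class ControlMode(Enum):
        HUMAN_IN_THE_LOOP = "in"       # human must authorize each engagement
        HUMAN_ON_THE_LOOP = "on"       # system fires unless a human aborts
        HUMAN_OUT_OF_THE_LOOP = "out"  # no human involvement at all

    def may_engage(mode: ControlMode,
                   human_authorized: bool = False,
                   human_aborted: bool = False) -> bool:
        """Return whether an engagement proceeds under each control mode."""
        if mode is ControlMode.HUMAN_IN_THE_LOOP:
            return human_authorized    # silence means no
        if mode is ControlMode.HUMAN_ON_THE_LOOP:
            return not human_aborted   # silence means yes
        return True                    # out of the loop: no gate at all

What the sketch makes uncomfortably clear is how small the difference is: moving from the first category to the third changes a conditional, not the architecture.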

Current U.S. policy states that "Autonomous weapons systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force." However, the policy only requires that autonomous weapon systems that kill people or use kinetic force, selecting and engaging targets without further human intervention, be certified as compliant with those "appropriate levels"; it does not say that such weapon systems can never meet the standard and are therefore forbidden. In practice, it is a permission structure disguised as a prohibition.

Deputy Defense Secretary Robert O. Work said in 2016 that the Defense Department would "not delegate lethal authority to a machine to make a decision," but also acknowledged the department might need to reconsider this position because "authoritarian regimes" may do so anyway—and do it faster, without democratic deliberation.

In October 2016, then-President Barack Obama stated that early in his career he was wary of a future in which a U.S. president using drone warfare could "carry on perpetual wars all over the world, and a lot of them covert, without any accountability or democratic debate." The statement was remarkable coming from a president who had overseen a dramatic expansion of covert drone warfare, but it reflected an emerging bipartisan consensus that algorithms should not make life-and-death decisions alone.

In the U.S., security-related AI has fallen under the purview of the National Security Commission on Artificial Intelligence since 2018, an unusual body meant to advise on the intersection of emerging technology and defense policy.

On October 31, 2019, the United States Department of Defense's Defense Innovation Board published a draft report outlining five principles for weaponized AI and making twelve recommendations for the ethical use of artificial intelligence by the department. The central principle: ensuring a human operator would always be able to look into what computer scientists call the "black box"—the decision-making process—and understand the kill-chain.
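What such traceability might mean in software is easy to sketch, though the sketch below is entirely hypothetical; the names, fields, and logging scheme are invented here, not taken from any Defense Department system. The idea is simply that every automated recommendation is recorded together with the evidence behind it, so a human can later reconstruct the chain.

    # Hypothetical sketch of an auditable decision trail. All names
    # and fields are invented; no real defense system is described.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class DecisionRecord:
        target_id: str
        sensor_evidence: dict   # what the system perceived
        model_version: str      # which model produced the call
        recommendation: str     # what it recommended, and why
        timestamp: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))

    AUDIT_LOG: list[DecisionRecord] = []

    def record_decision(target_id: str, evidence: dict,
                        model_version: str, recommendation: str) -> None:
        # Append-only trail: the reviewable record the draft principles
        # say a human operator must always be able to inspect.
        AUDIT_LOG.append(DecisionRecord(
            target_id, evidence, model_version, recommendation))

A log is necessary but not sufficient: recording a decision is far easier than making the model that produced it explainable.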

The implementation of those principles remains incomplete.

The Ethics of Killing Without Feeling

Stuart Russell, professor of computer science at the University of California, Berkeley, and one of the most cited researchers in modern AI, has been explicit about his view: it is unethical and inhumane to create machines capable of killing without any human involvement. His concern centers on a simple problem that legal scholars have echoed—it is nearly impossible for an autonomous weapon to distinguish between combatants and non-combatants in the chaos of real warfare.

The core issue is this: International Humanitarian Law rests on two principles that autonomous weapons may not be able to satisfy.

Distinction: the ability to discriminate between those who are fighting and those who are not. A machine cannot reliably distinguish a civilian from a combatant if it has no face to recognize, no uniform to verify, no context to understand.

Proportionality: that incidental harm to civilians not be excessive relative to the anticipated military advantage. Machines cannot weigh that balance; they can only execute code, as the sketch below suggests.
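To see why, consider what a proportionality "check" would have to look like in software. The fragment below is a deliberately naive, hypothetical sketch: every name and number in it is an assumption, and the hard part, assigning those numbers, is exactly what the law leaves to human judgment.

    # A deliberately naive, hypothetical proportionality check.
    # The inputs are invented; no agreed way to quantify them exists.
    def proportionality_check(expected_civilian_harm: float,
                              military_advantage: float,
                              threshold: float = 1.0) -> bool:
        """Permit a strike only if estimated harm is not 'excessive'
        relative to the anticipated military advantage."""
        if military_advantage <= 0:
            return False  # no lawful military aim, no strike
        return expected_civilian_harm / military_advantage <= threshold

The comparison itself is trivial; everything contentious is hidden in the arguments. Nothing in the function answers the question a lawyer or a commander would actually have to answer: what counts as excessive, and who is accountable for the estimate.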

These are not technical problems. They are moral ones. And they may not be solvable by any amount of engineering.

What happens when a fully autonomous weapon enters a city and makes kill decisions based on sensor data alone? What happens if the AI is wrong: if it identifies a civilian as a combatant, or misreads the environmental signatures it was trained to recognize?

The Kargu-2 incident in Libya was one example. But that attack occurred in an environment where everyone was armed. The question of whether such weapons can reliably identify lawful targets remains unanswered.

The Edge of the Decision

LAWs represent a fundamental shift in how war is conducted: from human decision to algorithmic calculation, from moral weight to binary execution. They are not merely tools but a new category of capability, one that collapses the distance between decision and consequence into milliseconds.

The proliferation is accelerating. The policy frameworks are lagging. And the ethical questions remain unanswered.

What is certain is this: the first time an autonomous drone killed a human being, it was not in a Hollywood movie or a science fiction novel but in a war zone in Libya, and no one can guarantee it will be the last.

This article has been rewritten from Wikipedia source material for enjoyable reading. Content may have been condensed, restructured, or simplified.