
Politics vastly increases AI moral hazards

Jimmy Alfonso Licon delivers a startling thesis: the greatest danger of artificial intelligence isn't that machines will become conscious, but that human politics will weaponize our uncertainty about it. While most debates focus on technical safety or job displacement, Licon argues that democratic incentives are uniquely suited to amplify moral errors, turning the question of AI rights into a tribal signaling game where truth is the first casualty.

The Architecture of Moral Hazard

Licon, a philosophy professor at Arizona State University, begins by dismantling the binary view of AI as either a simple tool or a future person. He posits that we are entering an era of "debatable personhood," in which the stakes are high but the evidence is ambiguous. "A moral hazard, for present purposes, exists whenever agents are insulated from the moral consequences of their actions," he writes. This definition is crucial because it shifts the blame from the technology itself to the decision-makers who can ignore the costs of their errors.


The author identifies a terrifying double-bind. On one side, we risk the first error: treating a genuinely conscious AI as property, repeating historical tragedies in which marginalized groups were denied moral standing. On the other, we face the second error: squandering finite resources on entities that lack genuine inner lives, diverting care from humans and animals. "Either way, serious moral errors are possible, and they will not be costless," Licon notes. This framing is effective because it refuses to let the reader off the hook; it suggests that our current ambiguity is not a neutral state but a breeding ground for negligence.
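The structure of the double-bind can be laid out as a simple decision matrix (a paraphrase of the dilemma, not a table from the article):

  AI has moral standing, treated as a person → correct
  AI has moral standing, treated as property → first error (wrongful exploitation)
  AI lacks moral standing, treated as property → correct
  AI lacks moral standing, treated as a person → second error (diverted resources)

Both error cells are costly, and under Licon's definition of moral hazard, neither cost reliably falls on the decision-makers who produce it.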

Critics might argue that this dilemma is purely theoretical, given that no current AI possesses consciousness. However, Licon counters that the appearance of consciousness is enough to trigger the hazard: humans, as social primates, are evolutionarily wired to respond to facial expressions and vocal tones regardless of the substrate producing them. "We therefore rely heavily on external cues—language, facial expression, behavior—to infer what others are thinking or feeling," he explains. This reliance makes us vulnerable to sophisticated mimics, a problem reminiscent of the "Chinese room" thought experiment, in which a system simulates understanding without actually possessing it, yet still fools observers into granting it moral weight.

"The resulting policies and social norms will frequently insulate decision-makers from the costs of their moral and epistemic mistakes."

The Political Machinery of Irrationality

The piece's most provocative turn is its analysis of why democratic politics fails to solve this ambiguity. Licon argues that the structure of voting actually encourages irrationality. Because a single vote rarely decides an election, the personal cost of holding a false belief about AI is negligible. "If one's individual political actions—voting, protesting, boycotting—are almost never decisive, then the personal cost of being epistemically irrational in politics is small," he writes. Instead of seeking truth, voters are incentivized to signal loyalty to their tribe.
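The underlying logic is a simple expected-value calculation (the numbers below are illustrative assumptions, not figures from the article): the expected personal cost of a false political belief is roughly the probability that one's vote is decisive multiplied by the cost of the resulting policy error.

  expected personal cost ≈ P(decisive vote) × cost of policy error
                         ≈ (1 / 10,000,000) × $1,000,000
                         ≈ $0.10

Even if a mistaken AI policy would personally cost a voter a million dollars, a one-in-ten-million chance of being pivotal prices the false belief at about ten cents, while the social rewards of signaling tribal loyalty are collected with certainty.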

This creates a feedback loop where complex, uncertain questions about AI consciousness are reduced to badges of identity. Licon observes that "certain positions on policy-relevant facts become markers of membership in identity-defining groups." Once this happens, facts are no longer evaluated on their merit but on their utility for group cohesion. This dynamic is particularly dangerous for AI regulation because it turns the "filter bubble" phenomenon into a policy-making engine, where algorithms and politicians alike reinforce the most extreme, identity-confirming narratives rather than the most accurate ones.

The author suggests that this environment rewards "rationalizations" over genuine inquiry. People construct clever, technically informed arguments not to find the truth, but to signal sophistication and moral purity to their peers. "Political and moral beliefs, especially those that are extreme or stigmatized outside one's group, can serve as costly signals of loyalty within it," Licon asserts. This insight cuts deep: it suggests that the loudest voices in the AI rights debate may not be the most informed, but the most effective at performing their allegiance to a specific worldview.

The Scarcity of Moral Resources

Ultimately, Licon's argument rests on the reality of scarcity. We cannot treat everything as a person; resources for legal protection, ethical consideration, and social support are finite. "The second error is easier to imagine in the context of AI and robotics," he writes, describing a scenario where "resources that ought to go to humans and animals with genuine moral standing are instead diverted to entities that lack it entirely." This is not just a philosophical puzzle; it is a practical allocation problem that political systems are ill-equipped to handle when driven by tribal signaling.

Regulators and policymakers face a unique challenge here. Unlike markets, where price signals allocate scarce goods, moral standing carries no price tag. When political factions weaponize the ambiguity of AI consciousness, they create a landscape where the most vocal demands for rights may drown out the most urgent needs of sentient beings. The danger is that the "moral community" expands to include silicon-based mimics simply because the political cost of excluding them has become too high for a particular faction to bear.

"Ambiguous facts about AI consciousness are therefore ripe for political exploitation."

Bottom Line

Licon's strongest contribution is exposing how the structural incentives of democracy actively degrade our ability to make sound moral judgments about emerging technologies. His argument is most vulnerable to the counterpoint that moral concern is not strictly zero-sum: empathy might expand rather than merely be diverted, allowing a broader moral circle without sacrificing existing obligations. However, the core warning remains urgent: until we address the political incentives that reward irrationality, the debate over AI rights will remain a theater for tribal signaling rather than a search for ethical truth.

Deep Dives

Explore these related deep dives:

  • Chinese room

    The article discusses the moral status of AI and whether artificial systems can possess features like self-awareness and consciousness; the Chinese room is the core thought experiment about whether a system can simulate understanding without genuinely possessing it

  • Filter bubble

    The article discusses how democratic incentives reward tribal signaling and leave ambiguous facts about AI ripe for political exploitation, a dynamic that filter bubbles amplify by reinforcing identity-confirming narratives

Sources

Politics vastly increases AI moral hazards

by Jimmy Alfonso Licon · Read full article


About the Author

Jimmy Alfonso Licon is a philosophy professor at Arizona State University working on ignorance, ethics, cooperation, and God. Before that, he taught at the University of Maryland, Georgetown, and Towson University. He loves classic rock, Western movies, and combat sports. He lives with his wife, a lawyer, at the foot of the Superstition Mountains. He also abides.

This post is based on a forthcoming book chapter with a similar title, to be published in Philosophy and AI: Applied Issues in the Philosophy of AI (Springer).

The rapid rise of artificial intelligence and robots has created both new technologies and new moral risks potentially amplified by politics. As artificial systems become more sophisticated—more fluent, more socially responsive, more deeply integrated into ordinary life—their moral status becomes hard to pin down. Are we dealing with mere tools, or with candidates for moral standing? That issue shapes how we design, deploy, regulate, and relate to artificial systems.

A moral hazard, for present purposes, exists whenever agents are insulated from the moral consequences of their actions. In the context of AI and robotics, moral hazard arises because we face a dilemma. On one side lies the possibility that some artificial systems will deserve moral consideration—due to features like self-awareness, rationality, or the capacity to suffer—yet be treated as property or slaves. On the other side lies the possibility that artificial systems will not deserve moral consideration, yet will be treated as if they do, diverting finite moral and material resources away from humans and animals who genuinely have moral standing. Either way, serious moral errors are possible, and they will not be costless.¹

The central claim is that political institutions will tend to make these hazards worse, not better. Democratic incentives reward tribal signaling and rationalized belief. Ambiguous facts about AI consciousness are therefore ripe for political exploitation. Instead of careful attempts to resolve the underlying dilemma, we should expect rival factions to amplify one horn or the other in ways that boost their reputations and extract material or symbolic benefits. The resulting policies and social norms will frequently insulate decision-makers from the costs of their moral and epistemic mistakes.

What follows clarifies the moral dilemma and the notion of “debatable personhood,” explains how democratic politics encourages irrationality and tribal signaling, and then shows how these ...