Politics vastly increases AI moral hazards


About the Author

Jimmy Alfonso Licon is a philosophy professor at Arizona State University working on ignorance, ethics, cooperation, and God. Before that, he taught at the University of Maryland, Georgetown, and Towson University. He loves classic rock, Western movies, and combat sports. He lives with his wife, a lawyer, at the foot of the Superstition Mountains. He also abides.


This post is based on a forthcoming book chapter of a similar name, to be published in Philosophy and AI: Applied Issues in the Philosophy of AI (Springer).


The rapid rise of artificial intelligence and robotics has produced not only new technologies but new moral risks, risks that politics stands to amplify. As artificial systems become more sophisticated—more fluent, more socially responsive, more deeply integrated into ordinary life—their moral status becomes harder to pin down. Are we dealing with mere tools, or with candidates for moral standing? The answer shapes how we design, deploy, regulate, and relate to artificial systems.

A moral hazard, for present purposes, exists whenever agents are insulated from the moral consequences of their actions. In the context of AI and robotics, moral hazard arises because we face a dilemma. On one side lies the possibility that some artificial systems will deserve moral consideration—due to features like self-awareness, rationality, or the capacity to suffer—yet be treated as property or slaves. On the other side lies the possibility that artificial systems will not deserve moral consideration, yet will be treated as if they do, diverting finite moral and material resources away from humans and animals who genuinely have moral standing. Either way, serious moral errors are possible, and they will not be costless.¹

The central claim is that political institutions will tend to make these hazards worse, not better. Democratic incentives reward tribal signaling and rationalized belief. Ambiguous facts about AI consciousness are therefore ripe for political exploitation. Instead of careful attempts to resolve the underlying dilemma, we should expect rival factions to amplify one horn or the other in ways that boost their reputations and extract material or symbolic benefits. The resulting policies and social norms will frequently insulate decision-makers from the costs of their moral and epistemic mistakes.

What follows clarifies the moral dilemma and the notion of “debatable personhood,” explains how democratic politics encourages irrationality and ...
