In a landscape saturated with panic over whether machines will steal our jobs, this piece delivers a far more unsettling diagnosis: the machines aren't stealing our humanity, because they never had any to begin with. Robin James bypasses the tired debate over "human creativity" versus "AI slop" to argue that the distinction isn't biological or cognitive but deeply political and ethical. For busy leaders navigating the AI integration rush, this reframing offers a crucial lens: the danger isn't that AI will become too human, but that we will start treating our own shared existence as a product to be optimized rather than a responsibility to be assumed.
The Trap of Essentialism
James begins by dismantling the common fear that AI threatens a unique human essence. The argument is that any attempt to define "the human" by a specific capacity—like language or creativity—inevitably excludes marginalized groups, from disabled people to practitioners of non-Western cultural traditions. As James writes, "any time you try to define 'the human' in terms of essential qualities or capacities, it's impossible to fully capture all of the actually living persons that we would commonsensically call human."
This is a sharp, necessary corrective to the current discourse. By pointing out that even celebrated artists like Thomas Kinkade produce work that mirrors "AI slop," James forces the reader to abandon the idea that human output is inherently superior in quality. Kinkade's "Rosebud Cottage" might be cheesy, but it was created by a human brain, not a probability engine. The point isn't to praise Kinkade, but to show that "human creativity is so diverse and varied that it even... basically mirrors AI slop." This effectively neutralizes the "human vs. machine" quality war, shifting the ground from aesthetics to ontology.
"AI has being, but it does not exist because it's not a mutual participant in the co-creation of existence."
Existence as a Political Act
The core of the commentary draws on the existential phenomenology of Simone de Beauvoir to distinguish between "being" (a static state) and "existence" (an active, relational process). James argues that humans "lack being," meaning we have no fixed, eternal reality, and it is precisely this lack that allows us to shape our world. This connects to the philosophical concept of Aufheben—often translated as sublation—where a tension is not just cancelled out but preserved and transformed. James notes that while Hegel saw the "lack of being" as a bug to be fixed, Beauvoir saw it as the defining feature of freedom.
The author posits that to "exist" is to actively negate a present state of being and create a new one. This is where the argument gains its political teeth. James writes, "Because individual existences are enmeshed with one another, orienting my actions to the diminishment of others' existence also diminishes my own." This relational view suggests that our freedom is not a solo act but a collective negotiation. Critics might note that the philosophical framework is dense and could alienate readers looking for practical policy solutions. Yet the stakes are high: if we accept this view, then any technology that isolates us from this mutual negotiation is inherently oppressive.
The Failure of AI to Assume Responsibility
James applies this framework to artificial intelligence with a damning conclusion: AI cannot "assume" its lack of being because it cannot take responsibility. It operates in a "play space" of probabilities, recombining past data without the capacity to genuinely negate reality and create something new. The text draws a parallel to Beauvoir's description of a child playing house, noting that AI "escapes the anguish of freedom... her acts engage nothing, not even herself."
This is the piece's most provocative claim. The author argues that the material impact of AI—such as the water consumption of server farms—is not the fault of the algorithm but of the corporations that built it, much as agribusiness, not the cows, is responsible for methane emissions. As James puts it, "AI doesn't have the moral freedom that humans have. And this moral freedom is rooted in our social and political orientation to one another; the thing that humans have and AI does not is society." The danger, then, is not that AI will rise up, but that humans will retreat into its "play world," eroding the very society that defines us.
"To the extent that people start to rely on things like chatbots rather than on one another, AI is a form of bad faith that actively erodes human existence, i.e., society."
Bottom Line
James's strongest move is shifting the debate from "can AI create?" to "can AI exist?" by grounding the answer in the political necessity of mutual responsibility. The argument's vulnerability lies in its heavy reliance on dense existentialist theory, which may obscure the immediate, tangible harms of algorithmic bias for readers seeking concrete regulatory fixes. However, the verdict is clear: the real threat of AI is not that it will replace human creativity, but that it offers a seductive escape from the difficult, messy work of assuming our shared existence.