
Ken Liu on AI and Freedom

This piece stands out not for predicting a robot uprising, but for dismantling the very anxiety that fuels it. Jordan Schneider, hosting the conversation with author Ken Liu, guides a discussion that reframes artificial intelligence not as an alien invader, but as the latest, most potent iteration of a distinctly human trait: the externalization of our minds. While the public discourse fixates on whether machines will replace us, Liu argues the far more profound risk is that we are already training ourselves to behave like them.

The Myth of the Machine

The conversation opens by challenging the binary between "human" and "technological." Liu, whose work ranges from the Dandelion Dynasty to the recent techno-thriller All That We See or Seem, insists that technology is not an external force acting upon us. "Technology is the most human thing we do — humans have always externalized our minds into the world and then allowed those creations to reshape who we are," Liu asserts. This framing is crucial because it shifts the debate from defense to integration. We are not fighting a war against our tools; we are in a co-evolutionary dance with them.


Liu draws a sharp distinction between the marketing buzzwords of the industry and the philosophical reality. He notes that "Homo sapiens had always externalized their minds into the world, oozing books, drawings, plans, recordings, the same way honeybees made their minds visible in the form of wax comb and sweet honey." The modern twist, he suggests, is simply the scale and speed of this process. In his novel, characters rely on "codedaemons, bug-genies, patchsprites, scriptpixies" to function, mirroring our current reliance on large language models for everything from coding to drafting emails.

"You cannot understand human nature without understanding human technology — it's literally a tangible substantiation of what is inside our minds."

This perspective effectively neutralizes the fear that AI is "unnatural." If our tools are just extensions of our cognition, then the AI is merely a new kind of mirror. However, a counterargument worth considering is that while a pen or a book is a passive tool, an AI model is an active agent that makes probabilistic decisions on our behalf. The shift from "tool" to "agent" introduces a level of autonomy that the "externalization" metaphor might gloss over, potentially obscuring the loss of human agency in critical decision-making loops.

Intelligence Without Consciousness

Perhaps the most provocative claim in the dialogue is the separation of intelligence from consciousness. Liu argues that we are witnessing a historical anomaly where "intelligence and consciousness are not the same thing." He dismisses the reductionist view that large language models are "just a very powerful autocomplete" as technically true but meaningless. "If something can write essays, pass the bar exam, and get a perfect score on the SATs, to say it's not intelligent is a nonsensical declaration," Liu states.

This distinction forces a re-evaluation of what we value in human interaction. We are accustomed to the assumption that high intelligence implies a "mind behind the intelligent acts," a will, and a subjectivity. Liu points out that we are now confronting a system that is undeniably intelligent yet entirely devoid of subjective experience. This aligns with the mythological structure of his work, where he uses terms like "jinn" to describe AI agents, drawing from the etymology of "cotton gin" to suggest that these entities are engines of desire and dream, not conscious beings.

The conversation touches on Roland Barthes' concept of the "death of the author," suggesting that large language models are the ultimate realization of this theory: "The large language model is a substantiation of that imagined dictionary of all writings. It's language coming to life." In this view, the AI is not a creator but a conduit, a "pluribus" or multi-mind channeling the entire corpus of human writing.

"The real issue is this — if something can write essays, pass the bar exam, and get a perfect score on the SATs, to say it's not intelligent is a nonsensical declaration. It's clearly intelligent, but it's not conscious."

This argument holds significant weight in demystifying AI, yet it risks underestimating the danger of "intelligence without will." A system that can mimic human reasoning without human empathy or moral compass could be more dangerous than one that is merely "dumb" but conscious. The lack of consciousness does not guarantee safety; it might, in fact, remove the only barrier to unchecked optimization.

The Age of Slop and the Human Spark

The discussion then pivots to the cultural impact of AI, specifically the fear of "slop" drowning out genuine art. Liu contextualizes this anxiety by reminding us that "we are already living in a world of slop — not AI-generated, but mass-produced slop." He invokes Walter Benjamin's "age of mechanical reproduction," noting that the invention of photography flooded the world with images, most of which were trivial, yet it did not destroy the value of human art.

Liu argues that the distinction in the future won't be between high quality and low quality, but between "desire-fulfilling machines and artists who draw from the collective unconscious." He suggests that science fiction is not prophecy but mythology, and that "ideologies are just mythology's cheaper, hack cousins." The enduring power of writers like Orwell or Le Guin lies not in their predictive accuracy, but in their ability to provide "metaphors powerful enough to think with across generations."

This is where the context of Liu's broader fiction subtly strengthens the argument. Julia Z, the protagonist of All That We See or Seem, is cast in the mold of Jane Whitefield or Clarice Starling — characters who navigate surveillance and the darkness of the human psyche on instinct and moral conviction — and Liu's fiction treats that human element as the anchor. Even in a world of "AI slop," the audience will still crave the connection to a human who "lives and breathes and bleeds."

"AI 'slop' won't stop humans from making art that matters, and the real distinction isn't quality versus slop, but between desire-fulfilling machines and artists who draw from the collective unconscious."

Critics might argue that the economic pressure of AI-generated content could make human art a luxury good, inaccessible to the masses, effectively silencing the "collective unconscious" for everyone but the elite. Liu's optimism about the resilience of human artistry is compelling, but it may underestimate the sheer volume and distribution power of algorithmic content.

Bottom Line

Jordan Schneider and Ken Liu offer a necessary corrective to the panic surrounding artificial intelligence, grounding the debate in the long history of human technological evolution. The strongest part of their argument is the reframing of AI not as a replacement for humanity, but as a mirror that forces us to confront the separation of intelligence from consciousness. The biggest vulnerability in this optimistic view is the potential for these "desire-fulfilling machines" to reshape human behavior so thoroughly that the "human" element Liu cherishes becomes a relic, rather than a resilient constant. Readers should watch for how this philosophical distinction plays out in policy: if we accept that machines can be intelligent without being conscious, how do we regulate systems that can think but cannot feel?


Sources

Ken Liu on AI and Freedom

by Jordan Schneider · ChinaTalk

Ken Liu graces ChinaTalk with his presence. He is the author of the Dandelion Dynasty silkpunk fantasy series and a brilliant short fiction writer — one of his stories was recently adapted into Sam Altman’s favorite show, Pantheon. We all know his translation work on the first and third volumes of the Three-Body Problem trilogy, but even better was his absolutely brilliant translation and commentary of the Dao De Jing. As much as I hoped that project would get him fully on the classical Chinese translation train, he followed it up with a very different direction — a techno-AI thriller, All That We See or Seem, released late last year. Irene Zhang of ChinaTalk joins us to co-host.

In a wide-ranging conversation, Ken Liu argues that:

Technology is the most human thing we do — humans have always externalized our minds into the world and then allowed those creations to reshape who we are.

AI “slop” won’t stop humans from making art that matters, and the real distinction isn’t quality versus slop, but between desire-fulfilling machines and artists who draw from the collective unconscious.

The deeper danger of AI isn’t machines replacing humans, but systems that train humans to behave like machines.

Science fiction isn’t prophecy, but mythology — and ideologies are just mythology’s cheaper, hack cousins. Orwell, Shelley, Tolkien, and Le Guin endure not because they predicted the future, but because they gave us metaphors powerful enough to think with across generations.

Large language models are intelligent, but can’t be wise. Drawing on Laozi and Zhuangzi, Ken explains why everything that truly matters lies beyond language.


Technology as Human Expression

Jordan Schneider: We’re living in the age of Claude Code, and I want to start with a passage you wrote. Why don’t you set it up and read this vision of future coding and writing?

Ken Liu: Let me start by saying what the book is actually about. All That We See or Seem is a techno-thriller in the sense that none of the technology mentioned is really speculative — it’s all either already here or very possible, just needing to be scaled up slightly.

Julia Z is a hacker, a hero in the mold of Clarice Starling or Jane Whitefield — someone with a very strong moral compass and a very dark past. She’s trying to escape that past, but events ...