Kenny Easwaran offers a startling reframing of our current AI anxiety: the fear that machines will become too human is not a new technological crisis, but a recurring cultural reflex dating back to the Bronze Age. By tracing the lineage from ancient automata to modern large language models, Easwaran argues that we are not witnessing a sudden rupture in history, but rather the latest chapter in a millennia-old story of human projection and misinterpretation. This historical lens is essential for busy readers trying to distinguish between genuine existential risk and the familiar noise of hype.
The Ancient Roots of Projection
Easwaran begins by dismantling the notion that artificial intelligence is a purely modern invention. He points out that "ever since the Bronze Age there are stories of people having built not just statues but statues that move themselves." The author highlights how ancient cultures, from Greece to China, crafted narratives of self-moving figures like Talos, the bronze giant of Crete, which were likely "overenthusiastic ancient hype about artificial intelligence comparable to many things we've experienced in recent decades." This comparison is striking; it suggests that our current alarmism is less about the technology's actual capabilities and more about our enduring tendency to attribute life where there is none.
The commentary moves through the Middle Ages, where the focus shifted from mechanical marvels to magical constructs. Easwaran notes that while figures like Roger Bacon were rumored to have built a "Brazen Head" that could answer questions, and Jewish folklore spoke of a Golem defending Prague, these entities were "really more magical ideas than the ancient stories of automata." Crucially, Easwaran observes that "neither of these stories depict the being as a true general intelligence." The Brazen Head had knowledge but no agency; the Golem had agency but no voice. This distinction is vital for today's discourse, as it reminds us that having a specific function does not equate to possessing a soul or a mind.
"Although there are still important limitations in these artificial beings, people suspect there is more power of life in some possibilities than there really is."
Fiction as a Mirror for Reality
As the narrative shifts to the 19th century, Easwaran argues that fiction began to serve as a testing ground for our fears about science. He details how Mary Shelley's Frankenstein and E.T.A. Hoffmann's The Sandman explored the danger of humans falling in love with emotionless machines or mistreating created life. Easwaran writes that neither "Hoffmann nor Shelley present their story as real... but they are trying to come to terms with the idea that progress in modern machinery and science might someday either produce something emotionless that a human might be fooled by and fall in love with, or might produce something truly alive that humans might misinterpret and mistreat." This framing effectively positions literature not as escapism, but as a form of philosophical inquiry into the ethics of creation.
The piece then pivots to the origin of the word "robot" itself, derived from Karel Čapek's 1921 play R.U.R. Easwaran explains that Čapek was "worried about these themes of Communism and fascism taking over" and used artificial workers to explore the consequences of oppressed labor. The author notes that in the play, the robots "overthrow the humans," eventually causing their own deaths even as they fought to win their freedom. This historical context adds a layer of political weight to the term, reminding us that the robot was born from fears of industrialization and class struggle, not just computer science. Critics might note that Čapek's robots were biochemical, not digital, but Easwaran's point stands: the archetype of the rebellious machine has always been a proxy for human societal anxieties.
From Specialized Tools to General Illusions
The commentary accelerates into the 20th and 21st centuries, where the abstract fears of fiction collided with the concrete reality of computing. Easwaran describes how Joseph Weizenbaum's 1960s program ELIZA, a simple chatbot mimicking a therapist, revealed a profound human vulnerability. "What he ended up showing was that even though the therapy was simplistic enough to be implemented by a computer program of that era, it was still tapping into something important and emotional for people," Easwaran writes. This is a critical insight: the danger isn't that the machine is intelligent, but that we are desperate to find intelligence in it.
As the narrative moves to the rise of Google and the modern tech giants, Easwaran observes that despite their massive data processing capabilities, these systems "never really felt to anyone like a general intelligence. It just felt to people like they had more and more information at their fingertips." The shift occurred with the introduction of voice assistants and, more significantly, the Transformer architecture in 2017. Easwaran notes that when OpenAI released GPT-3 and later GPT-4, "suddenly the public started paying attention." The author highlights a Microsoft research paper titled Sparks of Artificial General Intelligence, which claimed that these models might represent a new era. However, Easwaran frames this cautiously, suggesting that the "sparks" may be as much a result of our interpretation as the model's actual output.
"This suggested that in addition to the failure mode suggested by Frankenstein and RUR where artificial intelligence might have an inner life that isn't properly appreciated by humans, there's also a failure mode where humans project an inner life onto a device that doesn't really have it."
Bottom Line
Easwaran's most compelling argument is that our current panic over AGI is less about the technology's sudden arrival and more about a persistent human habit of projecting agency onto complex systems. The strongest part of this piece is its historical continuity, which effectively demystifies the "black box" of modern AI by showing it as the latest iteration of an ancient story. The biggest vulnerability, however, is the assumption that the underlying technology is just a new version of the same old tricks; the scale and speed of modern neural networks may create emergent behaviors that the tellers of ancient automata stories could never have imagined. Readers should watch for whether the "sparks" of general intelligence are real or just another reflection of our own desire to be understood.