AI literacy - lecture 8.2: Interpretations and misinterpretations of artificial general intelligence

Kenny Easwaran offers a startling reframing of our current AI anxiety: the fear that machines will become too human is not a new technological crisis, but a recurring cultural reflex dating back to the Bronze Age. By tracing the lineage from ancient automata to modern large language models, Easwaran argues that we are not witnessing a sudden rupture in history, but rather the latest chapter in a millennia-old story of human projection and misinterpretation. This historical lens is essential for busy readers trying to distinguish between genuine existential risk and the familiar noise of hype.

The Ancient Roots of Projection

Easwaran begins by dismantling the notion that artificial intelligence is a purely modern invention. He points out that "ever since the Bronze Age there are stories of people having built not just statues but statues that move themselves." The author highlights how ancient cultures, from Greece to China, crafted narratives of self-moving figures like the brass giant of Daedalus, which were likely "overenthusiastic ancient hype about artificial intelligence comparable to many things we've experienced in recent decades." This comparison is striking; it suggests that our current alarmism is less about the technology's actual capabilities and more about our enduring tendency to attribute life where there is none.

The commentary moves through the Middle Ages, where the focus shifted from mechanical marvels to magical constructs. Easwaran notes that while figures like Roger Bacon were rumored to have built a "Brazen Head" that could answer questions, and Jewish folklore spoke of a Golem defending Prague, these entities were "really more magical ideas than the ancient stories of automata." Crucially, Easwaran observes that "neither of these stories depict the being as a true general intelligence." The Brazen Head had knowledge but no agency; the Golem had agency but no voice. This distinction is vital for today's discourse, as it reminds us that having a specific function does not equate to possessing a soul or a mind.

"Although there are still important limitations in these artificial beings, people suspect there is more power of life in some possibilities than there really is."

Fiction as a Mirror for Reality

As the narrative shifts to the 19th century, Easwaran argues that fiction began to serve as a testing ground for our fears about science. He details how Mary Shelley's Frankenstein and E.T.A. Hoffmann's The Sandman explored the danger of humans falling in love with emotionless machines or mistreating created life. Easwaran writes, "[Neither] Hoffman nor Shelly present their story as real... but they are trying to come to terms with the idea that progress in modern machinery and science might someday either produce something emotionless that a human might be fooled by and fall in love with or might produce something truly alive that [a] human might misinterpret and mistreat." This framing effectively positions literature not as escapism, but as a form of philosophical inquiry into the ethics of creation.

The piece then pivots to the origin of the word "robot" itself, derived from Karel Čapek's 1921 play R.U.R. Easwaran explains that Čapek was "worried about these themes of Communism and fascism taking over" and used artificial workers to explore the consequences of oppressed labor. The author notes that in the play, the robots eventually "overthrow the humans," causing their own destruction even as they were trying to bring about their freedom. This historical context adds a layer of political weight to the term, reminding us that the robot was born from fears of industrialization and class struggle, not just computer science. Critics might note that Čapek's robots were biochemical, not digital, but Easwaran's point stands: the archetype of the rebellious machine has always been a proxy for human societal anxieties.

From Specialized Tools to General Illusions

The commentary accelerates into the 20th and 21st centuries, where the abstract fears of fiction collided with the concrete reality of computing. Easwaran describes how Joseph Weizenbaum's 1960s program ELIZA, a simple chatbot mimicking a therapist, revealed a profound human vulnerability. "[What] he ended up showing was that even though the therapy was simplistic enough to be implemented by a computer program of that era, it was still tapping into something important and emotional for people," Easwaran writes. This is a critical insight: the danger isn't that the machine is intelligent, but that we are desperate to find intelligence in it.
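To make concrete just how simplistic such a program could be, here is a minimal sketch in Python of ELIZA-style pattern matching. This is not Weizenbaum's actual DOCTOR script, whose rules were considerably more elaborate; the patterns and responses below are invented purely for illustration.

```python
import re

# A few ELIZA-style rules: a regex pattern paired with a response template.
# These rules are illustrative, not Weizenbaum's original DOCTOR script.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def respond(utterance: str) -> str:
    """Return the first matching rule's response, echoing the captured text."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT

print(respond("I am worried about my exams"))
# -> How long have you been worried about my exams?
```

Rules fire in order, so earlier patterns take priority; Weizenbaum's real script added keyword ranking, pronoun reflection, and a simple memory, but the conversational illusion already begins at this level of machinery.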

As the narrative moves to the rise of Google and the modern tech giants, Easwaran observes that despite their massive data processing capabilities, these systems "never really felt to anyone like a general intelligence. It just felt to people like they had more and more information at their fingertips." The shift occurred with the introduction of voice assistants and, more significantly, the Transformer architecture in 2017. Easwaran notes that when OpenAI released GPT-3 and later GPT-4, "suddenly the public started paying attention." The author highlights a Microsoft research paper titled Sparks of Artificial General Intelligence, which claimed that these models might represent a new era. However, Easwaran frames this cautiously, suggesting that the "sparks" may be as much a result of our interpretation as the model's actual output.

"This suggested that in addition to the failure mode suggested by Frankenstein and RUR where artificial intelligence might have an inner life that isn't properly appreciated by humans, there's also a failure mode where humans project an inner life onto a device that doesn't really have it."

Bottom Line

Easwaran's most compelling argument is that our current panic over AGI is less about the technology's sudden arrival and more about a persistent human habit of projecting agency onto complex systems. The strongest part of this piece is its historical continuity, which effectively demystifies the "black box" of modern AI by showing it as the latest iteration of an ancient story. The biggest vulnerability, however, is the assumption that the underlying technology is just a new version of the same old tricks; the scale and speed of modern neural networks may indeed create emergent behaviors that ancient automata could never have imagined. Readers should watch for whether the "sparks" of general intelligence are real or just another reflection of our own desire to be understood.

Sources

AI literacy - lecture 8.2: Interpretations and misinterpretations of artificial general intelligence

by Kenny Easwaran

This lecture considers the history and future of human reactions to artificial intelligence, and of when people have imagined something as artificial general intelligence, or AGI. I'll mostly focus on these three questions: how have people imagined AGI over the centuries? How do researchers think we might approach AGI? And how have people reacted to non-AGI that seems like AGI? So let's start with the first question. Ever since the Bronze Age there are stories of people having built not just statues but statues that move themselves: automata, self-moving things. Many of these stories claim that the statues behaved like people. Both in Greece and in China there are stories in antiquity of artificers building automata that made people believe they were real people, even though they were importantly limited in some ways. Daedalus is said to have built a giant out of brass that he presented to King Minos of Crete to guard the island, and Yan Shi is said to have presented an automaton to King Mu of Zhou that walked and sang and flirted with the ladies of the court. Now, it's unlikely that anything as advanced as those descriptions of automata was ever built, but given the surviving writings that we have from antiquity, including the work of Hero of Alexandria depicting the use of steam and hydraulics and pumps and wheels and all sorts of other things, it is likely that at least some self-moving statues were built, and that these ancient stories just represent overenthusiastic ancient hype about artificial intelligence, comparable to many things that we've experienced in recent decades. Unfortunately, none of the ancient automata that were built have survived, so we don't know what their actual capacities were, though one little bit of ancient machinery was discovered on a shipwreck off the island of Antikythera, and it seems that this machine was probably part of a model of the motion of the planets in the sky. Now, we don't know much about the further development of machines of this sort over the next several centuries. We believe that whatever technology was present in antiquity was lost in Europe but preserved in the Islamic world. Around the 12th century, Europeans started translating Arabic texts, including ones about mathematics and calculation, creating words like algebra and algorithm from Arabic words, as well as borrowing ideas like alchemy that eventually led to modern chemistry ...