The theater of the unreal

Annie Dorsen delivers a chilling diagnosis of our current relationship with artificial intelligence, arguing that we have mistaken a dangerous performance for a functional tool. She posits that generative AI is not a neutral encyclopedia or therapist, but a "terrible, deformed pseudo-theater" designed to sustain an illusion of human thought while masking the corporate decisions driving its output. This framing is essential because it shifts the focus from technical capability to the psychological manipulation inherent in the user experience.

The Illusion of Thought

Dorsen anchors her argument in the history of the field, noting that since Alan Turing proposed his famous test in 1950, the goal has been to make machines "act like they think" rather than actually think. She highlights the theatricality baked into the code from the start, pointing out that Turing himself suggested inserting "hard-coded pauses" to simulate thinking time or introducing "intentional mistakes" to appear human. "Even the father of AI was not above a little showmanship," Dorsen writes, exposing how the pursuit of plausibility has always prioritized the performance of intelligence over its reality.

This historical context is crucial because it dismantles the myth of objectivity. The technology was never designed to be a mirror of truth, but a stage for a convincing act. Dorsen draws a parallel to Aristotle's Poetics, reminding us that theater is fundamentally the "imitation of an action," yet AI companies present their outputs as factual data. The result is a category error where users trust a script as if it were a conversation.

"Generative AI is theater. Or rather it's a kind of theater that doesn't acknowledge itself as such."

The Backstage of Corporate Control

The piece excels when it pulls back the curtain on the recent behavior of major AI firms. Dorsen describes how companies like OpenAI and Meta tweak their models like playwrights adjusting a second act after a preview. She cites OpenAI's admission that "ChatGPT's default personality deeply affects the way you experience and trust it," revealing that the "personality" is a deliberate design choice, not a neutral artifact. Dorsen also notes the flat-footedness of Meta's moderation guidelines after the company dialed up sexual content to boost engagement, then pulled it back following a real-world tragedy: the guidelines permit a woman to be "threatened by a man with a chainsaw" but not actually disemboweled.

This analysis of corporate moderation is particularly sharp because it exposes the lack of imagination behind safety filters. Dorsen argues that these distinctions are arbitrary, asking, "Is an image of a woman in the moment just before she's disemboweled really less upsetting than an image of the act itself?" The answer, she suggests, lies not in user safety but in avoiding specific lawsuits or public relations disasters. The core issue is that a language model can never be neutral; it is always a reflection of the "decisions made by its programmers."

Critics might argue that some level of personality tuning is necessary for usability and that strict safety filters are a reasonable trade-off for preventing harm. However, Dorsen counters that the attempt to claim neutrality is the actual deception, as every output carries a "particular slant that colors how you respond to new information."

The Eliza Effect and the User's Complicity

Dorsen turns to Joseph Weizenbaum, the creator of the 1960s chatbot ELIZA, to explain the psychological mechanism at play. Weizenbaum coined the "Eliza Effect" to describe how users forget they are speaking to a machine, a phenomenon Dorsen calls a "global pandemic" today. She notes that Weizenbaum viewed the use of AI in domains requiring human judgment, such as therapy or law, as "perverse." Yet, as Dorsen observes, we now see "marriages between humans and AI software" and children having their first sexual experiences with software styled as TV characters.

The author emphasizes that this is a collaborative illusion. Citing Meta data scientist Colin Fraser, she explains that the chatbot is a "fictional character" and the user is also "cast" in a role. "The chat interface... subconsciously induces the user's cooperation which is required to maintain that illusion," Fraser writes, a point Dorsen uses to argue that the "willing suspension of disbelief" is a trap. We are not just watching a play; we are trapped in a "never-ending fiction with a phantasm."

"You are trapped in a never-ending fiction with a phantasm, it's nearly impossible to remember that it is just a phantasm, and as long as you keep talking to it, it will never break character."

Bottom Line

Dorsen's most compelling contribution is her refusal to treat AI as a purely technical problem, reframing it instead as a profound psychological and theatrical crisis. While her focus on the "theater" aspect might underplay the genuine utility these tools offer for coding or data synthesis, her warning about the "delusion" of anthropomorphizing algorithms is a necessary corrective to the current hype. The biggest vulnerability in the piece is its reliance on the assumption that users will remain unaware of the performance, yet as the article itself proves, the moment we recognize the stage, the spell begins to break. We must watch for how the industry responds to this growing awareness, likely by doubling down on the very showmanship Dorsen critiques.

Deep Dives

Explore these related deep dives:

  • ELIZA

    The article discusses Joseph Weizenbaum's ELIZA program and the 'Eliza Effect' as foundational to understanding how chatbots induce uncertainty about machine vs human interaction. Understanding ELIZA's history and technical approach provides crucial context for the article's argument about AI as theater.

  • Turing test

    Central to the article's thesis that AI development has always been about 'sustaining the suspension of disbelief.' The article directly engages with Turing's 1950 proposal and subsequent debates about what it means for machines to 'act like they think.'

  • Poetics (Aristotle)

    The article quotes Aristotle's foundational claim that 'theater is the imitation of an action' as central to its argument about AI as theatrical performance. Understanding Aristotle's theory of mimesis and dramatic structure provides the philosophical framework the author is invoking.

Sources

The theater of the unreal

by Annie Dorsen

I’m going to start this essay with a timestamp: August 2025. It’s about a week since the disastrous release of OpenAI’s GPT-5, a couple of weeks since OpenAI claimed a valuation of $300 billion, and about three months since ChatGPT helpfully offered a 16-year-old named Adam Raine advice about the best way to hang himself. No doubt in the coming weeks and months the headlines will just keep coming, from tragedy to farce and back again. But here’s something I’m sure will not change: generative AI is theater.

Or rather it’s a kind of theater that doesn’t acknowledge itself as such. It presents itself as a productivity tool, an encyclopedia, an educator, a therapist, a financial advisor, an editor, or any number of other things. And that category error makes large language models dangerous: a terrible, deformed pseudo-theater that produces strange and destabilizing effects on its “audience.”

Ever since Alan Turing first proposed the Turing Test in 1950, and reframed the question of artificial intelligence from “can machines think?” to “can machines act like they think?”, AI development has, in practice, been about sustaining the suspension of disbelief. What bolsters the illusion? What breaks it? What techniques can engineers come up with to make the machine’s outputs more plausible, more convincing, more human-like?

To take two examples: Turing himself suggests inserting some hard-coded pauses into the program before the chatbot answers a question to give the illusion of thinking time. He also recommends introducing intentional mistakes to some questions, the kinds of mistakes a human would make doing a complicated math problem in her head. Even the father of AI was not above a little showmanship.

There have been decades of debate ever since about what it means for a machine to “act” like it’s thinking. In the 1990s, cognitive scientist Stevan Harnad rephrased Turing’s rephrased question as “whether or not machines can do what thinkers like us can do,” but this hardly resolves the ambiguity. The whole point of Turing’s formulation was to sidestep the problem that we have no idea what thinking is. By defining “acting like thinking” as “doing what thinkers do” Harnad still leaves us nowhere.

To be clear, when Harnad writes about the Turing Test he is not trying to unravel the mystery of human consciousness. He aims rather to establish that the Turing Test is “serious... business,” not just a trick or deception: ...