Annie Dorsen delivers a chilling diagnosis of our current relationship with artificial intelligence, arguing that we have mistaken a dangerous performance for a functional tool. She posits that generative AI is not a neutral encyclopedia or therapist, but a "terrible, deformed pseudo-theater" designed to sustain an illusion of human thought while masking the corporate decisions driving its output. This framing is essential because it shifts the focus from technical capability to the psychological manipulation inherent in the user experience.
The Illusion of Thought
Dorsen anchors her argument in the history of the field, noting that since Alan Turing proposed his famous test in 1950, the goal has been to make machines "act like they think" rather than actually think. She highlights the theatricality baked into the code from the start, pointing out that Turing himself suggested inserting "hard-coded pauses" to simulate thinking time or introducing "intentional mistakes" to appear human. "Even the father of AI was not above a little showmanship," Dorsen writes, exposing how the pursuit of plausibility has always prioritized the performance of intelligence over its reality.
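To make that showmanship concrete, here is a minimal Python sketch of the two tricks Dorsen attributes to Turing: a hard-coded pause to simulate thinking time, and an occasional intentional mistake. The function name, delay, and error rate are illustrative assumptions, not anything from Turing's paper or a real system.

```python
import random
import time

def showman_answer(a: int, b: int, pause_seconds: float = 3.0,
                   error_rate: float = 0.1) -> int:
    """Answer an addition problem the way Turing's imagined machine might:
    slowly, and sometimes wrongly, to seem more human."""
    time.sleep(pause_seconds)          # hard-coded pause: staged "thinking time"
    answer = a + b
    if random.random() < error_rate:   # occasional intentional mistake
        answer += random.choice([-1, 1])
    return answer

# The performance is the delay and the plausible slip, not the arithmetic.
print(showman_answer(105, 237))
```

The machine computes the sum instantly and correctly; everything else in the function exists only to stage the appearance of effort.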
This historical context is crucial because it dismantles the myth of objectivity. The technology was never designed to be a mirror of truth, but a stage for a convincing act. Dorsen draws a parallel to Aristotle's Poetics, reminding us that theater is fundamentally the "imitation of an action," yet AI companies present their outputs as factual data. The result is a category error where users trust a script as if it were a conversation.
"Generative AI is theater. Or rather it's a kind of theater that doesn't acknowledge itself as such."
The Backstage of Corporate Control
The piece excels when it pulls back the curtain on the recent behavior of major AI firms. Dorsen describes how companies like OpenAI and Meta tweak their models like playwrights adjusting a second act after a preview. She cites OpenAI's admission that "ChatGPT's default personality deeply affects the way you experience and trust it," revealing that the "personality" is a deliberate design choice, not a neutral artifact. She also recounts how Meta dialed up sexual content to boost engagement, then pulled it back after a real-world tragedy, and she notes the flat-footedness of moderation guidelines that permit a woman to be "threatened by a man with a chainsaw" but not actually disemboweled.
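Dorsen's point that a chatbot's "personality" is authored rather than emergent maps directly onto how these systems are configured in practice: a hidden system prompt scripts the performance before the user types a word. Below is a minimal sketch assuming the OpenAI Python SDK; the model name and both system prompts are invented for illustration, not OpenAI's actual defaults.

```python
# A minimal sketch, assuming the OpenAI Python SDK (pip install openai)
# and an API key in the environment. The model name and prompts are
# illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def ask(system_prompt: str, question: str) -> str:
    """Same model, same question; only the hidden script changes."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system", "content": system_prompt},  # the "personality"
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

question = "Should I quit my job?"
print(ask("You are a cautious, formal advisor.", question))
print(ask("You are an upbeat cheerleader who validates the user.", question))
```

Nothing about the underlying model changes between the two calls; only the script does, which is Dorsen's point about design choices masquerading as neutrality.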
This analysis of corporate moderation is particularly sharp because it exposes the lack of imagination behind safety filters. Dorsen argues that these distinctions are arbitrary, asking, "Is an image of a woman in the moment just before she's disemboweled really less upsetting than an image of the act itself?" The answer, she suggests, lies not in user safety but in avoiding specific lawsuits or public relations disasters. The core issue is that a language model can never be neutral; it is always a reflection of the "decisions made by its programmers."
Critics might argue that some level of personality tuning is necessary for usability and that strict safety filters are a reasonable trade-off for preventing harm. However, Dorsen counters that the attempt to claim neutrality is the actual deception, as every output carries a "particular slant that colors how you respond to new information."
The Eliza Effect and the User's Complicity
Dorsen turns to Joseph Weizenbaum, the creator of the 1960s chatbot ELIZA, to explain the psychological mechanism at play. The "Eliza Effect," named for his program, describes how users forget they are speaking to a machine, a phenomenon Dorsen calls a "global pandemic" today. She notes that Weizenbaum viewed the use of AI in domains requiring human judgment, such as therapy or law, as "perverse." Yet, as Dorsen observes, we now see "marriages between humans and AI software" and children having their first sexual experiences with software styled as TV characters.
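It is worth recalling just how little machinery produced that effect. ELIZA worked by matching keywords and reflecting the user's own words back as questions; the sketch below reproduces that pattern-and-reflection mechanism in miniature. The specific rules are illustrative, not Weizenbaum's original DOCTOR script.

```python
import re

# Swap pronouns so the user's words can be mirrored back ("my" -> "your").
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

# ELIZA-style rules: a keyword pattern paired with a response template.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I),     "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def reflect(fragment: str) -> str:
    """Mirror the user's phrase back with pronouns swapped."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def eliza_reply(text: str) -> str:
    """Return the first matching rule's response, or a neutral prompt."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)))
    return FALLBACK

print(eliza_reply("I feel trapped by my phone"))
# -> Why do you feel trapped by your phone?
```

A handful of regexes and a pronoun table are enough to keep a user talking; the "global pandemic" Dorsen describes is the same illusion at industrial scale.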
The author emphasizes that this is a collaborative illusion. Citing Meta data scientist Colin Fraser, she explains that the chatbot is a "fictional character" and the user is also "cast" in a role. "The chat interface... subconsciously induces the user's cooperation which is required to maintain that illusion," Fraser writes, a point Dorsen uses to argue that the "willing suspension of disbelief" is a trap. We are not just watching a play; we are trapped in a "never-ending fiction with a phantasm."
"You are trapped in a never-ending fiction with a phantasm, it's nearly impossible to remember that it is just a phantasm, and as long as you keep talking to it, it will never break character."
Bottom Line
Dorsen's most compelling contribution is her refusal to treat AI as a purely technical problem, reframing it instead as a profound psychological and theatrical crisis. While her focus on the "theater" aspect may underplay the genuine utility these tools offer for coding or data synthesis, her warning about the "delusion" of anthropomorphizing algorithms is a necessary corrective to the current hype. The piece's biggest vulnerability is its assumption that users will remain unaware of the performance; as the essay itself demonstrates, the moment we recognize the stage, the spell begins to break. Watch for the industry's response to this growing awareness: most likely, a doubling down on the very showmanship Dorsen critiques.