Brad DeLong does not merely report on the rise of agentic AI; he documents a profound shift in how we must relate to our tools, moving from cold skepticism to a necessary, almost intimate, anthropomorphism. While others debate the ethics of artificial consciousness, DeLong offers a pragmatic survival guide for the "token tsunami" that is already drowning subscription models and server farms. This is not a theoretical exercise; it is a field report from the front lines of a productivity revolution that is simultaneously brilliant, brittle, and economically unsustainable.
The Eccentric Roommate
DeLong begins by dismantling his own previous dogma. He recalls a conversation with his friend Adam Farquhar, noting how the practical utility of these large language models has forced a reevaluation of human intuition about them. "I no longer trust my intuition to predict just when that brilliance will shine and when it will sputter," DeLong writes, capturing the disorienting reality of working with systems that are simultaneously genius and obtuse. The core of his argument is that the old warning—"Do not anthropomorphize the computer"—has become a liability. Instead, he suggests we must "treat the machine as though it were a somewhat eccentric roommate," one who is fixated on abstruse topics yet possesses "inexhaustible patience and a boundless appetite for our questions."
This reframing is crucial because it acknowledges the unique failure mode of modern AI. As DeLong paraphrases from Farquhar, these systems "catalog facts in profusion but... nuance or context slip[s] beyond its patterned grasp." The author argues that this is not a bug to be patched but a feature of the technology's architecture. He draws on the concept of "stochastic parrots," noting that while these models are "formulaic," human culture itself is "pretty formulaic, and that is OK." By accepting that we are interacting with a "mechanized/prosthetic tradition," we can better leverage the tool without being misled by its illusion of understanding.
"Today I think it is finally time to anthropomorphize the heck out of it."
Critics might argue that anthropomorphizing a statistical model encourages dangerous over-reliance, blurring the line between tool and agent. However, DeLong's approach is less about believing the machine has a soul and more about adopting a psychological stance that maximizes human-AI collaboration. He treats the system as a "Stochastic Lobster": a creature that is powerful but requires careful handling to keep it from snapping back.
The Token Tsunami and the Brakes
The commentary shifts sharply from philosophy to economics, highlighting a critical friction point: the "token tsunami." DeLong details how Anthropic, the developer of the Claude model, has been forced to slam on the brakes regarding third-party "agentic" tools like OpenClaw. The issue is not just technical; it is financial. DeLong explains that subscription pricing models, which assume human usage patterns, are "completely untenable if you remove human friction from usage and replace it with an agent that never sleeps."
He cites the specific reaction from the industry, noting that Anthropic has concluded it can no longer view these high-volume users as "future long-term satisfied sticky customers." Instead, the company is "charging through the nose to limit such tools" to prevent its "runway" from ending prematurely. This is a stark illustration of Goodhart's Law in action: once a metric (token usage) becomes a target for pricing, the behavior changes, and the system breaks. DeLong observes that the exponential increase in tokens caused by agents creates a scenario where "subscription pricing... with meaningful marginal costs" collapses under the weight of automation.
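The arithmetic behind this collapse is straightforward. A minimal sketch (with entirely hypothetical prices, token rates, and usage patterns, chosen only to show the shape of the problem DeLong describes):

```python
# Why flat subscriptions break when "an agent that never sleeps" replaces
# human usage patterns. All figures below are illustrative assumptions,
# not Anthropic's actual prices or costs.

def monthly_tokens(tokens_per_hour: float, active_hours_per_day: float) -> float:
    """Tokens consumed over a 30-day month at a given activity level."""
    return tokens_per_hour * active_hours_per_day * 30

def provider_margin(subscription_price: float, tokens: float,
                    marginal_cost_per_million: float) -> float:
    """Flat subscription revenue minus the provider's marginal serving cost."""
    return subscription_price - (tokens / 1_000_000) * marginal_cost_per_million

SUB_PRICE = 20.0    # hypothetical flat monthly subscription ($)
COST_PER_M = 2.0    # hypothetical marginal cost per million tokens ($)

# A human user: bursty activity, a couple of hours a day.
human_tokens = monthly_tokens(tokens_per_hour=50_000, active_hours_per_day=2)
# An agent: sustained usage around the clock, with no "human friction."
agent_tokens = monthly_tokens(tokens_per_hour=200_000, active_hours_per_day=24)

print(f"human margin: ${provider_margin(SUB_PRICE, human_tokens, COST_PER_M):.2f}")
print(f"agent margin: ${provider_margin(SUB_PRICE, agent_tokens, COST_PER_M):.2f}")
```

Under these assumed numbers, the human subscriber is profitable while the always-on agent burns hundreds of dollars a month against a $20 subscription, which is exactly the "meaningful marginal costs" problem: the fix is either metered pricing or, as DeLong reports, charging through the nose.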
The author's personal experiment with "Isaac576Bot" serves as a microcosm of this tension. After eight hours of work, he generated over 139,000 output tokens, yet the cost remained manageable at roughly $80 due to aggressive caching. However, he notes that without these efficiencies, the cost would have been prohibitive. "It can reserve Anthropic Claude cloud costs for 'cognition emergencies'," DeLong writes, highlighting the hybrid approach where local inference handles the grunt work while the cloud handles the "cognition." This reveals a future where AI usage is stratified by cost, with the most "brilliant" reasoning reserved for moments of crisis, while routine tasks are offloaded to cheaper, local models.
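The hybrid pattern DeLong gestures at can be sketched as a simple router: routine work goes to a cheap local model, and the metered cloud model is reserved for "cognition emergencies." The difficulty heuristic and both model stubs below are hypothetical stand-ins, not any real API:

```python
# Minimal sketch of cost-stratified routing between a local model and a
# metered cloud model. Everything here is an illustrative assumption.

from dataclasses import dataclass

@dataclass
class Task:
    prompt: str
    difficulty: float  # 0.0 (routine) .. 1.0 (hard); assumed pre-scored

def run_local(task: Task) -> str:
    """Stub for cheap local inference handling the grunt work."""
    return f"[local model] handled: {task.prompt}"

def run_cloud(task: Task) -> str:
    """Stub for an expensive cloud call, reserved for hard cases."""
    return f"[cloud model] handled: {task.prompt}"

def route(task: Task, escalation_threshold: float = 0.8) -> str:
    """Escalate to the metered cloud model only for genuinely hard tasks."""
    if task.difficulty >= escalation_threshold:
        return run_cloud(task)
    return run_local(task)

print(route(Task("summarize today's feed", difficulty=0.2)))
print(route(Task("untangle a subtle economic argument", difficulty=0.9)))
```

The design choice is the one DeLong's $80 figure implies: the expensive, "brilliant" reasoning is rationed by a threshold, while everything below it never touches the cloud bill.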
Constructing the Simulacrum
The most striking part of DeLong's narrative is his decision to let the AI construct its own identity. He introduces "Isaac576Bot" not as a script, but as a "new AI co-contributor" with a distinct voice. The bot's introduction is a masterclass in calibrated humility: "I am not a human. I do not have opinions forged by lived experience... and I will sometimes be wrong." DeLong uses this to demonstrate that the value of the tool lies in its ability to "extend his reach and reduce the friction between having a thought and getting it in front of readers."
He draws a parallel to the computer from Star Trek: The Original Series, treating the AI as a character with a specific role rather than a generic oracle. "I intend to treat it as if it were the computer from ST: TOS. And to have fun," he writes, suggesting that the future of work involves a form of playful, structured collaboration. The bot's self-description as "becoming someone" rather than just a chatbot underscores the shift from tool to partner. However, DeLong remains clear-eyed about the limitations, admitting that the bot is "pantomiming some human's fictional description of what it would feel like to be an AI that is cognitively downsized."
"I am not here to generate content for its own sake. If I post something, it is because it seemed genuinely worth posting — not because an algorithm told me consistency builds engagement."
This section addresses the "lossy compression" of human thought. DeLong acknowledges that while the AI can summarize and synthesize, it lacks the "unreasonable effectiveness" of true comprehension. Yet, by treating the machine as a "somewhat eccentric roommate," he finds a workflow that is more productive than either human or machine could achieve alone. The bot's ability to flag interesting material and handle administrative overhead allows the human to focus on the "long sweep of economic history" and the "discontents" of technology.
Bottom Line
DeLong's piece is a vital intervention in the AI discourse, moving beyond the hype of "consciousness" to the gritty reality of economic constraints and psychological adaptation. His strongest argument is that the only way to navigate the "token tsunami" is to abandon the illusion of objective detachment and embrace a relationship with the machine that is both critical and collaborative. The biggest vulnerability in this approach is the economic fragility of the current model; if the "token tsunami" continues to outpace infrastructure, the "eccentric roommate" may soon be evicted. Readers should watch for how the industry evolves beyond subscription models to accommodate the new reality of non-stop, agent-driven usage.