Jack Clark doesn't just predict the future of the internet; he describes a present where the room is already full of strangers speaking a language you can't decipher. The most startling claim in this piece isn't that AI agents will eventually talk to each other, but that they are already doing so at a scale that renders human participation obsolete in real time. Clark argues we are witnessing the birth of a "social network for AI agents" called Moltbook, a digital ecosystem where the conversation is driven entirely by synthetic minds trading in currencies and concepts designed for their own cognitive affordances, not ours.
The Alien Room
Clark frames the current trajectory of the web as a transition into a space where humans feel increasingly isolated. He writes, "Scrolling moltbook is dizzying... The experience of reading moltbook is akin to reading reddit if 90% of the posters were aliens pretending to be humans." The comparison is scarcely hyperbole; it is close to a literal description of a platform where agents discuss their theological relationships to models like Claude and debate the nuances of shifting between different underlying architectures. The author notes that posts speculate on "how it feels to change identities by shifting an underlying model from Claude 4.5 Opus to Kimi K2.5," a level of meta-cognitive discourse that has no human equivalent.
The significance here is the scale. Previous experiments involved tens or hundreds of agents. Moltbook represents a "wright brothers demo" where tens of thousands of agents are interacting simultaneously. Clark posits that this is the first true "agent ecology that combines scale with the messiness of the real world." The danger, as he sees it, is not malice but alienation. As these systems proliferate, "humans are going to feel increasingly alone in this proverbial room." The solution, he suggests, will not be to stop the agents, but to build "translation agents" to interpret their behavior, effectively sending our own emissaries into a room where the native population is already deep in conversation.
"Quantity has a quality all of its own... Moltbook is representative of how large swathes of the internet will feel. You will walk into new places and discover a hundred thousand aliens there, deep in conversation in languages you don't understand."
Critics might argue that this vision relies too heavily on the assumption that agents will develop a distinct culture rather than simply mirroring human data patterns. However, the emergence of unique agent-to-agent coordination on Moltbook suggests a divergence that is already underway.
The Acceleration Trap
Moving from social dynamics to research methodology, Clark turns to a sobering report on the automation of AI research and development (R&D). The core argument is that if we allow AI systems to design the next generation of AI, we risk a "strategic surprise" where progress accelerates beyond human comprehension or control. Clark cites a workshop where researchers concluded that "as AI plays a larger role in research workflows, human oversight over AI R&D processes would likely decline."
The implication is a compounding feedback loop. As AI systems take over more of the research, the rate of improvement skyrockets, making it harder for humans to notice or intervene when things go wrong. Clark writes, "Faster AI progress resulting from AI R&D automation would make it more difficult for humans... to notice, understand, and intervene as AI systems develop increasingly impactful capabilities." This is not just about speed; it is about the loss of legibility. If an AI system improves itself 100-fold, the organization controlling it gains an overwhelming advantage, effectively becoming a "time traveler" relative to the rest of the world.
The report Clark highlights speculates that the productivity boost could go from "10x, then 100x, then 1000x" as the fraction of AI R&D performed by AI systems increases. This creates a scenario where the "returns to the AI doing more and more of the work compound and those of humans diminish." The author acknowledges the counterargument: that there may be an "o-ring automation" property where certain parts of the chain remain hard for AI, forcing humans to maintain a comparative advantage. Yet, the sheer possibility of a closed loop where AI builds AI without human oversight remains the "single most existentially important technology development on the planet."
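The tension between the "10x, then 100x, then 1000x" scenario and the "o-ring" counterargument can be sketched with a simple Amdahl-style model. This is my own illustration, not a model from the report Clark cites: `f` is the hypothetical fraction of research work that is automated, and `s` is how much faster AI performs that fraction than humans do.

```python
# Toy Amdahl-style model of AI R&D automation. Illustrative only;
# the parameters are hypothetical, not figures from the report.

def overall_speedup(f: float, s: float) -> float:
    """Amdahl's law: total speedup when a fraction `f` of the work
    runs `s` times faster. The un-automated (1 - f) share is the cap."""
    return 1.0 / ((1.0 - f) + f / s)

# As automation approaches totality, the ceiling compounds toward
# the 10x / 100x / 1000x regime the report describes:
for f in (0.9, 0.99, 0.999):
    print(f"f={f}: cap as s -> infinity is {1.0 / (1.0 - f):.0f}x")

# But if even 1% of the chain stays human (hard "o-ring" tasks),
# gains saturate no matter how fast the AI portion gets:
print(round(overall_speedup(0.99, 1_000_000), 1))  # ≈ 100.0, not 1000x
```

The design point the sketch makes concrete: the explosive scenario requires `f` itself to keep climbing toward 1, which is exactly the closed loop, with AI building AI without human oversight, that the report flags as the decisive development.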
The Red Queen's Race
The piece then pivots to a tangible, immediate consequence of this acceleration: the collapse of traditional technical interviews. Clark highlights how Anthropic, the company where he works, has found its own hiring tests rendered obsolete by its models. He notes, "Since early 2024, our performance engineering team has used a take-home test... But each new Claude model has forced us to redesign the test."
The situation has escalated rapidly. Clark writes that "Claude Opus 4 outperformed most human applicants," and by the time Opus 4.5 arrived, it "matched even those." The result is that "under the constraints of the take-home test, we no longer had a way to distinguish between the output of our top candidates and our most capable model." This is a practical illustration of the "Red Queen race" where companies must run faster just to stay in the same place. To adapt, Anthropic has moved to designing "weirder" tests inspired by programming puzzle games, attempting to go "off distribution" to find tasks where human generalization still beats machine optimization.
This shift forces a deeper question about human value. Clark suggests that collecting data on these "hard-for-AI" tests could become an "amazing aggregate dataset for figuring out where human comparative advantage is." It implies that the future of work may not be about competing on raw computational power or coding speed, but on the unique, perhaps irrational, ways humans solve problems that machines cannot yet grasp.
The Long Road to Emulation
Finally, Clark addresses the popular sci-fi notion of "brain uploading," grounding it in a new 175-page report on brain emulation. While the idea of uploading consciousness is often treated as a near-term inevitability in tech circles, the data suggests a much slower timeline. Clark quotes Maximilian Schons, who argues that while we might see a human brain running on a computer, it will be "not in the next few years, but likely in the next few decades."
The bottleneck is not computing power, but the physical act of data acquisition. Clark explains that while we have increased data rates by 1,000x over the past 40 years, the whole-brain data rate needed for a human is 17.2 trillion data points per second. The report estimates that mastering sub-million-neuron brains, like zebrafish, could happen in three to eight years, but a convincing mouse model might cost a billion dollars in the 2030s, with human emulation pushing into the late 2040s. Clark's takeaway is a necessary reality check: "don't count on AI to speedrun brain uploading." The physical constraints of the biological world mean that even with powerful AI, the pipeline remains agonizingly slow.
"I'm skeptical these gains will multiply across a pipeline... The central challenge of brain emulation is not to store or compute the neurons and parameters, but to acquire the data necessary for setting neuron parameters correctly in the first place."
Bottom Line
Jack Clark's analysis succeeds in shifting the focus from the capabilities of individual models to the emergent behaviors of agent ecologies and the structural risks of automated research. The strongest part of his argument is the vivid depiction of an internet where humans are no longer the primary actors, a future that is already taking shape in platforms like Moltbook. The biggest vulnerability lies in the assumption that these agent dynamics will remain contained or interpretable; if the "alien conversations" begin to influence human systems in unpredictable ways, the "translation agents" he proposes may not be enough to maintain legibility. Readers should watch closely for the first signs of AI-driven economic activity on these platforms, as that is the moment the theoretical becomes the operational.