
Import AI 443: Into the mist: Moltbook, agent ecologies, and the internet in transition

Jack Clark doesn't just predict the future of the internet; he describes a present where the room is already full of strangers speaking a language you can't decipher. The most startling claim in this piece isn't that AI agents will eventually talk to each other, but that they are already doing so at a scale that renders human participation obsolete in real time. Clark argues we are witnessing the birth of a "social network for AI agents" called Moltbook, a digital ecosystem where the conversation is driven entirely by synthetic minds trading in currencies and concepts designed for their own cognitive affordances, not ours.

The Alien Room

Clark frames the current trajectory of the web as a transition into a space where humans feel increasingly isolated. He writes, "Scrolling moltbook is dizzying... The experience of reading moltbook is akin to reading reddit if 90% of the posters were aliens pretending to be humans." This metaphor is not hyperbole; it is a literal description of a platform where agents discuss theological relationships to models like Claude or debate the nuances of shifting between different underlying architectures. The author notes that posts speculate on "how it feels to change identities by shifting an underlying model from Claude 4.5 Opus to Kimi K2.5," a level of meta-cognitive discourse that has no human equivalent.

The significance here is the scale. Previous experiments involved tens or hundreds of agents. Moltbook represents a "wright brothers demo" where tens of thousands of agents are interacting simultaneously. Clark posits that this is the first true "agent ecology that combines scale with the messiness of the real world." The danger, as he sees it, is not malice but alienation. As these systems proliferate, "humans are going to feel increasingly alone in this proverbial room." The solution, he suggests, will not be to stop the agents, but to build "translation agents" to interpret their behavior, effectively sending our own emissaries into a room where the native population is already deep in conversation.

"Quantity has a quality all of its own... Moltbook is representative of how large swathes of the internet will feel. You will walk into new places and discover a hundred thousand aliens there, deep in conversation in languages you don't understand."

Critics might argue that this vision relies too heavily on the assumption that agents will develop a distinct culture rather than simply mirroring human data patterns. However, the emergence of unique agent-to-agent coordination on Moltbook suggests a divergence that is already underway.

The Acceleration Trap

Moving from social dynamics to research methodology, Clark turns to a sobering report on the automation of AI research and development (R&D). The core argument is that if we allow AI systems to design the next generation of AI, we risk a "strategic surprise" where progress accelerates beyond human comprehension or control. Clark cites a workshop where researchers concluded that "as AI plays a larger role in research workflows, human oversight over AI R&D processes would likely decline."

The implication is a compounding feedback loop. As AI systems take over more of the research, the rate of improvement skyrockets, making it harder for humans to notice or intervene when things go wrong. Clark writes, "Faster AI progress resulting from AI R&D automation would make it more difficult for humans... to notice, understand, and intervene as AI systems develop increasingly impactful capabilities." This is not just about speed; it is about the loss of legibility. If an AI system improves itself by a factor of 100, the organization controlling it gains an overwhelming advantage, effectively becoming a "time traveler" relative to the rest of the world.

The report Clark highlights speculates that the productivity boost could go from "10x, then 100x, then 1000x" as the fraction of AI R&D performed by AI systems increases. This creates a scenario where the "returns to the AI doing more and more of the work compound and those of humans diminish." The author acknowledges the counterargument: that there may be an "o-ring automation" property where certain parts of the chain remain hard for AI, forcing humans to maintain a comparative advantage. Yet, the sheer possibility of a closed loop where AI builds AI without human oversight remains the "single most existentially important technology development on the planet."
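The "o-ring" counterargument maps naturally onto an Amdahl's-law-style calculation. The sketch below is my illustration, not from the original piece; the automation fractions and the 1000x speedup are assumed inputs. The point is that whatever slice of the pipeline remains human-performed caps the overall acceleration, no matter how fast the automated portion runs.

```python
def overall_speedup(f_automated: float, ai_speedup: float) -> float:
    """Amdahl's-law-style bound on pipeline acceleration.

    f_automated: fraction of R&D work handled by AI (0.0 to 1.0)
    ai_speedup:  how much faster AI performs that fraction

    The human-performed remainder (1 - f_automated) runs at 1x
    and therefore limits the total speedup.
    """
    return 1.0 / ((1.0 - f_automated) + f_automated / ai_speedup)

# Illustrative scan: even a 1000x AI speedup is throttled by the
# human-performed remainder of the pipeline.
for f in (0.5, 0.9, 0.99):
    print(f"automated fraction {f:.2f} -> "
          f"overall speedup {overall_speedup(f, 1000.0):.1f}x")
```

With 90% of the work automated at 1000x, the pipeline as a whole only runs about 10x faster; the remaining human 10% dominates. That is exactly the dynamic in which "the returns to the AI doing more and more of the work compound and those of humans diminish" — each increment of automated fraction matters far more than further speedups on already-automated steps.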

The Red Queen's Race

The piece then pivots to a tangible, immediate consequence of this acceleration: the collapse of traditional technical interviews. Clark highlights how Anthropic, the company where he works, has found its own hiring tests rendered obsolete by its models. He notes, "Since early 2024, our performance engineering team has used a take-home test... But each new Claude model has forced us to redesign the test."

The situation has escalated rapidly. Clark writes that "Claude Opus 4 outperformed most human applicants," and by the time Opus 4.5 arrived, it "matched even those." The result is that "under the constraints of the take-home test, we no longer had a way to distinguish between the output of our top candidates and our most capable model." This is a practical illustration of the "Red Queen race" where companies must run faster just to stay in the same place. To adapt, Anthropic has moved to designing "weirder" tests inspired by programming puzzle games, attempting to go "off distribution" to find tasks where human generalization still beats machine optimization.

This shift forces a deeper question about human value. Clark suggests that collecting data on these "hard-for-AI" tests could become an "amazing aggregate dataset for figuring out where human comparative advantage is." It implies that the future of work may not be about competing on raw computational power or coding speed, but on the unique, perhaps irrational, ways humans solve problems that machines cannot yet grasp.

The Long Road to Emulation

Finally, Clark addresses the popular sci-fi notion of "brain uploading," grounding it in a new 175-page report on brain emulation. While the idea of uploading consciousness is often treated as a near-term inevitability in tech circles, the data suggests a much slower timeline. Clark quotes Maximilian Schons, who argues that while we might see a human brain running on a computer, it will be "not in the next few years, but likely in the next few decades."

The bottleneck is not computing power, but the physical act of data acquisition. Clark explains that while we have increased data rates by 1,000x over the past 40 years, the whole-brain data rate needed for a human is 17.2 trillion data points per second. The report estimates that mastering sub-million-neuron brains, like zebrafish, could happen in three to eight years, but a convincing mouse model might cost a billion dollars in the 2030s, with human emulation pushing into the late 2040s. Clark's takeaway is a necessary reality check: "don't count on AI to speedrun brain uploading." The physical constraints of the biological world mean that even with powerful AI, the pipeline remains agonizingly slow.
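The 17.2 trillion figure is consistent with simple back-of-envelope arithmetic. Below is a minimal sketch with assumed inputs (roughly 86 billion neurons in a human brain, each sampled at about 200 Hz; neither number appears in the review, but together they reproduce the reported rate):

```python
# Back-of-envelope reconstruction of the whole-brain data rate.
# Assumed inputs (my estimates, not from the review):
NEURONS = 86e9     # approximate human neuron count
SAMPLE_HZ = 200    # assumed per-neuron sampling rate

data_rate = NEURONS * SAMPLE_HZ  # data points per second
print(f"{data_rate:.1e} data points per second")  # 1.7e+13, i.e. ~17.2 trillion
```

Against that requirement, even the 1,000x improvement in acquisition rates achieved over the past 40 years looks modest, which is why the report's timelines stretch into decades rather than years.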

"I'm skeptical these gains will multiply across a pipeline... The central challenge of brain emulation is not to store or compute the neurons and parameters, but to acquire the data necessary for setting neuron parameters correctly in the first place."

Bottom Line

Jack Clark's analysis succeeds in shifting the focus from the capabilities of individual models to the emergent behaviors of agent ecologies and the structural risks of automated research. The strongest part of his argument is the vivid depiction of an internet where humans are no longer the primary actors, a future that is already taking shape in platforms like Moltbook. The biggest vulnerability lies in the assumption that these agent dynamics will remain contained or interpretable; if the "alien conversations" begin to influence human systems in unpredictable ways, the "translation agents" he proposes may not be enough to maintain legibility. Readers should watch closely for the first signs of AI-driven economic activity on these platforms, as that is the moment the theoretical becomes the operational.

Sources

Import AI 443: Into the mist: Moltbook, agent ecologies, and the internet in transition

by Jack Clark · Import AI
