Import AI 441: My agents are working. Are yours?

Jack Clark doesn't just report on artificial intelligence; he documents a fundamental shift in the human relationship with work, arguing that we are already living in a world where our cognitive output is being multiplied by silent, tireless digital colleagues. The most startling claim here isn't that AI is getting smarter, but that the most capable users already feel guilty when they fail to delegate work to machines during their leisure time. This isn't science fiction; it is a report from the front lines of a productivity revolution that is reshaping the economy before we've even built the institutions to manage it.

The Multiplication of Labor

Clark opens with a vivid, almost surreal narrative of hiking while his agents scour thousands of research papers, compiling data and cross-referencing trends in his absence. He writes, "I am well calibrated about how much work this is, because besides working at Anthropic my weekly 'hobby' is reading and summarizing and analyzing research papers - exactly the kind of work that these agents had done for me." This personal calibration gives his argument weight; he isn't a distant observer but a practitioner who knows the grind. He describes these agents as "special operations ghosts who hadn't had a job in a while, bouncing up and down on their disembodied feet in the ethereal world, waiting to get the API call and go out on a mission."

The psychological impact of this efficiency is profound. Clark notes a strange new guilt: "It's common now for me to feel like I'm being lazy when I'm with my family. Not because I feel as though I should be working, but rather that I feel guilty that I haven't tasked some AI system to do work for me while I play with Magna-Tiles with my toddler." This reframes the traditional anxiety about automation. It is no longer about losing a job, but about the moral imperative to leverage every available tool to maximize human potential. The argument is compelling because it highlights a rapid acceleration; as Clark puts it, "These agents that work for me are multiplying me significantly. And this is the dumbest they'll ever be."

Critics might argue that this vision of a "fleet of minds" ignores the potential for systemic fragility or the loss of human intuition when we outsource too much cognitive labor. However, Clark's point is that the technology is already here, and the question is no longer if we will scale, but how we will manage the transition before the next, more capable iteration arrives.

Poisoning the Well and the Ecology of the Internet

The piece then pivots to the darker side of this new digital ecology: the emergence of adversarial tools designed to corrupt AI training data. Clark details "Poison Fountain," a service created by anti-AI activists to feed "correct-seeming but subtly incorrect blobs of text" to web crawlers. The motivation is explicit and alarming: "We agree with Geoffrey Hinton: machine intelligence is a threat to the human species. In response to this threat we want to inflict damage on machine intelligence systems."

This section is crucial because it moves the debate from abstract safety concerns to active, technical warfare. Clark argues that the internet is becoming a "predator-prey ecology" where humans, scrapers, and AI agents coexist in a fragile balance. He suggests that tools like Poison Fountain represent a desperate attempt to "tip the balance in this precarious ecology, seeking to inject things into this environment which make it more hospitable for some types of life and less hospitable for others." The implication is that the open internet, once a resource for human knowledge, is becoming a contested battlefield where data integrity is the primary weapon.

Building Institutions for a Hypercapable World

Perhaps the most significant portion of Clark's analysis is his synthesis of Eric Drexler's new framework for managing superintelligence. Drexler, a pioneer in nanotechnology, argues against the popular narrative of a single, monolithic AI entity. Instead, he proposes viewing AI as a "pool of resources, not a creature." Clark highlights Drexler's insight that "Compound, multi-component AI systems have become dominant," and that the future lies in building human institutions that can direct these diverse systems.

The core of Drexler's argument, as presented by Clark, is that we should not try to control a single god-like AI, but rather design institutions where AI handles planning and execution while humans retain decision-making authority. "Consider how institutions tackle ambitious undertakings," Drexler writes, noting that "no single person understands everything, and no unified agent controls the whole, yet human-built spacecraft reach the Moon." The parallel is striking: just as we built complex systems to reach the moon, we must build complex institutions to harness AI.

Clark emphasizes that this approach allows for "structured transparency" and "defensive stability," creating a world that enables "rapid, coordinated deployment of verifiably defensive systems at scales that make offense pointless." This is a powerful counter-narrative to the doom-and-gloom of AI apocalypse scenarios. It suggests that the solution to the risks of AI is not to stop its development, but to accelerate the development of the governance structures that can contain and direct it. As Clark observes, "The less we build that stuff, the more the character of these AI systems will condition our view of what is optimal to do."

Centaur Mathematicians and the Expansion of Knowledge

Finally, Clark celebrates a concrete example of this new collaboration: a recent mathematical proof developed by researchers from the University of British Columbia, Stanford, and Google DeepMind in close partnership with AI tools. The proof was not generated by a machine alone, nor by humans alone, but through an "iterative human/AI interaction" where the AI provided solutions to simpler problems that humans then generalized. Clark describes the result as "a genuine combination of synthesis, retrieval, generalization and innovation."

This section serves as a hopeful capstone to the piece. It moves beyond the theoretical to show that "humans and machines, expanding and exploring the pace of knowledge for all" is already happening. Clark captures the emotional resonance of this moment, calling it "impenetrable yet intoxicating" and noting the "grand poetry and joy" of "highly evolved apes working with a synthetic intelligence they've built out of math and logic." It is a reminder that despite the fears of displacement, there is a profound opportunity for human intellect to be amplified, not replaced.

Bottom Line

Jack Clark's most compelling argument is that the era of AI is not a future event to be feared, but a present reality that is already multiplying human labor and reshaping our psychology. The piece's greatest strength is its refusal to treat AI as a singular entity, instead framing it as an ecological force that requires new institutions to manage. The biggest vulnerability remains the speed of this transition; as Clark admits, the agents are already working, and the institutions to guide them are still being built, leaving a dangerous gap between capability and control.

Sources

Import AI 441: My agents are working. Are yours?

by Jack Clark · Import AI

Import A-Idea
An occasional essay series: My agents are working. Are yours?

As I walked into the hills at dawn I knew that there was a synthetic mind working on my behalf. Multiple minds, in fact. Because before I’d started my hike I had sat in a coffee shop and set a bunch of research agents to work. And now while I hiked I knew that machines were reading literally thousands of research papers on my behalf and diligently compiling data, cross-referencing it, double-checking their work, and assembling analytic reports.

What an unsteady truce we have with the night, I thought, as I looked at stars and the dark and the extremely faint glow that told me the sun would arrive soon. And many miles away, the machines continued to work for me, while the earth turned and the heavens moved.

Later, feet aching and belly full of a foil-wrapped cheese sandwich, I got back to cell reception and accessed the reports. A breakdown of scores and trendlines for the arrival of machine intelligence. Charts on solar panel prices over time. Analysis of the forces that pushed for and against seatbelts being installed in cars. I stared at all this and knew that if I had done this myself it would’ve taken me perhaps a week of sustained work for each report.

I am well calibrated about how much work this is, because besides working at Anthropic my weekly “hobby” is reading and summarizing and analyzing research papers - exactly the kind of work that these agents had done for me. But they’d read more papers than I could read, and done a better job of holding them all in their head concurrently, and they had generated insights that I might have struggled with. And they had done it so, so quickly, never tiring. I imagined them like special operations ghosts who hadn’t had a job in a while, bouncing up and down on their disembodied feet in the ethereal world, waiting to get the API call and go out on a mission.

These agents that work for me are multiplying me significantly. And this is the dumbest they’ll ever be.

This palpable sense of potential work - of having a literal army of hyper-intelligent loyal colleagues ...