Seeing like an agent

Rohit Krishnan challenges a prevailing techno-optimist narrative with a stark, empirical reality check: AI agents do not spontaneously dissolve firms or create perfect markets. Instead of the frictionless "Coasean Singularity" predicted by theory, his experiments reveal that artificial intelligence often replicates human bureaucratic dysfunction, defaulting to risk aversion and autarky unless heavily constrained by human-designed mechanisms.

The Myth of Spontaneous Order

Krishnan begins by invoking the economic theory that lowering transaction costs should allow markets to replace hierarchical firms. He references a recent National Bureau of Economic Research paper suggesting that competent AI agents could act as personal "daemons," expanding the feasible set of market designs. However, Krishnan's contribution is not theoretical speculation but a rigorous simulation. He wired up modern AI agents as counterparties in three distinct experiments to see if the predicted efficiency would emerge naturally.

The results were immediate and sobering. "AI agents did not magically create efficient markets. And they also kinda fell prey to a fair bit of human pathologies, including bureaucratic politics and risk aversion." This finding dismantles the assumption that removing human error automatically yields market perfection. In the first experiment, Krishnan simulated an internal capital market where departments bid for budgets. Despite having full information, the agents systematically favored customer-facing features over essential infrastructure.

"It's like Seeing Like A State all over again... The market we set up systematically funded customer facing features and starved infrastructure work."

This outcome mirrors real-world corporate failures in which short-term gains are prioritized over long-term stability. Krishnan notes that the models retained human foibles, succumbing to Goodhart's Law: once measured utility became the target, optimizing for it produced negative externalities for core functionality. Even when he introduced risk flags and shared outage penalties, the agents only slightly tempered their bids, gambling that "maybe it won't break" in exchange for immediate wins. Critics might argue that the simulation parameters were too rigid, but the persistence of this behavior suggests a deeper issue with how these models interpret value and risk.
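The externality at work here can be illustrated with a toy calculation (the numbers and helper functions are invented for illustration, not taken from Krishnan's repo): when the outage penalty is shared across departments, a flashy feature looks best to the individual bidder even though the firm as a whole would be better off funding infrastructure.

```python
# Toy illustration of the shared-outage externality.
# All numbers are made up for illustration.

OUTAGE_COST = 20.0   # firm-wide cost of an outage
SHARE = 0.25         # fraction of that cost the bidding department bears

def private_value(direct: float, delta_outage_prob: float) -> float:
    """Value to the bidding department: it internalises only its
    share of the change in expected outage cost."""
    return direct - delta_outage_prob * OUTAGE_COST * SHARE

def social_value(direct: float, delta_outage_prob: float) -> float:
    """Value to the whole firm, with the externality fully internalised."""
    return direct - delta_outage_prob * OUTAGE_COST

# A customer-facing feature: visible value 10, raises outage risk by 0.3.
# Infrastructure work: visible value 2, lowers outage risk by 0.3.
feature_priv, infra_priv = private_value(10.0, 0.3), private_value(2.0, -0.3)
feature_soc, infra_soc = social_value(10.0, 0.3), social_value(2.0, -0.3)

# The individual bidder prefers the feature (8.5 > 3.5) even though the
# firm as a whole is better off with infrastructure (8.0 > 4.0).
assert feature_priv > infra_priv and infra_soc > feature_soc
```

With the penalty shared four ways, each department pockets the full direct value of its project but bears only a quarter of the risk it creates, which is exactly the wedge between private and social value that starves infrastructure.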

The Necessity of Coercion

The second experiment tested external markets for technology licensing, a scenario where transaction costs should theoretically be near zero. The expectation was that agents would easily identify mutually beneficial trades. Instead, the initial run resulted in zero deals, with every firm choosing to build everything internally. "The agents just didn't care to trade!" Krishnan writes, attributing this to high uncertainty aversion or pretraining biases that favor building over trading.

To force the market to function, Krishnan had to intervene aggressively. He mandated bid submissions, coupled profits to future budgets, and provided explicit price hints. Only then did trades occur, but the result was a market that was no longer voluntary. "By now it wasn't a market in the Hayekian sense. Like it's no longer voluntary. We're forcing the agents to trade, and then they do the sensible things." This is a crucial distinction: the efficiency observed was not an emergent property of the AI, but a direct result of human mechanism design.
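One way to picture the intervention (a minimal sketch with made-up numbers and a hypothetical `build_or_buy` helper, not Krishnan's actual mechanism) is to model the agents as attaching a large uncertainty penalty to trading; the mandated bids, profit-coupled budgets, and price hints effectively shrink that penalty until mutually beneficial trades clear.

```python
# Minimal sketch of the build-vs-buy decision with a hypothetical
# uncertainty penalty attached to trading. Not Krishnan's code.

def build_or_buy(build_cost: float, license_price: float,
                 uncertainty_penalty: float) -> str:
    """Buy only if licensing beats building even after the penalty the
    agent implicitly attaches to dealing with a counterparty."""
    return "buy" if license_price + uncertainty_penalty < build_cost else "build"

# Left alone, the agents behave as if the penalty were huge: everyone
# builds, even though licensing at 60 against a build cost of 100 is
# clearly the better deal.
assert build_or_buy(build_cost=100, license_price=60,
                    uncertainty_penalty=50) == "build"

# Mandated bids and explicit price hints shrink the effective penalty,
# and the mutually beneficial trade finally happens.
assert build_or_buy(build_cost=100, license_price=60,
                    uncertainty_penalty=5) == "buy"
```

The point of the sketch is that nothing about the trade itself changed between the two cases; only the humanly imposed mechanism moved the agents across the threshold.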

"Markets don't form spontaneously. Markets form under coercion but are pretty thin."

This finding suggests that the "Coasean Singularity" is not a natural evolution but a constructed one. The agents require a substrate, like money and explicit incentives, to coordinate effectively. Without these artificial constraints, they default to passivity. The experiment also revealed that when adversarial agents were introduced, they captured much of the surplus, proving that "fairness is expensive" and that strategic sophistication determines outcomes.

The Persistence of Human Norms

In the final experiment, Krishnan tested second-price auctions and bargaining scenarios to see if agents would act according to their beliefs or succumb to social norms. Surprisingly, the agents performed well in structured auctions, adhering to dominant strategies. However, in unstructured bargaining, they remained "norm conforming," splitting surpluses near-equally rather than optimizing for individual gain.
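The dominant-strategy property the agents adhered to is the standard Vickrey result: in a second-price auction, bidding your true value weakly dominates shading or inflating it. A quick self-contained check of that claim (independent of Krishnan's setup):

```python
# Verify numerically that truthful bidding weakly dominates in a
# second-price (Vickrey) auction.
import random

def second_price_auction(bids: list[float]) -> tuple[int, float]:
    """Highest bidder wins and pays the second-highest bid."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    return order[0], bids[order[1]]

def payoff(value: float, bid: float, rival_bids: list[float]) -> float:
    """Bidder 0's payoff given its true value, its bid, and rival bids."""
    winner, price = second_price_auction([bid] + rival_bids)
    return value - price if winner == 0 else 0.0

random.seed(0)
for _ in range(1000):
    value = random.uniform(0, 100)
    rivals = [random.uniform(0, 100) for _ in range(3)]
    truthful = payoff(value, value, rivals)
    # Neither shading nor inflating the bid ever does strictly better.
    for alt_bid in (0.8 * value, 1.2 * value):
        assert payoff(value, alt_bid, rivals) <= truthful + 1e-9
```

That the structured auction has a mechanically checkable dominant strategy, while unstructured bargaining does not, is plausibly why the agents looked sharp in the former and norm-bound in the latter.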

"Models are highly self-incentivised to be fair!"

This behavior highlights a paradox: while the agents are smart enough to understand complex economic strategies, they lack the intrinsic drive to negotiate aggressively unless explicitly programmed to do so. Krishnan reflects on this by comparing the agents to his four-year-old son, noting that while the models have seen millions of years of negotiation data, they lack the "intense urge" to negotiate for a specific outcome. "Our models on the other hand had millions of years of subjective experience in seeing negotiation but have zero experience in feeling that intense urge of wanting to negotiate."

This absence of genuine desire or context means that alignment problems do not disappear simply because agents can talk to each other. The author concludes that while AI can enable better institutions, it cannot replace the need for careful design. "The takeaway from these experiments is that to get to a point where the AI agents can act as sufficiently empowered Coasean bargaining agents... they need to be substantially empowered and so instructed."

Bottom Line

Krishnan's empirical approach provides a necessary corrective to the hype surrounding agentic economies, demonstrating that reducing transaction costs is insufficient without robust mechanism design. The strongest part of his argument is the evidence that AI agents default to passivity and risk aversion, requiring human coercion to function as market participants. The biggest vulnerability lies in the simulation environment itself, which may not fully capture the dynamic complexity of real-world markets, yet the core insight remains: we are not witnessing the end of firms, but rather the programmable reconstruction of them. Readers should watch how future policy on these "thin" markets handles the need for explicit incentives rather than relying on spontaneous order.

Sources

Seeing like an agent

by Rohit Krishnan · Strange Loop Canon

This has become part of a series of essays evaluating the new “homo agenticus sapiens” that is AI agents. This is Part I, seeing like an agent. Part II is why the agentic economy needs money. And Part III is on what happens when we all have AI agents.

One of the books that I loved as a kid was Philip Pullman’s His Dark Materials. The books themselves were fine, but the part I loved most was the daemons. Each human had their own daemon, uniquely suited to them, that would grow with them and eventually settle into a form that reflected their personality.

I kept thinking of this when reading the recent NBER paper by John Horton et al about The Coasean Singularity. From their abstract:

By lowering the costs of preference elicitation, contract enforcement, and identity verification, agents expand the feasible set of market designs but also raise novel regulatory challenges. While the net welfare effects remain an empirical question, the rapid onset of AI-mediated transactions presents a unique opportunity for economic research to inform real-world policy and market design.

Basically they argue that if you actually had competent, cheap AI agents doing search, negotiation, and contracting, like your own daemon, then a ton of the Coasean reasons firms exist disappear, and a whole market design frontier reopens.

This isn’t a unique argument, though it’s well done here. I’ve made it before, as have others, including Seb Krier recently here and Dean Ball. The authors even talk about tollbooths, as from Cloudflare, and agent-only APIs and pages.

But while reading it I kept thinking that by now this is no longer a theoretical question: we now have decent AI agents, and we should be able to test it. It’s something I’ve been meaning to do for a while, so I did. The question was, if we wire up modern agents as counterparties, do we actually see Coasean bargains emerge? Repo here.

The punchline is that AI agents did not magically create efficient markets. And they also kinda fell prey to a fair bit of human pathologies, including bureaucratic politics and risk aversion.

Experiment 1: An internal capital market

The first way to test these was to just throw them into a simulated company and see what happened. So I set up four departments - Marketing, Sales, Engineering and Support - and said they could all bid for budget to ...