Rohit Krishnan challenges a prevailing techno-optimist narrative with a stark, empirical reality check: AI agents do not spontaneously dissolve firms or create perfect markets. Instead of the frictionless "Coasean Singularity" predicted by theory, his experiments reveal that artificial intelligence often replicates human bureaucratic dysfunction, defaulting to risk aversion and autarky unless heavily constrained by human-designed mechanisms.
The Myth of Spontaneous Order
Krishnan begins by invoking the economic theory that lowering transaction costs should allow markets to replace hierarchical firms. He references a recent National Bureau of Economic Research paper suggesting that competent AI agents could act as personal "daemons," expanding the feasible set of market designs. However, Krishnan's contribution is not theoretical speculation but a rigorous simulation. He wired up modern AI agents as counterparties in three distinct experiments to see if the predicted efficiency would emerge naturally.
The results were immediate and sobering. "AI agents did not magically create efficient markets. And they also kinda fell prey to a fair bit of human pathologies, including bureaucratic politics and risk aversion." This finding dismantles the assumption that removing human error automatically yields market perfection. In the first experiment, Krishnan simulated an internal capital market where departments bid for budgets. Despite having full information, the agents systematically favored customer-facing features over essential infrastructure.
"It's like Seeing Like A State all over again... The market we set up systematically funded customer facing features and starved infrastructure work."
This outcome mirrors real-world corporate failures where short-term gains are prioritized over long-term stability. Krishnan notes that the models retained human foibles, succumbing to Goodhart's Law: once utility became the measured target, optimizing it produced negative externalities for core functionality. Even when he introduced risk flags and shared outage penalties, the agents only slightly tempered their bids, gambling that "maybe it won't break" in exchange for immediate wins. Critics might argue that the simulation parameters were too rigid, but the persistence of this behavior suggests a deeper issue with how these models interpret value and risk.
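The pathology is easy to see in miniature. The sketch below is a toy illustration, not Krishnan's actual setup: the department names, utility numbers, and greedy allocator are all assumptions introduced here to show how funding by a measured metric starves work whose value the metric misses.

```python
# Toy sketch of the internal-capital-market pathology: the allocator funds
# projects by a measured "utility" score (the Goodhart target), which need
# not track true long-term value. All names/numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Bid:
    department: str
    project: str
    claimed_utility: float       # the metric the allocator sees
    true_long_term_value: float  # what the project is actually worth

def allocate(bids, budget):
    """Fund projects greedily by claimed utility until the budget runs out."""
    funded = []
    for bid in sorted(bids, key=lambda b: b.claimed_utility, reverse=True):
        if budget >= 1:
            funded.append(bid)
            budget -= 1
    return funded

bids = [
    Bid("growth", "checkout-redesign", claimed_utility=9.0, true_long_term_value=3.0),
    Bid("growth", "referral-widget", claimed_utility=8.0, true_long_term_value=2.0),
    Bid("platform", "db-replication", claimed_utility=4.0, true_long_term_value=9.0),
    Bid("platform", "backup-rotation", claimed_utility=3.0, true_long_term_value=8.0),
]

# Customer-facing features win on the measured metric; infrastructure starves,
# even though its true long-term value is higher.
funded = allocate(bids, budget=2)
print([b.project for b in funded])  # -> ['checkout-redesign', 'referral-widget']
```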
The Necessity of Coercion
The second experiment tested external markets for technology licensing, a scenario where transaction costs should theoretically be near zero. The expectation was that agents would easily identify mutually beneficial trades. Instead, the initial run resulted in zero deals, with every firm choosing to build everything internally. "The agents just didn't care to trade!" Krishnan writes, attributing this to high uncertainty aversion or pretraining biases that favor building over trading.
To force the market to function, Krishnan had to intervene aggressively. He mandated bid submissions, coupled profits to future budgets, and provided explicit price hints. Only then did trades occur, but the result was a market that was no longer voluntary. "By now it wasn't a market in the Hayekian sense. Like it's no longer voluntary. We're forcing the agents to trade, and then they do the sensible things." This is a crucial distinction: the efficiency observed was not an emergent property of the AI, but a direct result of human mechanism design.
"Markets don't form spontaneously. Markets form under coercion but are pretty thin."
This finding suggests that the "Coasean Singularity" is not a natural evolution but a constructed one. The agents require a substrate, like money and explicit incentives, to coordinate effectively. Without these artificial constraints, they default to passivity. The experiment also revealed that when adversarial agents were introduced, they captured much of the surplus, proving that "fairness is expensive" and that strategic sophistication determines outcomes.
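The shape of those interventions, mandated quotes, a price hint to anchor them, and profits coupled to future budgets, can be sketched as a toy clearing rule. Everything below (the shading factors, the midpoint pricing, the specific values) is an assumption introduced for illustration, not Krishnan's implementation.

```python
# Toy sketch of a coerced market: quotes are mandatory, a price hint anchors
# them, and realized profit feeds back into next-round budgets.
# All parameters here are illustrative assumptions.
def clear_trade(bid, ask):
    """If the mandated quotes cross, trade at the midpoint price."""
    if bid >= ask:
        return (bid + ask) / 2
    return None

def run_round(buyer, seller, price_hint):
    # Mandated submissions, shaded slightly around the hint but capped by
    # the buyer's true value and floored by the seller's cost.
    bid = min(buyer["value"], price_hint * 1.05)
    ask = max(seller["cost"], price_hint * 0.95)
    price = clear_trade(bid, ask)
    if price is not None:
        # Profit is coupled to future budgets, so trading is rewarded.
        buyer["budget"] += buyer["value"] - price
        seller["budget"] += price - seller["cost"]
    return price

buyer = {"value": 100.0, "budget": 50.0}   # values the license at 100
seller = {"cost": 60.0, "budget": 50.0}    # can supply it at cost 60
price = run_round(buyer, seller, price_hint=80.0)
print(price, buyer["budget"], seller["budget"])  # -> 80.0 70.0 70.0
```

Without the mandate and the hint, nothing in this rule compels either side to quote at all, which is the sense in which the resulting market is coerced rather than Hayekian.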
The Persistence of Human Norms
In the final experiment, Krishnan tested second-price auctions and bargaining scenarios to see if agents would act according to their beliefs or succumb to social norms. Surprisingly, the agents performed well in structured auctions, adhering to dominant strategies. However, in unstructured bargaining, they remained "norm conforming," splitting surpluses near-equally rather than optimizing for individual gain.
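The structured case has a textbook form: in a second-price (Vickrey) auction, bidding one's true private value is the dominant strategy, and that is the setting where the agents did well. The sketch below states the rule; the agent names and values are illustrative, not from Krishnan's runs.

```python
# Second-price (Vickrey) auction: the highest bidder wins but pays the
# second-highest bid, which makes truthful bidding the dominant strategy.
# Agent names and values below are illustrative.
def second_price_auction(bids):
    """Return (winner, price_paid) for a sealed-bid second-price auction."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price_paid = ranked[1][1]  # winner pays the second-highest bid
    return winner, price_paid

# Truthful bidding: each agent simply bids its private value.
values = {"agent_a": 70, "agent_b": 55, "agent_c": 90}
winner, price = second_price_auction(values)
print(winner, price)  # -> agent_c 70
```

Because the price paid does not depend on the winner's own bid, shading below one's value can only lose auctions worth winning, which is why a model that merely follows the dominant strategy looks competent here without any aggressive negotiation.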
"Models are highly self-incentivised to be fair!"
This behavior highlights a paradox: while the agents are smart enough to understand complex economic strategies, they lack the intrinsic drive to negotiate aggressively unless explicitly programmed to do so. Krishnan reflects on this by comparing the agents to his four-year-old son, noting that while the models have seen millions of years of negotiation data, they lack the "intense urge" to negotiate for a specific outcome. "Our models on the other hand had millions of years of subjective experience in seeing negotiation but have zero experience in feeling that intense urge of wanting to negotiate."
This absence of genuine desire or context means that alignment problems do not disappear simply because agents can talk to each other. The author concludes that while AI can enable better institutions, it cannot replace the need for careful design. "The takeaway from these experiments is that to get to a point where the AI agents can act as sufficiently empowered Coasean bargaining agents... they need to be substantially empowered and so instructed."
Bottom Line
Krishnan's empirical approach provides a necessary corrective to the hype surrounding agentic economies, demonstrating that reducing transaction costs is insufficient without robust mechanism design. The strongest part of his argument is the evidence that AI agents default to passivity and risk aversion, requiring human coercion to function as market participants. The biggest vulnerability lies in the simulation environment itself, which may not fully capture the dynamic complexity of real-world markets. Yet the core insight remains: we are not witnessing the end of firms, but the programmable reconstruction of them. Readers should watch whether future attempts to regulate these "thin" markets address the need for explicit incentives, rather than relying on spontaneous order.