The conversation about artificial intelligence has become a mirror of our worst fears — and that's distorting what people actually need to know.
That's the argument Nate B Jones makes in a piece that challenges both the AI doom narrative and its optimistic counterpart. He's not dismissing concerns about job displacement, but he's pointing to something that neither side discusses: the gap between what AI can do and when it actually changes the economy.
Why the Doom Narrative Went Viral
A fictional scenario written as a memo from 2028 created one of the most dramatic market selloffs in recent memory. The piece came from Catrini Research and was framed as a speculative macro memo describing what would happen if AI capabilities keep compounding and companies rationally cut white-collar headcount to protect margins.
The scenario is vivid: the S&P drops 38% from its 2026 highs, unemployment hits 10.2%, and financial contagion spreads through the credit system. The mechanism is simple and emotionally resonant — it activates what many felt during 2008: the feeling that the system is fragile, that nobody in charge sees what's coming, and that smart money is already heading for the exits.
But here's what's driving the selloffs now. Seven doom narratives are circulating, and they're not about the technology itself. They're about the economics.
The most haunting line in the piece is this one: "In 2008, the loans were bad on day one. In 2028, the loans were good on day one. The world just changed after the loans were written."
The reason these narratives spread so fast isn't that they're more accurate than the bullish arguments. It's one of the most robust findings in human psychology: negativity bias. We're evolutionarily wired to pay disproportionate attention to threats. A headline that says "AI could crash the economy" generates 10 to 50 times more engagement than one saying "AI-driven deflation could increase real purchasing power for the median household."
Both headlines describe potential futures. One of them gets millions of views. The other one doesn't.
And that's the asymmetry distorting the information environment everyone uses to make career and investment decisions.
The Bull Case Nobody's Talking About
Economist Alex Emis at the University of Chicago Booth School of Business took the same intuitive arguments about AI-driven demand collapse that Catrini formalized as fiction and actually built a model to test them. Model exactly the conditions Catrini describes: labor share of the economy declining rapidly, no consumption bounce after prices fall, wealthy capital owners who own the data centers not spending more, interest rates at the floor with no room to drop, and no policy response. Yes, you get what Catrini predicted.
But Emis argues that if you assume all those other conditions line up perfectly, the idea that the government doesn't respond is close to laughable. When things get bad enough, politicians find a way to act, because their careers depend on votes.
And there's another bull-case argument being overlooked. Michael Bloke wrote a direct response to the Catrini piece making this case: most consumer spending goes to services, such as mortgage services, tax preparation, insurance brokerage, and travel booking. These are tasks AI agents could make dramatically easier, because their cost is fundamentally a function of complexity.
If you're asking where AI agents can impact the economy, it's more plausible that they'll first compress costs for those services by 40 to 70 percent than that they'll replace cobalt mining or ATMs. And that compression returns $4,000 to $7,000 in annual gains per median household, tax-free. No legislation required.
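The arithmetic behind that range can be sketched in a few lines. The baseline spending figure below is an illustrative assumption chosen to reproduce the article's numbers; only the 40 to 70 percent compression range and the resulting $4,000 to $7,000 come from the source.

```python
# Back-of-envelope sketch of the cost-compression claim.
# ASSUMPTION: the median household spends roughly $10,000 a year on the
# services named (mortgage services, tax preparation, insurance
# brokerage, travel booking). That baseline is hypothetical, not a
# figure from the article.
baseline_annual_spend = 10_000  # assumed dollars per household per year

for compression_pct in (40, 70):
    # Integer math keeps the dollar figures exact.
    savings = baseline_annual_spend * compression_pct // 100
    print(f"{compression_pct}% cost compression -> ${savings:,}/year per household")
```

Under that assumed baseline, the two ends of the compression range land exactly on the article's $4,000 and $7,000 figures, which is presumably the kind of rough estimate behind the claim.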
Is that money just going to evaporate? No. People will spend it. They'll put it into home mortgages, furniture, renovations, moving costs. It doesn't disappear. It goes back into the economy.
Bloke also points out something worth calling out: business formation in the US continues at historically high levels. The Census Bureau reported 532,000 new business applications in January of 2026 alone, up over 7 percent from December. One-person businesses have more leverage than ever before: AI gives them skills, tools, radically lower overhead, and more reach.
This isn't theoretical. People are going from not coding at all to building something, forming a formal business around it, and making real money from it. They feel motivated.
The Missing Piece: Social Inertia
Both the doom narrative and the boom narrative assume AI capabilities translate rapidly into economic impact. The doom narrative assumes everyone gets fired. The boom narrative assumes rapid technical adaptation across society. Both assume the conversion rate from technical capability to a reorganized economy is fast.
It's not.
The reason it isn't true is the most underrepresented idea in this conversation: deployment is not the same as adoption, adoption is not the same as deep integration, and deep integration on its own is still not the same as economic impact.
Social inertia is a massive force in the economy, and it's dramatically underrepresented in every AI analysis — bull or bear.
Regulatory inertia: financial services firms that want to use AI for compliance work need approval from regulators who haven't finished writing the rules. Healthcare organizations need to navigate HIPAA and FDA clearance and institutional review boards. Government agencies run procurement cycles measured in years, not quarters.
Organizational inertia: the Catrini scenario assumes companies cut headcount rationally and rapidly as AI capabilities improve. But companies are not rational actors, and large organizations don't work that way. Headcount decisions filter through HR policies, employment law, union agreements, severance obligations, institutional knowledge preservation, management politics, and the simple fact that most executives have never managed an AI transition.
The gap between what AI can technically do and what a company will confidently reduce headcount over is enormous. Whether the speed of labor displacement outruns the speed of social adaptation: that's the real conversation nobody's having.
Bottom Line
The strongest part of this argument is identifying why the doom narrative dominates: it's not about accuracy, it's about psychology. The most interesting vulnerability is that both bull and bear cases assume AI translates rapidly into economic impact, but the evidence suggests they shouldn't. If you're making career or investment decisions based on the viral headlines right now, you're probably missing the part that actually matters — how slowly adoption actually works in the real economy.