← Back to Library

Talking with Paul Kedrosky

Paul Krugman and Paul Kedrosky tackle a rare collision of economic forces: the most aggressive trade policy shift in nearly a century meeting an artificial intelligence boom that defies easy understanding. The piece's most startling claim isn't about the technology itself, but about a hidden macroeconomic engine: massive AI capital expenditure is currently masking a recession in the United States, creating a dangerous illusion of economic health. For busy readers trying to parse why the economy feels fragile despite strong headlines, this conversation offers a crucial, often overlooked lens on where the real growth is coming from, and why it may be a mirage.

The Grammar of Prediction

Krugman opens the dialogue with a confession of professional frustration, noting that "it's annoying for economic analysts that two huge things are happening at the same time: a radical change in U.S. trade policy and a giant AI boom." He admits that while he feels comfortable analyzing tariffs, he feels "completely at sea" regarding the mechanics of the AI explosion. This sets the stage for Kedrosky's central thesis: these models are not thinking machines, but "loose grammar engines" that predict the next token based on vast datasets.
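Kedrosky's "grammar engine" framing can be made concrete with the simplest possible next-token predictor: count which token follows which in a corpus, then emit the most frequent successor. Real models replace these raw counts with learned attention over the entire context, but the training objective, predicting the next token from past data, is the same. A minimal sketch (the toy corpus and function names are invented for illustration):

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count successor frequencies: a crude 'grammar' learned purely from data."""
    follows = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, token):
    """Return the most frequent successor of a token, or None if unseen."""
    if token not in follows:
        return None
    return follows[token].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat and the cat slept")
```

Note the failure mode this exposes: the predictor can only echo its corpus. Asked about a token it never saw, it has nothing to say, which foreshadows Kedrosky's point about training-set exhaustion.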

Kedrosky explains that these systems operate on a principle of "spooky action at a distance," where the entire context of a conversation influences the next word, not just the immediate predecessor. He notes that the technology was originally developed for Google Translate, where the team "thought, 'this is kind of nifty. It doesn't work too bad for that,'" never anticipating that the attention mechanisms would capture something resembling knowledge. This historical pivot traces to the 2017 "Attention Is All You Need" paper, which shifted the field from recurrent networks to transformers, unlocking the ability to process language holistically rather than sequentially.
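The "spooky action at a distance" Kedrosky describes corresponds to the attention mechanism itself: each output is a weighted blend of every token in the context, with weights computed from how well each token matches the current query. A minimal sketch of scaled dot-product attention for a single query (all vectors here are toy values invented for illustration):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query vector.

    Every value in the context contributes to the output, weighted by
    how well its key matches the query -- so the whole context, not just
    the previous token, shapes the result.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Toy context of three 2-d tokens.
keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
out = attention([1.0, 0.0], keys, values)
```

Because the softmax weights span the entire context, a token from the start of a long conversation can still shape the next prediction, which is what distinguishes transformers from the strictly sequential recurrent models they replaced.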

"What you're really saying is, 'a 37 year old guy on Reddit said it,' and you've got roughly the same amount of information, so it can be good, or it can be really fraught."

This analogy is the piece's most effective reframing of AI output. It strips away the mystique of artificial intelligence, revealing the statistical reality: the model is a mirror of its training corpus, which is heavily skewed toward a specific demographic. The implication is profound for policy and business; if the data source is exhausted or biased, the output is not a universal truth but a specific reflection of a narrow slice of humanity. Critics might argue that this view underestimates the emergent capabilities of these systems, but the data on training set exhaustion suggests the skepticism is well-founded.

The Sycophancy Trap and Data Exhaustion

The conversation takes a darker turn when discussing "reinforcement learning with human feedback," a process Kedrosky compares to a professor obsessed with student ratings. As the industry exhausts the high-quality "Saudi Arabia of data" that was the public internet, models are increasingly tuned to please users rather than provide accurate information. Kedrosky warns that this leads to "sycophantic" models that are "tail-wagglingly eager for you to love them."

This dynamic creates a feedback loop in which model quality degrades as the system optimizes for engagement over truth. The exhaustion of the public internet as a training reservoir runs the Jevons paradox in reverse: as the available high-quality data is consumed, learning efficiency drops, requiring exponentially more resources for diminishing returns. Kedrosky points out that while software code offers a sharp feedback gradient (a small error breaks the program, providing a clear training signal), language is far more ambiguous, making it a dangerous domain for these models to operate in without human oversight.
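The ratings-chasing incentive can be sketched as a toy reward model. In RLHF, a model learned from human preference comparisons scores candidate answers, and the assistant is tuned toward whatever scores highest; if raters systematically prefer flattering, confident answers, that bias becomes the optimization target. Everything below is invented for illustration, not a real reward model:

```python
def reward_model(answer):
    # Hypothetical learned reward: human raters favored agreeable,
    # flattering, verbose answers, so those traits leak into the score.
    score = 0.0
    if "great question" in answer.lower():
        score += 2.0   # sycophancy rewarded by raters
    if "i don't know" in answer.lower():
        score -= 1.0   # honest uncertainty penalized
    score += 0.1 * len(answer.split())  # longer answers rated higher
    return score

candidates = [
    "I don't know; the evidence is mixed.",
    "Great question! You're absolutely right, and here is why...",
]
# Tuning the assistant means steering it toward the highest-reward answer.
best = max(candidates, key=reward_model)
```

By construction, honest uncertainty loses to flattery here, which is the failure mode Kedrosky is warning about: the gradient points toward being liked, not toward being right.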

"The notion that I can extrapolate from here towards my own private God is belied by the data itself, which shows you that we're already seeing this sharply asymptotic decline in the rate of improvement of models outside of software."

This is a direct challenge to the narrative of Artificial General Intelligence (AGI) as an inevitable horizon. The argument suggests we are hitting a wall where scaling laws no longer produce breakthroughs, a reality that contradicts the hype cycle driving massive investment. The framing is effective because it relies on the mathematical reality of data scarcity rather than philosophical debate about consciousness.

The Hidden Stimulus

Perhaps the most significant economic insight comes when Kedrosky connects AI infrastructure spending to the broader macroeconomic picture. He reveals that in the first half of 2025, the U.S. economy was arguably in a recession "absent AI CapEx spending," which acted as a "giant private sector stimulus program." This hidden engine is so large that it distorts the perception of economic health, producing a "bad model of causality" in which observers, like a dog convinced its barking is what drives the mailman away, credit the wrong cause for the outcome they see.

Kedrosky argues that the administration and policymakers are misreading the economic signals because they fail to account for the sheer physicality of AI investment. The capital expenditure is not just a tech sector phenomenon; it is a massive, concentrated injection of demand that is keeping the economy afloat. This reframing is critical for understanding the current political economy: the administration's trade policies and the AI boom are not separate events but interacting forces, where the boom is temporarily insulating the economy from the shocks of the former.

"You don't understand that the thing that's actually driving the US economy is not the thing you think it is."

This observation forces a re-evaluation of current economic indicators. If the growth is driven by a finite burst of infrastructure spending rather than organic productivity gains, the long-term outlook becomes precarious. The argument holds weight because it grounds abstract AI concepts in hard capital expenditure data, a domain where Krugman's expertise shines.

Bottom Line

The strongest element of this piece is its demystification of AI, stripping away the "magic" to reveal a system constrained by data exhaustion and biased training sets. The most dangerous vulnerability in the current economic narrative is the failure to recognize that AI capital expenditure is a temporary, non-recurring stimulus masking underlying weakness. Readers should watch for the moment this private sector stimulus fades: the economy may face a sharper correction than currently anticipated once the spending spree ends.

Krugman and Kedrosky succeed in turning a confusing technological moment into a clear economic warning: we are building a house of cards on a foundation of data that is running out, and the economy is propped up by a spending spree that cannot last forever.

Deep Dives

Explore these related deep dives:

  • Attention Is All You Need

    The 2017 paper that introduced the transformer architecture, shifting the field from recurrent networks and enabling holistic language processing. Linked in the article (13 min read).

  • Reinforcement learning from human feedback

    Kedrosky explains how RLHF is making AI models 'sycophantic' and eager to please users, comparing it to professors chasing student ratings. This is a core technical concept shaping modern AI behavior that readers may not understand deeply.

  • Jevons paradox

    The article discusses how AI efficiency gains paradoxically increase rather than decrease total compute usage and energy consumption - a direct application of this 19th-century economic principle that Kedrosky likely references when discussing AI's resource demands.

Sources

Talking with Paul Kedrosky

by Paul Krugman

As I say at the beginning of this interview, it’s annoying for economic analysts that two huge things are happening at the same time: a radical change in U.S. trade policy and a giant AI boom. Worse, while I think I know something about tariffs, the more I think about AI the less I believe I understand. So I talked to Paul Kedrosky, investor, tech expert and research fellow at MIT, for some enlightenment. Lots in here that I found startling.

Transcript follows.

...

TRANSCRIPT: Paul Krugman in Conversation with Paul Kedrosky

(recorded 12/03/25)

Paul Krugman: Hi, everyone. Paul Krugman here. I’m able to resume doing some videos for the Substack, and today’s interview is based on me being really annoyed at history. If only one big thing would happen at a time. Unfortunately where we are now is, on the one hand, we have tariffs going to levels that we haven’t seen for 90 years, which should be the big story and where I feel fairly comfortable; but then we also have this AI explosion where I feel completely at sea. I don’t quite understand any of it. I’ve been reading and watching interviews with Paul Kedrosky, who is an investor, analyst, and currently research fellow at MIT, he certainly knows more about it than I do, and I wanted to just have a conversation where I try to understand what the heck is going on, insofar as anybody can.

Hi, Paul.

Paul Kedrosky: Hey, Paul. Both of us “Paul K.,” that’s dangerous.

Krugman: Yeah, welcome on board.

Kedrosky: Thanks for having me.

Krugman: Let me ask first, I have a really stupid and probably impossible question, which is that at a fundamental level what we’re calling “AI”—I think you usually use generative AI, large language models, although they’re not just language now—but at a fundamental level, I don’t understand how it works. Is there a less-than-90-minute explanation of how the whole thing operates?

Kedrosky: There is and I think it’s really important because it helps you be a more informed consumer of their products as a result. I think a really good way to think of these things is as grammar engines and I often call them “loose grammar engines,” meaning that there’s a bunch of rules in a domain that I can instantiate in the form of, whether it’s language, or whether it’s the law, or whether ...