
A.I. progress is giving me writer's block

Matthew Yglesias confronts a paradox that haunts modern policy analysis: how do you write about the present when the future is accelerating so fast that today's debates may be obsolete before publication? His most striking claim is that the current frenzy over whether large language models are "useful" or "hype" misses the forest for the trees. The real story isn't the tools we have now, but the exponential trajectory of recursive self-improvement that could render all current labor market models irrelevant.

The Illusion of the Present

Yglesias begins by dismantling the comfort of traditional policy forecasting. He notes that his initial idea—arguing that artificial intelligence could reverse the historical drain of talent from teaching by making white-collar work less lucrative—collapses under scrutiny. "Couldn't this same process significantly reduce the value of a traditional education?" he asks, channeling the skepticism of his editors and even the AI models themselves. The core of his argument is that while automation might eventually redirect human capital toward education, the timeline is too uncertain to build a policy column around.


He observes that most public discourse is trapped in a narrow window. "Most A.I. debates are about the present," Yglesias writes, pointing out that arguments between skeptics and enthusiasts are really just disagreements about the current utility of specific models like Claude or ChatGPT. This framing is effective because it exposes a collective blind spot: we are arguing about the specs of a car while the engine is being redesigned to fly. The author suggests that if you believe in the possibility of superintelligence arriving within a decade, "you wouldn't be selling it to China or reassuring people about the enduring value of the human touch."


Critics might note that this focus on the "exponential" future risks paralyzing action on immediate, solvable problems. By fixating on the singularity, we might ignore the very real, very present disruptions happening in software development and customer service right now.

The Exponential Trap

The piece shifts to a more technical, yet terrifying, observation about the nature of progress in AI labs. Yglesias highlights that the highest-paid staff at these companies aren't marketing current products; they are using current tools to build the next generation. "If that continues, the gap in capability between GPT-7 in 2031 and GPT-5 today will not be as large as the gap between GPT-5... and GPT-3," he writes. The comparison is chilling because it suggests the next leap won't be a mere improvement but a qualitative transformation.

He draws a parallel to historical shifts in data availability, noting that fears about running out of training data were solved by synthetic data, and efficiency gains like those from DeepSeek didn't reduce demand but increased it. "It just showed us how to use it more efficiently, meaning it started being used more than ever," Yglesias explains. This relentless momentum makes the "dual-track" thinking he proposes necessary but agonizing. One must simultaneously prepare for a world where AI is just another productivity tool and a world where it fundamentally alters the human condition.

The author's willingness to entertain the "intelligence explosion" scenario is rare in mainstream policy writing. He admits that if we are facing a future of "swarms of super-genius A.I.s," then questions about police staffing or school accountability become moot. "Basically everything will be irrelevant if in a few years we have swarms of super-genius A.I.s," he states. This is a bold admission of uncertainty in a field that often pretends to have answers.

The Human Element in a Machine Future

Despite the overwhelming focus on the future, Yglesias returns to the human cost of the transition. He revisits his initial idea about teaching, noting that just as second-wave feminism drew talent away from classrooms, AI could theoretically pull it back. He references the historical context of women like Jeannette Rankin and Elizabeth Blackwell, who broke barriers long before the 1970s, to illustrate that social attitudes, not just policy, drive workforce shifts. "It seems likely to me that as artificial intelligence generates a sharp decline in the demand for major categories of white-collar work... we could see a reversal of that flow," he argues.

However, he also touches on the potential for social instability. "If white-collar work diminishes in its economic rewards and social prestige, that could create explosive politics driven by downwardly mobile office workers even if it's beneficial on net." This is a crucial counterpoint to the techno-optimist narrative. The transition won't just be a smooth reallocation of labor; it could be a source of deep political friction. The author suggests that while AI might help solve police recruiting shortages or improve education through personalized tools, the structural incentives of our institutions may not be ready to adapt.


The piece also touches on the political alignment of the executive branch. Yglesias notes that the current administration has aligned itself with AI boosters, favoring the export of chips and the growth of tech companies, yet this stance is ironic given that true believers in superintelligence wouldn't be so focused on current commercial advantages. This observation cuts through the political noise to reveal a deeper disconnect between policy and the technological reality.

Bottom Line

Yglesias's most powerful contribution is his refusal to choose between the "slow invention" and "singularity" narratives, instead forcing the reader to hold both possibilities in mind. The argument's greatest strength is its intellectual honesty about the limits of prediction, but its vulnerability lies in the potential for this uncertainty to become an excuse for inaction on the immediate labor market disruptions. Readers should watch for how policymakers attempt to regulate a technology that may evolve faster than any legislative cycle can manage.


Sources

A.I. progress is giving me writer's block

by Matthew Yglesias · Slow Boring

Here’s an idea for an article that I had recently:

One of the most underrated aspects of education policy is the impact that second-wave feminism had on the K-12 workforce. It used to be the case that an enormous fraction of the smartest and most ambitious women in America were working as public school teachers, and were doing so at depressed wages because of limited opportunities for women to have white-collar careers. Some of this was formal, but a lot of it wasn’t. Jeannette Rankin entered Congress in 1917 and Elizabeth Blackwell graduated from medical school in 1849, so it’s not like women “couldn’t” have careers in politics or medicine before 1970. But they rarely did. And there wasn’t one specific formal policy change that unleashed the entire transformation of women’s professional opportunities. There were formal changes in public policy, of course, but the most important changes were the shifts in attitudes and social values over several generations.

And a second-order consequence of this was the steady erosion of human capital available in the teaching workforce.

And it seems likely to me that as artificial intelligence generates a sharp decline in the demand for major categories of white-collar work — a much more restrained claim than the idea that it will replace all jobs — we could see a reversal of that flow.

Large language models have many potential applications in the educational context, but it’s hard to see them operating as a replacement for a human teacher in the way that they could replace people who work in jobs that mostly involve typing on computers. That, in turn, would be an example of how even though the labor market disruptions associated with new technology can be painful, they also have upside. Automation of white-collar work isn’t just a productivity boost in those specific sectors; it could also lead a lot of the human capital that is currently deployed in fields like law and accounting to be redirected toward teaching young people, which would have its own benefits.

When I pitched this idea to Kate, though, she raised a good point: Couldn’t this same process significantly reduce the value of a traditional education? Similarly, when I asked Claude about this, it told me that the timeframes don’t line up correctly. It’s true that a downward shift in the relative earnings of white-collar professionals could improve teacher recruiting and retention. But ...