Noah Smith cuts through the usual partisan noise to argue that the Democratic Party's economic playbook is dangerously obsolete in the face of artificial intelligence. He posits a startling shift: instead of fighting for specific jobs or taxing the ultra-wealthy, the next progressive era must focus on "robustness"—creating an economy that functions regardless of whether AI destroys or creates work. This is not a standard policy proposal; it is a survival guide for a future where the rules of economics are being rewritten by algorithms.
The Failure of the Old Playbook
Smith begins by dismantling the progressive economic consensus that coalesced in the late 2010s. He argues that this program, which relied heavily on deficit spending to boost demand, was perfectly timed for the post-2009 era but catastrophically misaligned with the post-pandemic reality. "The progressive economic ideas of the 2010s were, in large part, a reaction to the Great Recession," Smith writes, noting that the diagnosis of "underemployment" was correct in 2009 but wrong by 2021. By the time the Biden administration took office, the economy was overheating, not underperforming.
The author suggests that ignoring this shift had tangible political costs. He points out that "by pushing aggregate demand even higher with massive deficit spending, the Biden administration probably exacerbated that inflation." This inflation, he argues, contributed to a loss of public trust and electoral setbacks. The core failure was a refusal to adapt to a new macroeconomic environment where supply constraints, not lack of demand, were the primary bottleneck.
Furthermore, Smith critiques the strategy of subsidizing care industries to create jobs. He notes that "if you subsidize health care, education, and child care, the prices of these things will go up." While the intent was to make services affordable for consumers, the result was often a "giant deficit-funded make-work program" that failed to deliver cheap care. The promise to fund this through "billionaire taxes" never materialized, leaving the coalition with expensive, inefficient programs that lacked broad political support.
"AI is like a storm that's buffeting the whole economy; Democrats need to be the rock in that storm."
Abundance as a Shield Against Uncertainty
With the old model broken, Smith proposes a new framework centered on "Abundance." This is not merely about lowering prices; it is a strategic hedge against the radical uncertainty of the AI revolution. He argues that "AI will disrupt patterns of specialization and comparative advantage across the economy," making it impossible to predict which jobs will survive. Therefore, promising specific careers in healthcare or education is a flawed strategy.
Instead, the focus must shift to ensuring that the basics of life are cheap and plentiful. "If people know that they'll have enough housing, food, energy, and the other basics of life, the threat of losing their jobs will be far less dire," Smith explains. This approach aligns with the YIMBY (Yes In My Backyard) movement, expanding its logic from housing to all essential goods. The goal is to decouple human survival from the volatility of the labor market.
However, Smith identifies a critical flaw in a simple "abundance" strategy: AI itself is a voracious consumer of resources. He warns that "AI and human consumers are going to increasingly be in direct competition for resources — electricity, land, and water." If data centers consume the surplus energy and land meant for human use, the abundance agenda fails. To counter this, he proposes a "human reservation" policy, using zoning and land-use rules to "reserve some amount of land and other resources for human consumption." This ensures that the exponential growth of AI does not crowd out the physical needs of the population.
Critics might note that aggressive zoning reform faces immense local political resistance, and that mandating resource reservations could stifle the very innovation needed to lower costs. Yet, Smith's framing forces a necessary conversation about the physical limits of a digital economy.
The Case for a Sovereign Wealth Fund
Perhaps the most provocative element of Smith's commentary is his solution to the inevitable decline of labor's share of income. As AI substitutes for human labor, the owners of capital will capture an ever-larger slice of the pie. "It's easy to imagine a dark future where the lucky people who own the robots get everything," Smith writes. While traditional progressives look to income taxes, Smith argues this is logistically difficult and politically fragile.
He proposes a more structural solution: partial government ownership of corporate equity. "The government could use its tax revenue to acquire shares in companies," he suggests. This would create a permanent claim on future profits, similar to the Alaska Permanent Fund but scaled to the entire U.S. economy. "If Nvidia and OpenAI and Google end up eating more and more of the economy, and the government owns part of those companies, then the government can just redistribute part of those companies' profit to the American people."
Smith distinguishes this from the haphazard equity stakes taken by the current executive branch, arguing for a systematic approach. "The Democrats should simply systematize the process — have the U.S. sovereign wealth fund automatically buy a stake in the entire stock market," he urges. This would transform the AI boom from a source of inequality into a dividend for the entire populace, effectively socializing the gains of automation without nationalizing the industries themselves.
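To make the Alaska-style dividend mechanism concrete, here is a back-of-envelope sketch of how a fund's payout would scale. Every figure below is a hypothetical assumption for illustration, not a number from Smith's piece: a fund holding a 5% stake in a roughly $50 trillion U.S. equity market, paying out 4% of its value annually across roughly 335 million residents.

```python
# Back-of-envelope sketch of a sovereign wealth fund dividend.
# All figures are hypothetical assumptions for illustration only.

def annual_dividend_per_person(fund_value, payout_rate, population):
    """Per-capita dividend if the fund pays out a fixed share of its value each year."""
    return fund_value * payout_rate / population

fund_value = 0.05 * 50e12  # assumed 5% stake in a ~$50T stock market = $2.5T
dividend = annual_dividend_per_person(fund_value, 0.04, 335e6)
print(f"${dividend:,.0f} per person per year")  # roughly $300 under these assumptions
```

The modest result under these assumptions (on the order of a few hundred dollars per person) illustrates the scale problem flagged in the bottom line: a fund large enough to replace labor income would need a far bigger stake than any existing precedent.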
"Essentially, if AI is the new oil — a source of boundless wealth that needs very few human beings to produce — then a U.S. sovereign wealth fund should have a claim on that revenue stream."
Bottom Line
Smith's strongest contribution is his reframing of economic policy from a battle over distribution to a strategy of robustness against technological shock. His argument that the state must own a piece of the AI revolution to prevent a permanent labor crisis is both timely and structurally sound. However, the biggest vulnerability lies in the political feasibility of his "human reservation" policies and the sheer scale of capital required to build a meaningful sovereign wealth fund. Readers should watch for how this "abundance" framework evolves as the physical constraints of AI infrastructure become more apparent in the coming decade.