Noah Smith challenges the prevailing fatalism in Silicon Valley by arguing that even if artificial intelligence surpasses human capability in every single task, humans will still retain plentiful, high-paying jobs. This counterintuitive claim rests not on the hope that humans are uniquely creative or empathetic, but on the rigid economic reality of resource constraints and the law of comparative advantage. For a busy professional navigating the anxiety of automation, this is a vital recalibration: the threat isn't that AI will do everything, but that it will become too expensive to do everything.
The Engineer's Fatalism vs. Economic Reality
Smith begins by acknowledging the dominant narrative among the very people building these systems. He observes that "AI engineers, founders, and VCs are pretty much always working on automating human labor," leading them to an attitude of "melancholy, fatalism, and pride" in which they view the displacement of workers as the inevitable outcome of their own success. This perspective assumes a simple supply-and-demand model where automating tasks shrinks the domain of human work, driving wages down as labor supply floods the remaining niches.
However, Smith dismantles this intuition with historical data. He notes that "the median American individual earned about 50% more in 2022 than in 1974," despite centuries of automation shrinking the agricultural and manufacturing sectors. The core of his argument is that "we invent new tasks for humans to do over time," producing a continuous diversification of labor rather than a contraction. The Great Famine of 1876–1878 offers an instructive contrast to this resilience: its staggering human toll stemmed not from any shortfall in food production capacity but from catastrophic failures of distribution and resource allocation, a reminder that economic outcomes are often dictated by logistics and constraints rather than raw capability.
"The economic danger of AI isn't really that it'll take all our jobs; the danger is that it'll gobble up all the land and energy, leaving too little for human use."
Smith's framing here is crucial because it shifts the policy debate from "how do we retrain workers" to "how do we manage the physical limits of the technology." He argues that the fear of total obsolescence ignores the concept of producer-specific constraints. Just as a venture capitalist might hire a secretary even if they could type faster themselves, because the VC's time is better spent on deals, AI will face similar bottlenecks.
The Power of Comparative Advantage
The article's most potent section explains why "everyone — every single person, every single AI, everyone — always has a comparative advantage at something." Smith clarifies a common misconception: comparative advantage is not about who is better at a task (absolute advantage), but about who is better at a task relative to their other options.
He illustrates this with a hypothetical scenario where an AI is superior to a human doctor in every metric. Yet, if that same AI could generate significantly more value by acting as an electrical engineer, the opportunity cost of using it for medicine becomes too high. "The net value of using the AI as a doctor for that one-hour appointment is actually negative," Smith writes, because the compute required could be deployed elsewhere for greater return. In this dynamic, the human doctor retains the job not because they are better, but because their opportunity cost is lower.
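The opportunity-cost arithmetic behind this scenario can be sketched in a few lines. The dollar figures below are illustrative assumptions, not numbers from the article:

```python
# Illustrative values: what one hour of scarce AI compute earns in each role,
# and what one hour of human labor earns. All numbers are assumed.
ai_value_per_hour = {"electrical_engineer": 500, "doctor": 400}
human_doctor_value_per_hour = 200

# The opportunity cost of spending AI compute on medicine is the value
# forgone in the compute's best alternative use.
best_alternative = max(
    v for role, v in ai_value_per_hour.items() if role != "doctor"
)
net_value_ai_as_doctor = ai_value_per_hour["doctor"] - best_alternative

print(net_value_ai_as_doctor)  # -100: negative, so the human doctor keeps the job
```

Even though the AI is absolutely better at medicine here (400 versus 200), deploying it as a doctor destroys value relative to engineering; the human's lower opportunity cost wins the role.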
This logic holds up well against the "horses" argument—the idea that humans will become like draft animals, obsolete once machines can pull plows. Smith counters that unlike horses, humans are not competing against machines for the same finite resource in the same way. The constraint on AI is compute and energy, whereas the constraint on humans is time. As long as compute remains a scarce, expensive resource, the market will naturally allocate it to the highest-value tasks, leaving lower-value (but still high-paying for humans) tasks to human labor.
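The market allocation Smith describes, scarce compute flowing to the highest-value tasks first while humans absorb the remainder, can be sketched as a simple greedy assignment. The task names, values, and compute budget are all hypothetical:

```python
# Hypothetical task values ($ per task); compute_budget is how many tasks
# the available AI compute can cover before it runs out.
task_values = {
    "chip_design": 900,
    "drug_discovery": 700,
    "radiology": 300,
    "bookkeeping": 150,
}
compute_budget = 2  # assumed: enough compute for only two tasks

# Greedy allocation: AI takes the highest-value tasks, humans take the rest.
ranked = sorted(task_values, key=task_values.get, reverse=True)
ai_tasks = ranked[:compute_budget]
human_tasks = ranked[compute_budget:]

print(ai_tasks)     # ['chip_design', 'drug_discovery']
print(human_tasks)  # ['radiology', 'bookkeeping']
```

In this toy model, radiology and bookkeeping remain paid human work not because the AI could not do them, but because the compute is worth more elsewhere.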
Critics might note that this model assumes a perfectly fluid market where wages adjust instantly to reflect opportunity costs, ignoring the friction of retraining, geographic immobility, or the potential for AI owners to suppress wages through monopoly power. While the economic theory is sound, the transition period could be brutal if the "new tasks" don't emerge fast enough to absorb displaced workers.
"It doesn't matter how much compute we get, or how fast we build new compute; there will always be a limited amount of it in the world, and that will always put some limit on the amount of AI in the world."
Smith's argument gains further depth when considering Moore's second law, the observation that the cost of semiconductor fabrication plants roughly doubles every four years, which puts upward pressure on the cost of new compute as we push the boundaries of chip density and energy efficiency. If the cost of compute rises, the "AI wage" rises with it, making human labor competitive for an even wider range of tasks. This suggests that the bottleneck isn't intelligence, but the physical infrastructure required to sustain it.
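The "AI wage" idea can be made concrete with a toy calculation; the compute prices, compute intensity, and human wage below are assumptions chosen for illustration:

```python
HUMAN_WAGE = 40.0        # assumed $/hour for the human worker
COMPUTE_INTENSITY = 2.0  # assumed compute-hours consumed per task-hour of AI work

def ai_wage(compute_price_per_hour: float) -> float:
    """Effective hourly cost of AI labor at a given compute price."""
    return compute_price_per_hour * COMPUTE_INTENSITY

# As the compute price rises, the AI wage rises and the human becomes
# the cheaper producer for this task.
for price in (10.0, 25.0, 50.0):
    wage = ai_wage(price)
    cheaper = "human" if HUMAN_WAGE < wage else "AI"
    print(f"compute at ${price}/hr -> AI wage ${wage}/hr -> {cheaper} is cheaper")
```

At $10/hour compute the AI underprices the human; by $25/hour and above, the human is the cheaper option, which is the mechanism by which rising compute costs widen the range of tasks left to human labor.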
The Policy Implication: Limiting the Machine
The conclusion of Smith's piece is a call for regulation, but not the kind that limits innovation. Instead, he advocates for "some sort of laws to make sure that AI never eats up too much of the energy and land that humans need to live." He suggests that without constraints on data centers, the technology could indeed "gobble up all the land and energy," creating a scarcity that harms human welfare.
This is a nuanced position that avoids the Luddite trap of banning technology while acknowledging the physical reality of the digital economy. By framing the issue as a resource allocation problem rather than a "man vs. machine" conflict, Smith provides a path forward that protects human labor without stifling progress. Policymakers crafting future industrial policy would do well to weigh these physical constraints rather than focusing solely on the abstract potential of algorithms.
"Most of the technologists I know take an attitude towards this future that's equal parts melancholy, fatalism, and pride — sort of an Oppenheimer-esque 'Now I am become death, destroyer of jobs' kind of thing."
Smith's critique of the technologist mindset is sharp: they are so focused on the capability of their creations that they ignore the economics of their deployment. Their dystopian predictions become self-fulfilling prophecies only if we fail to manage the resources that power the machines.
Bottom Line
Noah Smith's argument is a robust defense of human labor that relies on the unshakeable laws of economics rather than optimistic speculation about human uniqueness. Its greatest strength is reframing the AI threat from a displacement of skills to a competition for physical resources like energy and compute. However, the argument's vulnerability lies in its assumption that markets will efficiently allocate these resources; without proactive policy to manage the energy and land constraints Smith identifies, the transition could still be chaotic. The reader should watch for how the White House and regulatory bodies address the physical footprint of data centers, as this will be the true determinant of whether AI remains a tool for human prosperity or a drain on human resources.