Noah Smith identifies a startling reversal in the AI industry's public strategy: the architects of automation are suddenly insisting that their machines will not make humans obsolete. This isn't just a change in tone; it is a desperate, calculated pivot to avoid populist backlash and potential government nationalization as public sentiment turns sharply against the technology.
The Great Rebranding
Smith opens by highlighting a fundamental contradiction at the heart of the industry's messaging. For years, the prevailing narrative was that artificial intelligence would render human labor unnecessary. Now, the very people building these systems are walking that back. "Something big happened in the world of AI the other day: Sam Altman, founder and CEO of OpenAI, and probably the person who's most commonly regarded as the face of the industry, declared that the purpose of AI is not to take people's jobs," Smith writes.
This shift is particularly jarring given the historical record. Smith points out that Altman himself once warned of "a new idle class" and predicted that "the price of many kinds of labor…will fall toward zero." The author notes that while Altman has always been somewhat more optimistic than his peers, the current rhetoric represents a stark departure from the "doomer-ish" predictions that once defined the sector. The industry is effectively trying to sell a product it previously claimed would destroy its own customer base.
"Basically every recent poll shows the American public turning very strongly against AI."
The motivation for this pivot is clear to Smith: the old sales pitch was a political disaster. The industry was effectively telling voters, "Our product's purpose is to put you and your descendants on welfare forever, and it may also wipe out your whole species." Smith argues that this was a "bad sales pitch, to put it mildly," and that the resulting public anger has created a vacuum that populists on both sides of the aisle are eager to fill.
The Political Reckoning
The commentary then shifts to the political consequences of the industry's initial arrogance. Smith observes that the souring mood among Independents is an "invitation to populists like Trump and Bernie to make political hay by reining in the industry." The named politicians stand in for a broader dynamic: government oversight expanding in response to public fear.
Smith details how the administration is now considering an executive order to create an AI working group and a formal review process for new models. This marks a "stark reversal" from the previous hands-off approach. The trigger appears to be the realization that these models possess dangerous capabilities, such as the ability to identify cybersecurity vulnerabilities that could lead to a "cybersecurity reckoning."
Critics might note that Smith somewhat glosses over the genuine security risks that justify this oversight, focusing heavily on the PR aspect. However, the author's point stands: the industry's failure to manage the narrative has invited the very regulation it sought to avoid. The threat is no longer just about job loss; it is about the potential for nationalization. "If AI models displace large swaths of the labor market, such that a handful of companies run most of the economy, then some kind of nationalization becomes potentially imperative," Smith quotes Samuel Hammond as saying.
The New Sales Pitch: The Human Touch
So, what is the new strategy? Smith breaks down the emerging arguments used to reassure the public. The first is the "task creation" theory, which suggests that AI will generate new kinds of work. The second is "induced demand," or Jevons' Paradox, where efficiency leads to increased consumption and thus more work. Smith cites Aaron Levie, CEO of Box, to illustrate this: "AI making it easy to produce more code will mean we start to apply code to far more parts of our businesses."
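The induced-demand argument can be made concrete with a toy constant-elasticity demand model (my illustration, not Smith's; the prices and elasticity values are invented for exposition): if demand for code is elastic, a tenfold drop in its price *raises* total spending on code, and with it the work of applying code everywhere.

```python
def total_spend(price, elasticity, base_price=1.0, base_qty=1.0):
    """Constant-elasticity demand: qty = base_qty * (price / base_price) ** -elasticity.
    Returns total spending (price * quantity demanded) at the given price."""
    qty = base_qty * (price / base_price) ** -elasticity
    return price * qty

# Suppose AI makes producing code 10x cheaper (price falls from 1.0 to 0.1).
cheap = 0.1

# Elastic demand (elasticity = 2): total spending on code rises ~10x.
# That is the Jevons / induced-demand case Levie is describing.
jevons = total_spend(cheap, elasticity=2.0)

# Inelastic demand (elasticity = 0.5): spending shrinks, and so would jobs.
shrink = total_spend(cheap, elasticity=0.5)
```

The whole debate hinges on which elasticity regime software demand is actually in, which the model leaves as a free parameter.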
This argument relies on historical precedent. Smith reminds us that technologies have always destroyed specific occupations but usually created more demand for human labor in the long run. However, he acknowledges a lingering anxiety: this logic holds only until the technology becomes superior to humans at all tasks. To address this, the industry is converging on a new long-term promise: the value of the "human touch."
"The durable jobs of the future won't be about monitoring AI systems or prompt engineering. Those are transitional roles in the automated sector. The durable jobs will be in the relational sector, where the human element is the product itself."
Smith references Alex Imas to explain that future employment will likely center on care, hospitality, and bespoke services—roles where the human connection is the product. This reframing is clever. It moves the goalposts from "AI will do your job" to "AI will make your job more human." It is a much smarter political move than the previous narrative of human obsolescence.
Is It Just PR?
The final section of Smith's piece questions whether this is merely a cosmetic change or a genuine shift in development goals. He notes that while some researchers like Dario Amodei of Anthropic still preach the "job-pocalypse," others like Altman are positioning OpenAI as the "human-friendly alternative." Smith suggests that by repeatedly telling the public that AI will augment rather than replace, industry leaders might eventually start to believe it themselves, or at least steer their engineering toward that outcome.
"Describing AI as a normal technology — a successor to the steam engine and the automobile and the computer — is much smarter politics."
Smith admits that this is an uphill battle. The notion that AI is a "human-remover" is deeply ingrained in the public consciousness. Furthermore, the reassurance offered by "comparative advantage" weakens as machines become cheaper and more capable than humans at almost everything: comparative advantage may guarantee that humans have something to do, but not that it pays well. The promise of a "post-work" society where humans are paid simply to be human is a comforting story, but one that lacks a solid economic foundation if the technology truly outstrips humans across the board.
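The hole in the comparative-advantage story can be sketched with a back-of-the-envelope bound (a hypothetical model of my own, not from Smith's piece): an employer will never pay a human more per hour than it would cost to buy the same hourly output from an AI, so the human wage ceiling collapses as compute gets cheaper, even if humans remain employed.

```python
def human_wage_ceiling(human_output_per_hour, ai_output_per_dollar):
    """Upper bound on the hourly wage an employer will pay a human:
    the dollar cost of buying the same hourly output from an AI.
    wage <= human_output_per_hour / ai_output_per_dollar."""
    return human_output_per_hour / ai_output_per_dollar

# A human produces 5 units/hour. As AI output per dollar grows, the
# wage ceiling falls toward zero even though the human is never
# strictly "unemployable" under comparative advantage:
for ai_output in (1, 10, 100, 1000):
    print(ai_output, human_wage_ceiling(5, ai_output))
```

The point of the sketch is that employment and livelihood come apart: comparative advantage bounds who does what, not what the work is worth.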
Bottom Line
Smith's analysis is a sharp diagnosis of an industry in crisis, correctly identifying that the pivot away from "job-killing" rhetoric is a survival tactic against inevitable political backlash. The strongest part of the argument is the link between public perception and the looming threat of nationalization. However, the piece's biggest vulnerability is its reliance on the historical assumption that technology always creates more jobs than it destroys—a pattern that may not hold if AI achieves true general intelligence. Readers should watch whether this new "human-centric" messaging translates into actual product design or remains a hollow PR exercise before the next election cycle.