AI's big messaging pivot

Noah Smith identifies a startling reversal in the AI industry's public strategy: the architects of automation are suddenly insisting that their machines will not make humans obsolete. This isn't just a change in tone; it is a desperate, calculated pivot to avoid populist backlash and potential government nationalization as public sentiment turns sharply against the technology.

The Great Rebranding

Smith opens by highlighting a fundamental contradiction at the heart of the industry's messaging. For years, the prevailing narrative was that artificial intelligence would render human labor unnecessary. Now, the very people building these systems are walking that back. "Something big happened in the world of AI the other day: Sam Altman, founder and CEO of OpenAI, and probably the person who's most commonly regarded as the face of the industry, declared that the purpose of AI is not to take people's jobs," Smith writes.

This shift is particularly jarring given the historical record. Smith points out that Altman himself once warned of "a new idle class" and predicted that "the price of many kinds of labor…will fall toward zero." The author notes that while Altman has always been somewhat more optimistic than his peers, the current rhetoric represents a stark departure from the "doomer-ish" predictions that once defined the sector. The industry is effectively trying to sell a product it previously claimed would destroy its own customer base.

"Basically every recent poll shows the American public turning very strongly against AI."

The motivation for this pivot is clear to Smith: the old sales pitch was a political disaster. The industry was effectively telling voters, "Our product's purpose is to put you and your descendants on welfare forever, and it may also wipe out your whole species." Smith argues that this was a "bad sales pitch, to put it mildly," and that the resulting public anger has created a vacuum that populists on both sides of the aisle are eager to fill.

The Political Reckoning

The commentary then shifts to the political consequences of the industry's initial arrogance. Smith observes that the souring mood among Independents is an "invitation to populists like Trump and Bernie to make political hay by reining in the industry." While the specific politicians mentioned in the source text are framed here as representatives of a broader political shift, the core dynamic is the rise of government oversight in response to public fear.

Smith details how the administration is now considering an executive order to create an AI working group and a formal review process for new models. This marks a "stark reversal" from the previous hands-off approach. The trigger appears to be the realization that these models possess dangerous capabilities, such as the ability to identify cybersecurity vulnerabilities that could lead to a "cybersecurity reckoning."

Critics might note that Smith somewhat glosses over the genuine security risks that justify this oversight, focusing heavily on the PR aspect. However, the author's point stands: the industry's failure to manage the narrative has invited the very regulation it sought to avoid. The threat is no longer just about job loss; it is about the potential for nationalization. "If AI models displace large swaths of the labor market, such that a handful of companies run most of the economy, then some kind of nationalization becomes potentially imperative," Smith quotes Samuel Hammond as saying.

The New Sales Pitch: The Human Touch

So, what is the new strategy? Smith breaks down the emerging arguments used to reassure the public. The first is the "task creation" theory, which suggests that AI will generate new kinds of work. The second is "induced demand," or the Jevons paradox, where efficiency gains lead to increased consumption and thus more work overall. Smith cites Aaron Levie, CEO of Box, to illustrate this: "AI making it easy to produce more code will mean we start to apply code to far more parts of our businesses."

This argument relies on historical precedent. Smith reminds us that technologies have always destroyed specific occupations but usually created more demand for human labor in the long run. However, he acknowledges a lingering anxiety: this logic holds only until the technology becomes superior to humans at all tasks. To address this, the industry is converging on a new long-term promise: the value of the "human touch."

"The durable jobs of the future won't be about monitoring AI systems or prompt engineering. Those are transitional roles in the automated sector. The durable jobs will be in the relational sector, where the human element is the product itself."

Smith references Alex Imas to explain that future employment will likely center on care, hospitality, and bespoke services—roles where the human connection is the product. This reframing is clever. It moves the goalposts from "AI will do your job" to "AI will make your job more human." It is a much smarter political move than the previous narrative of human obsolescence.

Is It Just PR?

The final section of Smith's piece questions whether this is merely a cosmetic change or a genuine shift in development goals. He notes that while some researchers like Dario Amodei of Anthropic still preach the "job-pocalypse," others like Altman are positioning OpenAI as the "human-friendly alternative." Smith suggests that by repeatedly telling the public that AI will augment rather than replace, industry leaders might eventually start to believe it themselves, or at least steer their engineering toward that outcome.

"Describing AI as a normal technology — a successor to the steam engine and the automobile and the computer — is much smarter politics."

Smith admits that this is an uphill battle. The notion that AI is a "human-remover" is deeply ingrained in the public consciousness. Furthermore, the argument that we will always have "comparative advantage" over machines weakens as machines become cheaper and more capable than humans at almost everything. The promise of a "post-work" society where humans are paid simply to be human is a comforting story, but one that lacks a solid economic foundation if machines truly do surpass us at every task.

Bottom Line

Smith's analysis is a sharp diagnosis of an industry in crisis, correctly identifying that the pivot away from "job-killing" rhetoric is a survival tactic against inevitable political backlash. The strongest part of the argument is the link between public perception and the looming threat of nationalization. However, the piece's biggest vulnerability is its reliance on the historical assumption that technology always creates more jobs than it destroys—a pattern that may not hold if AI achieves true general intelligence. Readers should watch whether this new "human-centric" messaging translates into actual product design or remains a hollow PR exercise before the next election cycle.

Deep Dives

Explore these related deep dives:

  • Lump of labour fallacy

    This economic misconception—that there is a fixed amount of work to be done—is the precise theoretical framework Altman and Huang are invoking to argue that AI will not permanently eliminate employment.

  • Technological unemployment

    While the article discusses the fear of job loss, this specific concept traces the historical debate over whether automation creates a permanent surplus of labor, providing the necessary context for the 'doomer' narrative Altman is rejecting.

  • Post-work society

    The article mentions Altman's past support for Universal Basic Income as a remedy for obsolescence; this concept explores the specific societal structures and philosophical arguments for a world where traditional employment is no longer the primary source of human purpose.

Sources

AI's big messaging pivot

by Noah Smith · Noahpinion

Something big happened in the world of AI the other day: Sam Altman, founder and CEO of OpenAI, and probably the person who’s most commonly regarded as the face of the industry, declared that the purpose of AI is not to take people’s jobs.

And he recently called AI CEOs “tone-deaf” for declaring that AI is going to take people’s jobs.

In fact, this shift represents more evolution than revolution. Years ago, Altman did seem to generally agree with the folk consensus that AI’s purpose is to make most or all humans obsolete; in 2014 he warned that we could be faced with “a new idle class”, and explored the idea of Universal Basic Income as a remedy. In 2021 he wrote that “The price of many kinds of labor…will fall toward zero.”

But in recent years, Altman has consistently stated that although AI will destroy many occupations, it will create new tasks for humans to do. In 2024 he wrote that “I have no fear that we’ll run out of things to do (even if they don’t look like ‘real jobs’ to us today)”, and in 2025 he declared that “We will find new things to do, new ways to be useful to each other, and new ways to compete, but they may not look very much like the jobs of today.” He has reiterated that prediction in interviews.

OpenAI’s mission statement, meanwhile, continues to define the company’s goal as the creation of Artificial General Intelligence (AGI), which it defines as “highly autonomous systems that outperform humans at most economically valuable work”. That “most” does leave some wiggle room. But perhaps more importantly, the company is talking about AGI less and less — its 2026 statement of principles mentions the term only twice, as compared with 12 times in the 2018 version. OpenAI also removed a clause about AGI in its agreement with Microsoft, meaning that the term no longer defines its contractual business obligations.

So although Altman has never been quite as doomer-ish as some of his colleagues when it comes to AI and jobs, you can definitely feel the winds shifting. In fact, there has always been a contingent of tech leaders who have been broadly optimistic about AI and jobs, and who are now speaking up more vociferously. Nvidia’s Jensen Huang has consistently predicted that AI will create more jobs than it destroys, but recently he ...