It’s not “bad marketing” from A.I. Companies

Matt Yglesias delivers a counterintuitive diagnosis of the artificial intelligence industry's public relations crisis: the alarming rhetoric isn't a calculated marketing blunder, but a reflection of genuine, deeply held beliefs among the technology's architects. In an era where corporate messaging is usually sanitized for mass consumption, Yglesias argues that the leaders of major AI firms are not trying to scare investors, but are instead convinced they are standing on the precipice of a world-altering transformation that could lead to human extinction or mass unemployment. This perspective shifts the debate from "how do we fix the spin?" to "how do we govern a technology whose creators are sincerely terrified of its success?"

The Sincerity of the Apocalypse

The core of Yglesias's argument dismantles the popular narrative that AI executives are engaging in fear-mongering to drive up stock prices or secure regulatory moats. He writes, "What A.I. company executives are saying about their products — that they might lead to human extinction and almost certainly will lead to large-scale permanent disemployment — is so obviously 'bad messaging' that I would really urge people to consider that it's not a 'message' at all." This is a crucial distinction. If the fear were merely a tactic, it would be easier to dismiss; because it appears to be a conviction, it demands a serious policy response.

Yglesias traces this mindset to the origins of the major players. He notes that "OpenAI was founded by people who sincerely held these beliefs long before they released GPT-2," and that the founders of Anthropic left OpenAI specifically because they felt it had become too focused on commercial applications rather than the existential stakes. The author suggests that these leaders operate under a specific theory of change: "They believe, roughly speaking, that the rapid pace of A.I. improvement that we've seen over the past 10 years will accelerate because A.I. is now smart enough to meaningfully contribute to A.I. progress." This self-referential loop, in which AI builds better AI, is the engine of their anxiety.

The commentary is strengthened by Yglesias's observation that even the most pragmatic figures in the industry understand the political cost of this messaging but feel powerless to change it. He points out that Daniela Amodei, a co-founder of Anthropic, previously worked in political communications for a highly successful congressman and "almost certainly understands that, from a messaging perspective, it would be better to be more reassuring and less 'this will change the world in unsettling ways.'" Yet, she and her colleagues persist. As Yglesias puts it, "It's a problem of sincere belief." This reframing forces the reader to confront the possibility that the "bad marketing" is actually a rational response to a perceived reality that the rest of the public has not yet accepted.

The Bubble of Capital and Complexity

The piece also addresses the structural disconnect between the people funding these technologies and the general public. A reader's question in the text highlights a growing concern: the capital requirements for modern AI are so vast that the pool of investors is tiny, and their values are detached from "normies." Yglesias engages with this by acknowledging the unique nature of the current funding environment. He notes that "the number of people who can write the check for the amount of money Anthropic etc need is very limited — and what those people care about is also very detached from what 'normies' care about almost by definition."

This creates a feedback loop in which the technology is developed by a closed circle of true believers and funded by a narrow elite, potentially insulating the industry from broader societal concerns. Yglesias draws a parallel to the Theranos scandal, noting that "they received a lot of funding from people who have very little knowledge but a lot of money," though he clarifies that the AI founders are not necessarily frauds, but rather "true believers" in a specific, high-stakes future. The argument here is that the sheer technical complexity of large language models, and of theoretical underpinnings like the "attention mechanism," creates a barrier to entry that prevents the average voter from understanding the pitch. The narrative is thus left entirely in the hands of the engineers and their investors.

It's just genuinely the case that the core people at these companies are true believers.

Critics might argue that Yglesias gives too much credit to the sincerity of these executives, suggesting that the "extinction" narrative is indeed a strategic move to justify massive capital expenditure or to preempt regulation by framing the technology as too dangerous for anyone but the original creators to control. While the author cites Holden Karnofsky's "Most Important Century" to validate the seriousness of these beliefs, the possibility remains that the "apocalypse" is a useful fiction for maintaining market dominance.

The Limits of Centralization and Political Reality

While the primary focus is on AI, Yglesias weaves in a broader commentary on governance and public opinion, drawing a parallel to the housing crisis. He references a post by Ben Southwood noting that the U.K. government's "level of local land-use preemption far exceeds American YIMBYs' wildest dreams," yet this centralization has failed to solve the housing shortage. Yglesias uses this to illustrate a deeper point: "if the whole national electorate was hostile to market-rate housing, I don't know what would save us."

This connects back to the AI discussion. Just as British voters favor rent control over building new towns, the American public may be fundamentally hostile to the disruptive nature of AI, regardless of what the "centralized" tech elite believes is necessary. Yglesias writes, "I see a poll in which 71 percent of British people say they favor imposing rent control while only 47 percent favor building a new set of towns." He argues that institutional arrangements cannot easily overcome a national electorate that is "hostile to market-rate housing," and by extension, potentially hostile to the radical economic shifts AI promises.

The author also touches on the structural weaknesses of the American political system, noting that "mayors are generally more pro-housing than city council members" and that "at-large councils approve more housing than ones where the members are all district-based." This observation about incentives suggests that the current political landscape, where politicians are often beholden to narrow district interests rather than broader national needs, may be ill-equipped to manage the transition to an AI-driven economy. The disconnect between the "true believers" in Silicon Valley and the "hostile electorate" in the rest of the country creates a dangerous policy vacuum.

Bottom Line

Yglesias's most compelling contribution is the insistence that the AI industry's alarming rhetoric is not a marketing failure but a symptom of a genuine ideological divide between the technology's creators and the society it will impact. The strongest part of the argument is the evidence that even politically savvy figures within the industry feel compelled to maintain this apocalyptic narrative, suggesting it is rooted in conviction rather than calculation. However, the piece's biggest vulnerability is its potential underestimation of the strategic utility of fear; if the "true belief" narrative serves to consolidate power and capital, it may be both sincere and manipulative simultaneously. As the technology accelerates, the gap between the "country of geniuses in a data center" and the public's ability to understand or regulate it will likely widen, making this disconnect the defining political challenge of the coming decade.

Deep Dives

Explore these related deep dives:

  • YIMBY

    The article contrasts the UK's centralized land-use preemption with the American YIMBY movement to illustrate how different institutional structures fail to overcome deep-seated public hostility toward market-rate housing.

  • Attention (machine learning)

    The text questions how many investors truly understand the technical reality of the attention mechanism, using this specific concept to argue that the complexity of AI creates a disconnect between technical founders and the wealthy backers funding them.

Sources

It’s not “bad marketing” from A.I. Companies

by Matt Yglesias · Slow Boring