
AI nerds are people who like everything

This piece cuts through the usual hype cycle to ask a question most industry observers avoid: what psychological drive actually fuels the relentless acceleration of artificial intelligence? Alberto Romero argues that the architects of our digital future are not merely profit-seekers or visionary saviors, but individuals motivated by a deep-seated need to overturn the social hierarchies that once excluded them. In an era where policy debates focus on regulation and safety, Romero's psychological profiling offers a startlingly different lens on why the technology is moving so fast, regardless of the human cost.

The Psychology of Unfiltered Creation

Romero begins by dismantling the common assumption that AI developers are driven primarily by greed. He suggests that while money and power are visible factors, they are merely the surface layer of a deeper, more peculiar worldview. "Normal people (whom they refer to as 'normies,' i.e., the majority of the population) misunderstand AI nerds," Romero writes, noting that journalists often miss the mark by focusing on executives rather than the engineers "painstakingly building the AI models in the lab." This distinction is crucial; it shifts the blame from corporate boardrooms to the specific cultural mindset of the technical elite.


The core of Romero's argument rests on a concept he calls "overexistence." He posits that these developers possess an undiscerning enthusiasm for any technological capability, regardless of its utility or harm. "Nerds are people who like things simply because they exist," Romero quotes Sam Kriss, using this to frame the industry's attitude toward everything from life-saving medical breakthroughs to the flooding of the internet with low-quality AI-generated video. The author argues that for this demographic, the mere fact that a machine can do something new is sufficient justification for doing it. "To the AI nerd, the fact that something is possible is enough justification for its existence," he observes. This framing is provocative because it suggests that the lack of ethical guardrails is not an oversight, but a feature of their psychology.

Critics might note that this characterization risks painting a diverse group of engineers with too broad a brush, potentially ignoring those who actively work on safety and alignment. However, Romero anticipates this by admitting the term is a "statistical fiction" that captures common traits rather than defining every individual. The strength of his argument lies in explaining the collective momentum of the industry, which often outpaces any societal consensus about where it should go.

To the AI nerd, the fact that something is possible is enough justification for its existence.

The Trap of Infinite Abundance

Romero then connects this psychological trait to a dangerous economic fallacy: the belief that infinite technological abundance will automatically solve societal problems. He draws a sharp parallel to Goodhart's law, a principle from the 1970s stating that "when a measure becomes a target, it ceases to be a good measure." Romero applies this to the AI industry's obsession with scale, arguing that the pursuit of "more" has become an end in itself, detached from actual human flourishing. "Make more AIs that make more things, and you'll realize, perhaps too late, that 'having more things' is not what you wanted," he warns.

The author suggests that the industry's promise of a utopian future is a rationalization for the chaos currently being unleashed. He writes that "AI nerds claim that 'everyone for himself' equals 'all for all,'" glossing over the massive gaps between short-term disruption and long-term benefits. This is a compelling critique of the "move fast and break things" mentality, which Romero argues is now being applied to the very fabric of human cognition and social trust. He points out that the "unconditional abundance" being created leads to a "putrid epistemic landscape" and mental health crises, yet these costs are dismissed as temporary growing pains.

The argument here is particularly potent because it challenges the notion that technology is neutral. Romero implies that the specific drive for "unfiltered creation" is a maladaptive inclination that thrives in a world that values speed over stability. "In practice, 'liking things just because they exist' serves the AI nerd as a psychological proxy for a deeper psychological need: they enjoy seeing how the world drowns in chaos," he contends. This is a bold claim, suggesting that the chaos itself is a form of validation for those who feel they were previously marginalized.

Revenge of the Underdog

Perhaps the most controversial section of Romero's essay is his analysis of the emotional undercurrents driving this technological rush. He suggests that the push for AI is, at a subconscious level, a form of "ontological revenge" against a society that once valued social grace and aesthetic judgment over technical prowess. "The post-AI nerd imagines the abundant future as the revenge they couldn't exact on their own," Romero writes, describing a desire to reset the stakes so that the old social hierarchies no longer apply.

He invokes the sociologist Helmut Schoeck to explain this dynamic, noting that "The envious man does not so much want to have what is possessed by others as yearn for a state of affairs in which no one would enjoy the coveted object or style of life." Romero argues that AI developers are not trying to defeat scarcity in the traditional sense, but to create a world so chaotic and overloaded that the social advantages of the "cool kids" become meaningless. "If I don't belong, no one does," he summarizes this mindset. This reframes the entire AI race not as a quest for human betterment, but as a psychological coping mechanism for a specific demographic that feels alienated from mainstream culture.

If I don't belong, no one does.

This interpretation adds a layer of human tragedy to the policy debate. It suggests that the resistance to slowing down AI development is not just about economic incentives, but about a deep-seated need to validate a worldview that has long been dismissed. Romero writes that "AI, by nature so fuzzy and nebulous, acts as the perfect placeholder for such crazy delusions," allowing developers to project their own desires for a new order onto the technology. While this psychological reading is speculative, it provides a powerful explanation for why the industry seems so resistant to external criticism or calls for restraint.

Bottom Line

Romero's essay succeeds in shifting the conversation from the mechanics of AI to the motivations of its creators, offering a chillingly plausible explanation for the industry's relentless pace. The strongest part of his argument is the link between the psychological need for "overexistence" and the real-world consequences of unregulated technological expansion. However, the piece's vulnerability lies in its reliance on anecdotal observation rather than empirical data, leaving some of its psychological claims open to debate. As governments and regulatory bodies grapple with how to govern this technology, understanding the deep-seated drives Romero describes may be just as critical as understanding the algorithms themselves.


Sources

AI nerds are people who like everything

This is the first part of a five-part essay on the psychology of AI nerds, a topic I’ve been wanting to explore for a while (I published a timid approximation recently).

Why “AI nerds”? I use the term liberally, but not derogatorily. It’s a shorthand to describe, under one overarching label that encompasses character, thinking patterns, interests, and goals, the psychology of those building AI for the industry at the highest level (arguably the weirdest demographic with the greatest power to shape our future). Concretely, I mean the kind of person who sees AI as an engine of unfathomable progress (their words), a vehicle toward utopia. Normal people see them as cultists, members of a new faith.

By virtue of variability and personal inclinations, however, the term is a statistical fiction, a stereotype: it captures some common traits that define the average AI person, but no individual can be faithfully encapsulated by the label. Many people working in AI don't fit under this label, while some who don't work in AI do.

What’s my motivation to write this? Normal people (whom they refer to as “normies,” i.e., the majority of the population) misunderstand AI nerds. This leads to confusion when trying to answer one important question: why are they doing this AI thing the way they’re doing it? For instance, journalists tend to think AI nerds are moved by money and power, but that’s far from the whole story.

To the extent that money and power are involved, thinking solely in those terms is barely scratching the surface (thinking about money is thinking about executives and investors—the visible faces—not about the engineers and developers painstakingly building the AI models in the lab; and yet, without them, there’s no industry!). I hope to amend this critical misunderstanding.

What’s my goal from writing this? That you leave smarter than you came in about the people behind AI. I want to help you make sense of AI nerds and urge you to start thinking in terms of psychological traits rather than mundane incentives like power and money. AI nerds are shaping everyone’s future singlehandedly and unilaterally, so figuring them out is our last chance to prepare for that. Otherwise, any reasonable resistance or objection is dead in the water.

Do I have the authority to write this? I am confident the answer is yes. Two reasons: first, I live among AI nerds online, reading them, observing ...