The shape of artificial intelligence

Most commentary on artificial intelligence treats its evolution as a smooth, inevitable curve toward total dominance. Alberto Romero challenges that comforting picture, arguing instead that AI's intelligence is not a growing blob but a "spiky star": a shape in which extraordinary capabilities in specific domains coexist with deep, possibly permanent gaps in basic reasoning. The distinction is not merely academic; it changes how we should prepare for the next decade of technological integration, suggesting that "superintelligence" may be geometrically impossible to achieve in the way we currently fear.

The Illusion of the Growing Blob

Romero begins by challenging the dominant narrative that AI is simply a "growing organism" slowly covering the territory of human capability. He notes that this view relies on a flawed metaphor: the idea that human intelligence is a "fixed territory, a fortress—or, to choose a visually gentler metaphor, a circle." Romero writes, "The best way to lift some of the fog is by comparison with the object closest to AI that we know of: humans. The comparison is flawed in many ways (anthropomorphism and all that), but that's precisely why I'm writing this: I want to point out the flaws at the same time that I highlight the virtues of the analogy."

This framing is crucial because it forces us to confront the bias in our own thinking. Romero points out that popular visualizations, such as those by writer Tomas Pueyo, depict AI as a pink blob that eventually swallows the blue circle of human skill. Romero argues this is a dangerous oversimplification. "The narrative here is linear and inevitable: the blob gets bigger. Eventually, it covers >90-95% of the blue circle of human skill," he observes, only to dismantle it immediately. The problem, he suggests, is that this view assumes intelligence is a single, linear spectrum—a "ladder you climb"—a concept dating back to psychologist Charles Spearman's 1904 identification of the g factor. By clinging to this old model, we miss the reality that AI is not becoming "smarter" in a general sense; it is becoming more specialized in ways that are alien to human cognition.
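
To see why a single number misleads, consider a toy calculation (my illustration, not Romero's or Spearman's): two capability profiles over four hypothetical domains can collapse to exactly the same scalar score while describing radically different kinds of mind.

```python
# Illustrative toy only: a flat "generalist" profile and a spiky "specialist"
# profile, scored across the same four hypothetical domains.
human = [0.6, 0.6, 0.6, 0.6]   # even competence everywhere
model = [1.0, 1.0, 0.3, 0.1]   # long spikes, deep valleys

def single_score(profile):
    """Collapse a profile to one number (a crude stand-in for a general
    intelligence score; the real g factor comes from factor analysis)."""
    return sum(profile) / len(profile)

print(single_score(human), single_score(model))  # 0.6 0.6 -- indistinguishable
```

The averages match, yet only the full vector reveals the jaggedness; this is the ladder-versus-star distinction in miniature.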

The jaggedness is not a temporary feature of this interstitial moment but perhaps a fundamental feature of AI's alienness.

Critics might argue that scaling laws will eventually smooth out these irregularities, filling in the gaps as models become more complex. Romero anticipates this, noting that while the "spikes" of capability will grow longer, the "valleys" of incompetence will likely remain deep. The evidence suggests that simply adding more data or compute does not fix the fundamental architecture of these systems.
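
To make that asymmetry concrete, here is a deliberately simple sketch (my own, with invented numbers) that treats scaling as uniform multiplication over a capability profile: the spikes stretch dramatically while scores that start near zero stay near zero.

```python
# Illustrative toy only: model "scaling up" as multiplying every per-domain
# score by the same factor. The ceiling rises steeply; the floor barely moves.
profile = {
    "code generation":    0.90,
    "sonnet writing":     0.80,
    "letter counting":    0.05,
    "decimal comparison": 0.10,
}

def scale(profile, factor):
    """Return the profile with every score multiplied by the scaling factor."""
    return {domain: score * factor for domain, score in profile.items()}

scaled = scale(profile, 4)
print(max(scaled.values()))  # 3.6 -- the longest spike quadrupled
print(min(scaled.values()))  # 0.2 -- the deepest valley is still near zero
```

Under this (admittedly simplistic) multiplicative assumption, the ratio of ceiling to floor never changes: uniform scaling makes the star bigger, not rounder.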

The Spiky Star of Intelligence

To correct the "blob" metaphor, Romero turns to data scientist Colin Fraser, who proposed a more accurate visualization: the "spiky star." In this model, AI capabilities do not expand uniformly. Instead, the system excels wildly in specific domains while failing at tasks that seem trivial to a child. Romero highlights the absurdity of this duality: "You feel like you are interacting with an alien savant (they're more 'clever' than 'intelligent,' to use mathematician Terence Tao's most recent description)."

This distinction between "cleverness" and "intelligence" is the article's most potent insight. Romero illustrates the extreme jaggedness by noting that an AI can "write a perfect sonnet about quantum physics" or "distinguish visually similar dog breeds," yet it might fail to "tell you how many r's are in the word 'strawberry'" or determine whether 9.11 is smaller than 9.9. "It's trivial to raise the ceiling of capabilities but hard to raise the floor," Romero writes, a maxim that reframes our entire understanding of progress. The system is not a generalist waiting to mature; it is a collection of hyper-specific tools that lack a coherent core.
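
It is worth pausing on how trivial the "valley" tasks actually are. The snippet below (my demonstration, not Romero's) solves both quoted failure cases in two lines of ordinary Python, which is precisely what makes the models' stumbles so jarring.

```python
# Both canonical "valley" tasks are one-liners in conventional code.
print("strawberry".count("r"))  # 3 -- letter counting is exact string arithmetic

# 9.11 < 9.9 numerically, even though "11" beats "9" read as version numbers,
# which is roughly the misreading that trips up language models.
print(9.11 < 9.9)               # True
```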

This perspective draws a sharp contrast with the history of AI development. Just as the 1956 Dartmouth workshop initially defined AI as a search for general problem-solving, and the subsequent rise of expert systems in the 70s and 80s revealed the limits of rule-based logic, today's large language models have hit a similar wall of specificity. The "spiky star" suggests we are not on a trajectory toward Artificial General Intelligence (AGI) as traditionally defined. "If this conceptualization is correct, AGI might be 'geometrically impossible,'" Romero concludes, arguing that the gaps between the spikes may never close regardless of how much the spikes themselves grow.

What This Means for the Future

The implications of viewing AI as a spiky star rather than a growing blob are profound for policy and industry. If the "jagged frontier" is a permanent feature, then the fear of a sudden, total takeover by a superintelligent entity is misplaced. Instead, we face a future of uneven automation in which AI excels at pattern recognition and data synthesis but remains unreliable for common-sense reasoning and physical-world navigation. Romero warns that "the conversation around the shape of AI is fundamental: you can only know what to do, think, or predict if you know what the object of your actions, thoughts, and predictions looks like."

By accepting this "alien" nature, policymakers and corporate leaders can stop chasing the mirage of a human-like AGI and start building safeguards around these specific, unpredictable capabilities. The "beautifully balanced instant" Romero describes, where humans and AI intersect everywhere, is not a prelude to obsolescence but a long-term state of collaboration with a tool that is brilliant in some ways and bafflingly incompetent in others.

The gaps between the spikes—the 'valleys' of incompetence—remain deep.

Bottom Line

Romero's geometric reframing of AI capability is a necessary corrective to the breathless hype surrounding the field, offering a grounded, evidence-based view that intelligence is not a single dimension to be conquered. While the argument relies heavily on current model limitations that could theoretically shift with new architectures, its core insight—that the "jaggedness" is fundamental rather than temporary—provides a vital lens for navigating the next era of technological adoption. The strongest takeaway is that we must stop waiting for AI to become human and start learning how to work with its alien, spiky nature.

Sources

The shape of artificial intelligence

I. Spooky shapes at a distance.

The shape of things only becomes legible at a distance.

For instance, history demands temporal distance. The phrase "the Western Roman Empire fell in 476 AD" only became a fact once historians, zooming in and out on the primary sources of the entire period, compressed a gradual transformation into a clean endpoint. The deposition of Romulus Augustus, the last Western emperor, was recorded at the time; its status as the fall emerged only later, when distance allowed patterns across centuries of political and administrative decay to crystallize into the shape of a broken empire.

Distance can also be spatial rather than temporal. In Peru, large ground drawings—now known as the Nazca Lines—were used as markers or signals on the landscape. From ground level, they are difficult to interpret. Their meaning only becomes clear when viewed from above, where the full shapes can be seen at once.

Although AI is nearing its 70th birthday, it’s been only three years since ChatGPT was launched, eight since the transformer paper was published, and thirteen since AlexNet’s victory on the ImageNet challenge, which implies the deep learning revolution is barely a wayward teenager. I think, however, that we must try to give a clearer shape to the current manifestation of AI (chatbots, large language models, etc.). We are the earliest historians of this weird, elusive technology, and as such, it’s our duty to begin a conversation that’s likely to take decades (or centuries, if we remain alive by then) to be fully fleshed out, once spatial and temporal distance reveal what we’re looking at.

(In Why Obsessing Over AI Today Blinds Us to the Bigger Picture, one of my favorite essays of 2025, I argued that new technologies take a long time to settle into our habits, traditions, and ways of working. So long, in fact, that trying to end the discussion early with a definitive theoretical claim—“AI art is not art because X”—misses the point. That kind of claim was the core of an essay by science-fiction author Ted Chiang, published in The New Yorker in 2024, which I addressed in my piece. I still stand by my position. To be clear: this article is not trying to make that sort of argument.)

One reason why this conversation needs to happen now, before AI is fully mature, is that even if ...