The shape of AI: Jaggedness, bottlenecks and salients

Ethan Mollick challenges the prevailing narrative that artificial intelligence will soon render human labor obsolete, arguing instead that AI's capabilities are defined by a "jagged frontier" where superhuman performance in one area coexists with baffling incompetence in another. This is not a story about a smooth curve of improvement, but a landscape of sudden lurches and stubborn bottlenecks that will dictate the future of work more than raw processing power ever will.

The Jagged Reality of Capability

Mollick begins by dismantling the intuition that AI difficulty maps to human difficulty. He notes that in the "ancient AI days of 2023," he and his colleagues coined the term "Jagged Frontier" to describe how these systems can be "superhuman at differential medical diagnosis or good at very hard math... and yet still be bad at relatively simple visual puzzles or running a vending machine." This framing is crucial because it explains why the technology feels so unpredictable to users. The system is not merely "dumber" than a human; it is operating on a completely different, often illogical, axis of competence.

While some futurists like Tomas Pueyo envision a future where the AI frontier simply outpaces human limitations, Mollick pushes back. He suggests that the jaggedness might be permanent, creating a scenario where "we get supersmart AIs which never quite fully overlap with human tasks." This is a sobering correction to the hype cycle. If the technology cannot remember new tasks and learn from them permanently, a key weakness he identifies, then the promise of a fully autonomous agent remains distant. The argument gains depth when viewed through the lens of historical "reverse salients," a concept from historian Thomas Hughes describing how progress often stalls on a single technical problem holding back an entire system. Just as early electrical grids were stalled by inefficient transmission, AI is currently held back by specific, non-obvious weaknesses.

"Jaggedness creates bottlenecks, and bottlenecks mean that even very smart AI cannot easily substitute for humans."

The Migration of Bottlenecks

The most compelling part of Mollick's analysis is the distinction between technical bottlenecks and institutional ones. He argues that "a system is only as functional as its worst components," and currently, those components are often things we don't think of as "intelligence." For instance, even if an AI can identify drug candidates faster than any human, the process is stalled by the need for "actual human patients who take actual time to recruit, dose, and monitor." Here, the bottleneck migrates from intelligence to institutions, and as Mollick rightly observes, "institutions move at institution speed."
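
Mollick's "worst component" point can be made concrete with a little arithmetic. The sketch below is our illustration, not from the article, with invented stage names and rates: the end-to-end throughput of a serial pipeline is the minimum of its stage rates, so making the AI stage a hundred times faster changes nothing while an institutional stage lags.

```python
# A minimal sketch of bottleneck logic: end-to-end throughput of a serial
# pipeline is capped by its slowest stage. Stage names and rates (candidates
# processed per month) are invented for illustration.
stages = {
    "identify_candidates (AI)": 10_000,
    "synthesize_compounds": 200,
    "run_clinical_trials (institutional)": 5,
    "regulatory_review (institutional)": 8,
}

def bottleneck(stages: dict[str, float]) -> tuple[str, float]:
    """Return the slowest stage and its rate, which cap the whole system."""
    name = min(stages, key=stages.get)
    return name, stages[name]

print(bottleneck(stages))  # ('run_clinical_trials (institutional)', 5)

# A 100x smarter AI stage leaves end-to-end throughput untouched:
stages["identify_candidates (AI)"] *= 100
print(bottleneck(stages))  # still ('run_clinical_trials (institutional)', 5)
```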

This is a vital insight for busy leaders who might expect immediate ROI from AI integration. The technology may be ready, but the surrounding ecosystem is not. Mollick illustrates this with a study where an AI reproduced twelve years of medical review work in two days, yet still required human intervention for "edge cases" like accessing supplementary files or emailing authors. The AI handled the heavy intellectual lifting but failed at the mundane, human-centric tasks that make up less than 1% of the work yet prevent full automation. Critics might note that this 1% gap could close quickly with better prompting or multimodal capabilities, but Mollick's point stands: the last mile of automation is often the hardest.
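
The arithmetic behind that last mile is essentially Amdahl's law; the framing and figures below are ours, not the study's. If 1% of the work must still be done by humans at human speed, the overall speedup is capped at 100x no matter how fast the automated 99% runs.

```python
def overall_speedup(automated_fraction: float, ai_speedup: float) -> float:
    """Amdahl's law: total speedup when only part of a task is accelerated."""
    human_residue = 1 - automated_fraction
    return 1 / (human_residue + automated_fraction / ai_speedup)

# Even an arbitrarily fast AI on 99% of the work (illustrative figure) is
# capped by the 1% of human-only tasks like emailing authors:
for s in (10, 100, 1_000, 1_000_000):
    print(f"AI {s:>9,}x faster -> end-to-end {overall_speedup(0.99, s):.1f}x")
# Output approaches, but never exceeds, 1 / 0.01 = 100x.
```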

The Power of the Lurch

Despite these limitations, Mollick predicts that progress will not be linear but will come in "lurches" when a specific reverse salient is finally overcome. He points to Google's recent image generation advances as a prime example. For years, the inability to generate coherent images was a bottleneck that prevented AI from creating useful visual presentations. Once that specific weakness was addressed, the entire system jumped forward. Suddenly, AI could generate slide decks that were not just text-based code, but visually flexible, style-aware documents.

"When one breaks, everything behind it comes flooding through."

This dynamic suggests that the next wave of disruption won't come from general intelligence, but from solving a specific, narrow problem that unlocks a flood of new applications. Whether it is memory, real-time learning, or physical interaction, the moment a lab fixes a reverse salient, the landscape changes overnight. However, this also means that for every bottleneck removed, the jagged frontier shifts, likely creating new edges where human expertise remains essential. The jobs of consultants and designers, for example, rely on "unwritten rules" and "buy-in" from the parties involved, tasks that remain firmly outside the current AI frontier.

Bottom Line

Mollick's strongest contribution is shifting the focus from benchmark scores to the specific bottlenecks that actually constrain real-world utility. His argument that institutions, not just algorithms, will determine the pace of adoption is a necessary reality check for the industry. The biggest vulnerability in this view is the assumption that certain human-centric tasks will remain permanent reverse salients; history shows that what seems impossible today often becomes trivial tomorrow. Readers should watch not for when AI becomes generally smarter, but for which specific bottleneck breaks next, as that will signal the next sudden lurch in capability.

Deep Dives

Explore these related deep dives:

  • Reverse salient

    The article explicitly references Thomas Hughes's concept of "reverse salients": the single technical or social problem holding back an entire system from advancing. This systems-theory concept from the history of technology is directly relevant and likely unfamiliar to most readers.

  • Theory of constraints

    The article's central argument about bottlenecks limiting AI capability directly parallels Goldratt's Theory of Constraints from operations management. This would provide readers with a formal framework for understanding why 'a system is only as functional as its worst components.'

Sources

The shape of AI: Jaggedness, bottlenecks and salients

by Ethan Mollick · One Useful Thing

Back in the ancient AI days of 2023, my co-authors and I invented a term to describe the weird ability of AI to do some work incredibly well and other work incredibly badly in ways that didn’t map very well to our human intuition of the difficulty of the task. We called this the “Jagged Frontier” of AI ability, and it remains a key feature of AI and an endless source of confusion. How can an AI be superhuman at differential medical diagnosis or good at very hard math (yes, they are really good at math now, famously outside the frontier until recently) and yet still be bad at relatively simple visual puzzles or running a vending machine? The exact abilities of AI are often a mystery, so it is no wonder AI is harder to use than it seems.

I think jaggedness is going to remain a big part of AIs going forward, but there is less certainty over what it means. Tomas Pueyo posted this viral image on X that outlined his vision. In his view, the growing frontier will outpace jaggedness. Sure, the AI is bad at some things and may still be relatively bad even as it improves, but the collective human ability frontier is mostly fixed, and AI ability is growing rapidly. What does it matter if AI is relatively bad at running a vending machine, if the AI still becomes better than any human?

While the future is always uncertain, I think this conception misses out on a few critical aspects about the nature of work and technology. First, the frontier is very jagged indeed, and it might be that, because of this jaggedness, we get supersmart AIs which never quite fully overlap with human tasks. For example, a major source of jaggedness is that LLMs do not remember new tasks and learn from them in a permanent way. A lot of AI companies are pursuing solutions to this issue, but it may be that this problem is harder to solve than researchers expect. Without memory, AIs will struggle to do many tasks humans can do, even while being superhuman in other areas. Colin Fraser drew two examples of what this sort of AI-human overlap might look like. You can see how AI is indeed superhuman in some areas, but in others it is either far below human level or not overlapping at all. ...