Ethan Mollick challenges the prevailing narrative that artificial intelligence will soon render human labor obsolete, arguing instead that AI's capabilities are defined by a "jagged frontier" where superhuman performance in one area coexists with baffling incompetence in another. This is not a story about a smooth curve of improvement, but a landscape of sudden lurches and stubborn bottlenecks that will dictate the future of work more than raw processing power ever will.
The Jagged Reality of Capability
Mollick begins by dismantling the intuition that AI difficulty maps to human difficulty. He notes that in the "ancient AI days of 2023," he and his colleagues coined the term "Jagged Frontier" to describe how these systems can be "superhuman at differential medical diagnosis or good at very hard math... and yet still be bad at relatively simple visual puzzles or running a vending machine." This framing is crucial because it explains why the technology feels so unpredictable to users. The system is not merely "dumber" than a human; it is operating on a completely different, often illogical, axis of competence.
While some futurists like Tomas Pueyo envision a future where the AI frontier simply outpaces human limitations, Mollick pushes back. He suggests that the jaggedness might be permanent, creating a scenario where "we get supersmart AIs which never quite fully overlap with human tasks." This is a sobering correction to the hype cycle. If, as he identifies, the technology cannot permanently learn from new tasks, then the promise of a fully autonomous agent remains distant. The argument gains depth when viewed through the lens of historical "reverse salients," a concept from the historian Thomas Hughes describing how progress often stalls on a single technical problem that holds back an entire system. Just as early electrical grids were held back by a lack of efficient transmission, AI is currently held back by specific, non-obvious weaknesses.
"Jaggedness creates bottlenecks, and bottlenecks mean that even very smart AI cannot easily substitute for humans."
The Migration of Bottlenecks
The most compelling part of Mollick's analysis is the distinction between technical bottlenecks and institutional ones. He argues that "a system is only as functional as its worst components," and currently, those components are often things we don't think of as "intelligence." For instance, even if an AI can identify drug candidates faster than any human, the process is stalled by the need for "actual human patients who take actual time to recruit, dose, and monitor." Here, the bottleneck migrates from intelligence to institutions, and as Mollick rightly observes, "institutions move at institution speed."
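The logic here is essentially Amdahl's law applied to institutions: speeding up one stage of a sequential process yields little overall gain when another stage dominates. The sketch below is a toy illustration of that point, not anything from Mollick's essay; the pipeline stages and durations are hypothetical numbers chosen only to show the shape of the effect.

```python
# Toy Amdahl's-law-style model: a strictly sequential pipeline where
# one "intelligence" stage gets 100x faster but the institutional
# stages (recruiting, dosing, monitoring) are unchanged.

def end_to_end_days(stages: dict) -> float:
    """Total duration of a strictly sequential pipeline, in days."""
    return sum(stages.values())

# Hypothetical drug-development stage durations (days) -- illustrative only.
pipeline = {
    "candidate_identification": 365.0,  # the stage AI accelerates
    "trial_recruitment": 540.0,         # institutional bottleneck
    "dosing_and_monitoring": 720.0,     # institutional bottleneck
}

baseline = end_to_end_days(pipeline)

# Make the AI-driven stage 100x faster; everything else stays put.
accelerated = dict(pipeline,
                   candidate_identification=pipeline["candidate_identification"] / 100)

speedup = baseline / end_to_end_days(accelerated)
print(f"baseline: {baseline:.0f} days; with 100x-faster AI: "
      f"{end_to_end_days(accelerated):.0f} days")
print(f"overall speedup: {speedup:.2f}x")  # far below 100x
```

A 100x improvement in the "smart" stage moves the end-to-end timeline only modestly, because the slow institutional stages now set the pace, which is exactly how a bottleneck migrates from intelligence to institutions.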
This is a vital insight for busy leaders who might expect immediate ROI from AI integration. The technology may be ready, but the surrounding ecosystem is not. Mollick illustrates this with a study where an AI reproduced twelve years of medical review work in two days, yet still required human intervention for "edge cases" like accessing supplementary files or emailing authors. The AI handled the heavy intellectual lifting but failed at the mundane, human-centric tasks that make up less than 1% of the work yet prevent full automation. Critics might note that this 1% gap could close quickly with better prompting or multimodal capabilities, but Mollick's point stands: the last mile of automation is often the hardest.
The Power of the Lurch
Despite these limitations, Mollick predicts that progress will not be linear but will come in "lurches" when a specific reverse salient is finally overcome. He points to Google's recent image generation advances as a prime example. For years, the inability to generate coherent images was a bottleneck that prevented AI from creating useful visual presentations. Once that specific weakness was addressed, the entire system jumped forward. Suddenly, AI could generate slide decks that were not just text-based code, but visually flexible, style-aware documents.
"When one breaks, everything behind it comes flooding through."
This dynamic suggests that the next wave of disruption won't come from general intelligence, but from solving a specific, narrow problem that unlocks a flood of new applications. Whether it is memory, real-time learning, or physical interaction, the moment a lab fixes a reverse salient, the landscape changes overnight. However, this also means that for every bottleneck removed, the jagged frontier shifts, likely creating new edges where human expertise remains essential. The jobs of consultants and designers, for example, rely on "unwritten rules" and "buy-in" from parties involved—tasks that remain firmly outside the current AI frontier.
Bottom Line
Mollick's strongest contribution is shifting the focus from benchmark scores to the specific bottlenecks that actually constrain real-world utility. His argument that institutions, not just algorithms, will determine the pace of adoption is a necessary reality check for the industry. The biggest vulnerability in this view is the assumption that certain human-centric tasks will remain permanent reverse salients; history shows that what seems impossible today often becomes trivial tomorrow. Readers should watch not for when AI becomes generally smarter, but for which specific bottleneck breaks next, as that will signal the next sudden lurch in capability.