Ethan Mollick doesn't just predict a future where AI writes code; he demonstrates that the most valuable human skill in this new era is the ability to manage it. In a striking experiment with executive MBA students, he reveals that management fundamentals—often dismissed as "soft skills"—are actually the hard currency of the AI age. This isn't about learning new prompts; it's about rediscovering how to delegate effectively when the workforce is infinite, cheap, and tireless.
The Four-Day Startup Sprint
Mollick details an experimental class at the University of Pennsylvania where executive MBA students, many of whom could not code, built functional startups in just four days. They utilized tools like Claude Code and Google Antigravity to generate prototypes, conduct market research, and build financial models. The results were not merely theoretical; the teams produced working demos like "Ticket Passport" and "Revenue Resilience." Mollick notes, "I would estimate that what I saw in a couple of days was an order of magnitude further along the path to a real startup than I had seen out of students working over a full semester before AI."
The speed of this output is staggering, but the real insight lies in the agility it affords. Because the cost of failure dropped so drastically, students could pivot instantly. "By lowering the costs of pivoting, it was much easier to explore the possibilities without being locked in or even explore multiple startups at once: you just tell the AI what you want," Mollick writes. This suggests a fundamental shift in how innovation happens: the bottleneck is no longer technical execution, but the clarity of the vision.
Critics might argue that these four-day prototypes lack the rigorous stress-testing of real market conditions, and that the "working" code may be fragile. However, Mollick's point stands: the barrier to entry for starting has collapsed, changing the economics of experimentation entirely.
The skills that are so often dismissed as "soft" turned out to be the hard ones.
The Equation of Agentic Work
Moving beyond the classroom, Mollick introduces a mental model for deciding when to delegate to AI, which he calls the "Equation of Agentic Work." He breaks this down into three variables: the time it takes a human to do the task (Human Baseline Time), the likelihood the AI succeeds (Probability of Success), and the time required to prompt and evaluate the AI (AI Process Time). The decision to use AI hinges on a trade-off: "You're trading off 'doing the whole task' against 'paying the overhead cost,' possibly multiple times until you get something acceptable."
He cites recent data from the GDPval paper, noting that with advanced models like GPT-5.2, the balance has shifted. "GPT-5.2 Thinking and Pro models tied or beat human experts an average of 72% of the time," Mollick explains. This high success rate makes the overhead of evaluation worthwhile even for complex tasks. If a task takes a human seven hours and the AI succeeds 72% of the time with an hour of prompting and review per attempt, the expected overhead comes to well under two hours, a substantial net saving. The argument is compelling because it moves past the hype of "AI doing everything" to a practical calculus of efficiency.
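Mollick's trade-off can be made concrete with a small sketch. The model below is an assumption on our part, not his exact formula: it treats attempts as independent retries until success, so the expected number of attempts at success probability p is 1/p (a geometric distribution), each attempt costing the prompting-and-review overhead.

```python
def expected_ai_time(ai_process_time: float, p_success: float) -> float:
    """Expected total time if you re-prompt the AI until it succeeds.

    With independent attempts, the expected number of tries is 1/p_success,
    each costing the prompt-and-review overhead (AI Process Time).
    """
    if not 0.0 < p_success <= 1.0:
        raise ValueError("p_success must be in (0, 1]")
    return ai_process_time / p_success


def worth_delegating(human_baseline: float,
                     ai_process_time: float,
                     p_success: float) -> bool:
    """Delegate when the expected AI time beats the Human Baseline Time."""
    return expected_ai_time(ai_process_time, p_success) < human_baseline


# Illustrative numbers from the discussion above: a seven-hour task,
# a 72% success rate, and one hour of prompting and review per attempt.
print(expected_ai_time(1.0, 0.72))        # roughly 1.39 hours expected
print(worth_delegating(7.0, 1.0, 0.72))   # delegation wins here
```

Under these assumptions, delegation pays off whenever AI Process Time divided by Probability of Success is smaller than Human Baseline Time; note that for a half-hour task with the same overhead, the inequality flips and doing it yourself is faster.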
However, this equation relies heavily on the user's ability to accurately judge the "Probability of Success." If a professional cannot distinguish between a plausible-looking failure and a genuine success, the time saved evaporates. As Mollick admits, "The hardest cases are plausible-looking failures," and catching them demands a level of expertise that not everyone possesses.
Management as the New Prompting
The core of Mollick's argument is that the future of work is not about being a better prompter, but a better manager. He observes that the most effective way to guide AI is to use the same documentation frameworks humans have used for centuries: Product Requirements Documents, shot lists, and Five Paragraph Orders. "All of these documents work remarkably well as AI prompts for this new world of agentic work," he writes. The logic is sound: these documents exist to transfer intent from one mind to another, a challenge that is identical whether the recipient is a junior employee or a large language model.
Mollick emphasizes that subject matter expertise is the critical differentiator. "It turns out that the key to success was actually the last bit of the previous paragraph: telling the AI what you want," he notes. His students succeeded not because they were AI experts, but because they knew how to scope problems and define deliverables in their specific fields. "What's scarce is knowing what to ask for," he concludes. This reframes the narrative from "AI will replace managers" to "AI will replace those who cannot manage."
A counterargument worth considering is that this view assumes a level of stability in business processes that may not exist. If the market changes faster than the ability to write clear requirements, the "management" approach could become a bottleneck. Yet, Mollick's evidence suggests that the ability to articulate a goal is becoming the primary constraint.
Now the "talent" is abundant and cheap. What's scarce is knowing what to ask for.
Bottom Line
Mollick's most powerful contribution is reframing management not as a bureaucratic necessity, but as the essential interface between human intent and machine capability. The argument's strength lies in its grounding in real-world experimentation rather than speculation, proving that delegation skills are the new competitive advantage. The biggest vulnerability remains the human element: if we cannot accurately evaluate the output, the abundance of cheap AI talent becomes a liability rather than a superpower.