Ethan Mollick doesn't just predict the future of work; he builds a prototype of it in real time, demonstrating that the barrier to entry for complex creation has collapsed while the barrier to quality has merely shifted. In his latest exploration for One Useful Thing, Mollick moves beyond the hype of "vibecoding" to reveal a stark reality: the era of the lone coder is ending, replaced by a new paradigm where human expertise is measured not by syntax memorization but by the ability to guide, troubleshoot, and validate machine output. This is not a story about AI replacing humans; it is a story about the redistribution of cognitive labor, where the most valuable skill is no longer writing the code, but knowing when the code is wrong.
The Illusion of Effortless Creation
Mollick begins by testing the limits of Andrej Karpathy's concept of "vibecoding," a practice where users prompt an AI to build software using natural language. He sets a high bar for himself, attempting to build a 3D simulation game from scratch despite knowing neither Linux nor JavaScript. The results are immediate and startling. "The very first thing I typed into Claude Code was: 'make a 3D game where I can place buildings of various designs and then drive through the town i create,'" Mollick writes. "I got a working application... about four minutes later, with no further input from me."
The speed here is the hook, but the fragility is the lesson. When the game felt "a little boring," Mollick simply asked for a firetruck, traffic, and the ability for buildings to burn. The AI complied instantly. However, the illusion of total automation shattered when a bug halted progress. Mollick describes a twenty-minute debugging session where he had to act as a detective, feeding error messages to the AI until the system self-corrected. The cost? Around $13 in API fees. The takeaway is profound: "vibecoding is most useful when you actually have some knowledge and don't have to rely on the AI alone." This framing is crucial because it counters the narrative that AI will render technical skills obsolete. Instead, it suggests that technical skills are becoming more abstract, shifting from implementation to architecture and oversight.
As Mollick puts it: "Vibecoding isn't about eliminating expertise but redistributing it - from writing every line of code to knowing enough about systems to guide, troubleshoot, and evaluate."
Critics might argue that this reliance on human intervention for basic debugging slows down the promised efficiency gains. Yet, Mollick's experience suggests that without that human "vibe," the output remains superficial. The machine generates the structure; the human provides the intent and the correction.
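The error-feedback cycle Mollick describes — the model writes code, the human runs it and forwards the raw error message, the model revises — can be sketched in a few lines. Everything here is illustrative: `ask_model` is a hypothetical stand-in for a real coding agent (Claude Code or similar), faked below so the loop can be demonstrated end to end; it is not Mollick's actual tooling.

```python
# A minimal sketch of the human-in-the-loop debugging cycle. `ask_model`
# is a hypothetical stand-in for a coding-agent API; our fake "agent"
# only knows how to fix one NameError, purely for illustration.

def ask_model(prompt: str) -> str:
    """Hypothetical agent call: returns revised source code."""
    if "NameError" in prompt:
        # A real agent would reason about the error; our fake just
        # defines the missing variable.
        return "speed = 10\nresult = speed * 2\n"
    return prompt

def vibe_debug(source: str, max_rounds: int = 5) -> str:
    """Feed runtime errors back to the model until the code runs cleanly."""
    for _ in range(max_rounds):
        try:
            exec(source, {})   # the "run it and see what happens" step
            return source      # ran cleanly; human judgment is still needed
        except Exception as err:
            # The human's role: decide the error text is worth forwarding.
            source = ask_model(
                f"This code fails with {type(err).__name__}: {err}\n"
                f"Fix it:\n{source}"
            )
    raise RuntimeError("agent could not self-correct; expert intervention required")

buggy = "result = speed * 2\n"   # 'speed' is undefined, so this raises NameError
fixed = vibe_debug(buggy)
print("speed" in fixed)          # the loop converged on runnable code
```

Note where the human sits in this loop: the machine does all the typing, but someone must recognize that an error occurred, judge which output to forward, and decide when "runs cleanly" actually means "is correct" — precisely the twenty minutes of detective work Mollick describes.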
The New Role of the Creative Director
The piece expands beyond coding to explore "vibeworking with expertise" in content creation. Mollick tests Manus, an autonomous agent, by asking it to create an interactive course on elevator pitching. The AI produced a comprehensive, polished syllabus in minutes, yet its flaws were apparent to an expert eye. "You can instantly see that it was too text heavy and did not include opportunities for knowledge checks or interactive exercises," Mollick notes. With a single follow-up prompt, the agent restructured the entire course to include videos and quizzes.
This demonstrates a shift in the professional role of the expert. The human is no longer the primary producer of content but the editor-in-chief. Mollick argues that "you have to know what you want to create; be able to judge whether the results are good or bad; and give appropriate feedback." This is a compelling redefinition of value. In a world where generating text or code is cheap, the premium moves to curation and judgment. The AI can do the heavy lifting of assembly, but it lacks the "instincts" to know if the final product actually serves a human need.
Deep Vibeworking: When Stakes Are High
The most significant portion of the article addresses "Deep Vibeworking," where the margin for error is non-existent. Mollick applies this to his own academic research, using a decade-old dataset on crowdfunding that he had never fully analyzed. He leveraged AI to generate hypotheses, analyze data, and draft a research paper. The speed was staggering: "It took less than an hour to create, as compared to weeks of thinking, planning, writing, coding and iteration."
However, the human element remained the critical control valve. The AI proposed statistically valid approaches that were inappropriate for the specific dataset. "Together, we worked through the hypothesis to generate fairly robust findings," Mollick explains, highlighting that the human expert was needed to filter the AI's suggestions based on domain-specific nuance. "I never had to write a line of code, but only because I knew enough to check the results and confirm that everything made sense." This is the core of the argument: AI amplifies the reach of the expert, but it cannot replace the expert's intuition. Without that intuition, the output is merely a collection of plausible-sounding but potentially flawed assertions.
As Mollick concludes: "The AI is far from being able to work alone, humans still provide both vibe and work in the world of vibework."
A counterargument worth considering is whether this "collaboration" is sustainable at scale. If every researcher or developer must spend significant time verifying AI output, do the time savings truly materialize? Mollick acknowledges this tension, noting that the landscape is a "moving target" and that the tools are not yet reliable enough for full autonomy. The risk is that users without deep expertise may mistake the AI's confidence for correctness, leading to the proliferation of high-quality-looking but fundamentally broken work.
Bottom Line
Mollick's most valuable contribution is his refusal to romanticize the technology; he treats AI as a powerful but fallible junior partner that requires constant supervision. The strongest part of his argument is the identification of "minimum viable knowledge" as the new currency of productivity. The biggest vulnerability lies in the assumption that everyone has the capacity to develop this new form of expertise. As these tools become ubiquitous, the divide may not be between those who can code and those who cannot, but between those who can effectively direct AI and those who cannot distinguish its output from reality.