Ethan Mollick identifies a quiet revolution that is about to upend how we work, learn, and think: we are no longer waiting for AI to become powerful; we are entering an era in which that power is becoming as ubiquitous and cheap as a Google search. The piece is notable not for predicting a future breakthrough, but for documenting a sudden economic collapse in the cost of intelligence, one that makes the technology accessible to over a billion people overnight.
The Economics of Abundance
Mollick argues that the primary barriers to advanced AI have always been confusion and cost, both of which are now dissolving. He writes, "Until recently, free users of these systems (the overwhelming majority) had access only to older, smaller AI models that frequently made mistakes and had limited use for complex work." His analysis of the recent rollout of new model families highlights a critical shift in user experience. Previously, users had to navigate complex menus to select specific "reasoner" models capable of solving hard problems, a task at which even power users failed. "According to OpenAI, less than 7% of paying customers selected o3 on a regular basis, meaning even power users were missing out on what Reasoners could do," Mollick notes.
The core of the argument is that the industry is moving toward an "auto mode" where the system itself decides how much computing power to apply to a problem. This is designed to democratize access, but Mollick is candid about the initial friction: "The result is that one person using GPT-5 got a very smart answer while another got a bad one." Despite these growing pains, the data suggests the strategy is working. The percentage of paying customers using advanced reasoning models jumped from 7% to 24% in just days, and free users are finally getting access to tools that were previously locked behind paywalls.
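The "auto mode" Mollick describes can be pictured as a dispatcher that estimates how hard a request is and routes it to either a cheap, fast model or an expensive reasoning model. The sketch below is purely illustrative: the difficulty heuristic, thresholds, and model names are invented here, and real systems use learned classifiers rather than keyword matching.

```python
# Illustrative sketch of an "auto mode" router: the system, not the user,
# decides how much compute a request deserves. The heuristic and model
# names are invented for this example.

def estimate_difficulty(prompt: str) -> float:
    """Crude proxy: longer prompts and reasoning keywords score higher."""
    keywords = ("prove", "debug", "step by step", "analyze", "optimize")
    score = min(len(prompt) / 500, 1.0)
    score += sum(0.2 for kw in keywords if kw in prompt.lower())
    return min(score, 1.0)

def route(prompt: str) -> str:
    """Send easy queries to a fast model, hard ones to a reasoner."""
    return "reasoner-model" if estimate_difficulty(prompt) > 0.4 else "fast-model"

print(route("What is the capital of France?"))                       # fast-model
print(route("Prove that the algorithm terminates, step by step."))   # reasoner-model
```

The friction Mollick notes ("one person using GPT-5 got a very smart answer while another got a bad one") corresponds to the router misjudging difficulty: the same question phrased two ways can land on different sides of the threshold.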
Powerful AI is cheap enough to give away, easy enough that you don't need a manual, and capable enough to outperform humans at a range of intellectual tasks.
This economic shift is driven by a dramatic drop in the cost of computation. Mollick points out that while GPT-4 cost around $50 to process a million tokens, the newer, more capable models now cost mere cents. This isn't just a financial win; it's an environmental one. Google has reported a 33x improvement in energy efficiency per prompt in the last year alone. However, a counterargument worth considering is that while the marginal cost per prompt has collapsed, the sheer volume of usage could still lead to a net increase in energy consumption and water usage, a trade-off the author mentions but does not fully resolve.
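The scale of that price collapse is easy to check with back-of-the-envelope arithmetic. Using the figure Mollick cites for the older model ($50 per million tokens) and an assumed placeholder for "mere cents" today (actual prices vary by provider and model):

```python
# Back-of-the-envelope cost comparison using the article's figures.
# The "newer model" price is an assumed placeholder, not a quoted rate.

OLD_COST_PER_MTOK = 50.00   # ~$50 per million tokens (GPT-4 era, per Mollick)
NEW_COST_PER_MTOK = 0.10    # assumed "mere cents" per million tokens

tokens = 10_000_000  # e.g. a month of heavy individual use

old_bill = tokens / 1_000_000 * OLD_COST_PER_MTOK
new_bill = tokens / 1_000_000 * NEW_COST_PER_MTOK

print(f"Old: ${old_bill:.2f}, New: ${new_bill:.2f}, "
      f"reduction: {old_bill / new_bill:.0f}x")
# Old: $500.00, New: $1.00, reduction: 500x
```

At that ratio, a workload that once cost as much as a monthly salary line item becomes a rounding error, which is what makes giving the capability away to free users economically viable.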
The End of Prompt Engineering
Perhaps the most surprising claim in the piece is that the complex skills users have spent years mastering are becoming obsolete. For a long time, getting good results from AI required learning specific techniques like "chain-of-thought" prompting. Mollick challenges this orthodoxy directly: "In a recent series of experiments, however, we have discovered that these techniques don't really help anymore." The author suggests that modern models are now so capable that they can infer user intent without the need for rigid, technical instructions.
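For readers unfamiliar with the technique being retired: "chain-of-thought" prompting meant wrapping a question in explicit reasoning scaffolding. A minimal contrast between the old ritual and the plain request that, per Mollick's experiments, now performs just as well (the wrapper below is illustrative, not any specific API):

```python
# Illustrative contrast: hand-engineered chain-of-thought scaffolding
# versus the plain request modern models handle equally well.

def chain_of_thought(question: str) -> str:
    """The old ritual: wrap the question in reasoning instructions."""
    return (
        "Let's think step by step.\n"
        f"Question: {question}\n"
        "First, break the problem into parts. Then solve each part. "
        "Finally, state the answer."
    )

def plain(question: str) -> str:
    """The new approach: just ask."""
    return question

q = "A train traveling 60 mph leaves at 3pm. When has it covered 90 miles?"
print(chain_of_thought(q))
print(plain(q))
```

The scaffolding in the first function is exactly the kind of overhead the article argues modern models now supply internally, leaving the user with only the question itself.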
This is illustrated vividly with the release of new image generation tools. Mollick describes uploading an image of the Apollo 11 astronauts and a tuxedo, then simply asking the AI to "dress Neil Armstrong on the left in this tuxedo." The result was a realistic image with impressive details like fabric folds and a NASA pin on the lapel. The ease of use is staggering, but it introduces a new layer of complexity regarding truth and history. "A distortion of a famous moment in history made possible by AI," Mollick observes, noting that while the output is impressive, it represents a "potential warning about how weird things are going to get when these sorts of technologies are used widely."
The Chaos of Mass Intelligence
The article culminates in a sobering look at the societal implications of handing these tools to a billion people. Mollick coins the term "Mass Intelligence" to describe this new reality, where the scarcity of intelligence is gone. He writes, "Every institution we have — schools, hospitals, courts, companies, governments — was built for a world where intelligence was scarce and expensive." The author warns that these institutions are now ill-equipped to handle a flood of users who can generate high-quality text, code, and images instantly.
The dual-use nature of this technology is stark. Mollick notes, "Some people have intense relationships with AI models while other people are being saved from loneliness. AI models may be causing mental breakdowns and dangerous behavior for some while being used to diagnose the diseases of others." The challenge for society is no longer access, but governance and trust. "How do we rebuild trust when anyone can fabricate anything?" he asks, highlighting the urgent need to redefine expertise in a world where fabrication is trivial.
Bottom Line
Mollick's strongest contribution is his reframing of AI adoption from a technical race to an economic inevitability that has already arrived. The argument's biggest vulnerability is its optimism regarding the speed of institutional adaptation; while the technology is ready, the legal and ethical frameworks to manage "Mass Intelligence" are nowhere close. Readers should watch for how schools and courts attempt to enforce rules when the tools to bypass them are free and effortless.