
Mass intelligence

Ethan Mollick identifies a quiet revolution that is about to upend how we work, learn, and think: we are no longer waiting for AI to become powerful; we are entering an era where that power is becoming as ubiquitous and cheap as a Google search. This piece is notable not for predicting a future breakthrough, but for documenting a sudden economic collapse in the cost of intelligence, one that makes the technology accessible to over a billion people overnight.

The Economics of Abundance

Mollick argues that the primary barriers to advanced AI have always been confusion and cost, both of which are now dissolving. He writes, "Until recently, free users of these systems (the overwhelming majority) had access only to older, smaller AI models that frequently made mistakes and had limited use for complex work." The author's analysis of the recent rollout of new model families highlights a critical shift in user experience. Previously, users had to navigate complex menus to select specific "reasoner" models capable of solving hard problems, a task that even power users failed at. "According to OpenAI, less than 7% of paying customers selected o3 on a regular basis, meaning even power users were missing out on what Reasoners could do," Mollick notes.


The core of the argument is that the industry is moving toward an "auto mode" where the system itself decides how much computing power to apply to a problem. This is designed to democratize access, but Mollick is candid about the initial friction: "The result is that one person using GPT-5 got a very smart answer while another got a bad one." Despite these growing pains, the data suggests the strategy is working. The percentage of paying customers using advanced reasoning models jumped from 7% to 24% in just days, and free users are finally getting access to tools that were previously locked behind paywalls.
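The routing idea behind "auto mode" can be sketched in a few lines. This is a hypothetical illustration of the concept only, not OpenAI's actual system: the model names (`fast-mini`, `standard`, `reasoner`) and the keyword-based difficulty heuristic are invented for the example; a real router would use a trained classifier.

```python
# Hypothetical sketch of "auto mode": the system, not the user, decides
# how much compute a request deserves. All names and thresholds are
# illustrative assumptions, not details from the article.

def estimate_difficulty(prompt: str) -> float:
    """Crude proxy for difficulty: longer prompts and reasoning-flavored
    keywords score higher. Capped at 2.0."""
    keywords = ("prove", "step by step", "debug", "analyze", "optimize")
    score = min(len(prompt) / 500, 1.0)
    score += 0.5 * sum(kw in prompt.lower() for kw in keywords)
    return min(score, 2.0)

def route(prompt: str) -> str:
    """Pick a (fictional) model tier based on estimated difficulty."""
    difficulty = estimate_difficulty(prompt)
    if difficulty < 0.3:
        return "fast-mini"   # cheap, fast model for simple queries
    elif difficulty < 1.0:
        return "standard"    # mid-tier model
    return "reasoner"        # expensive reasoning model

print(route("What's the capital of France?"))                               # fast-mini
print(route("Prove step by step that the sum of two even numbers is even."))  # reasoner
```

The design trade-off Mollick describes falls out of this structure: if the router misjudges difficulty, one user gets the reasoner while another with a similar question gets the cheap model, which is exactly the "one person got a very smart answer while another got a bad one" failure mode.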

Powerful AI is cheap enough to give away, easy enough that you don't need a manual, and capable enough to outperform humans at a range of intellectual tasks.

This economic shift is driven by a dramatic drop in the cost of computation. Mollick points out that while GPT-4 cost around $50 to process a million tokens, the newer, more capable models now cost mere cents. This isn't just a financial win; it's an environmental one. Google has reported a 33x improvement in energy efficiency per prompt in the last year alone. However, a counterargument worth considering is that while the marginal cost per prompt has collapsed, the sheer volume of usage could still lead to a net increase in energy consumption and water usage, a trade-off the author mentions but does not fully resolve.
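The scale of the cost collapse is easy to check with back-of-envelope arithmetic. The $50-per-million-token figure for GPT-4 comes from the text; the new-model price below is an assumed stand-in for the article's "mere cents", so the exact multiple is illustrative.

```python
# Back-of-envelope comparison of per-token costs.
OLD_PRICE_PER_M = 50.00  # GPT-4-era: ~$50 per million tokens (from the article)
NEW_PRICE_PER_M = 0.25   # assumption standing in for "mere cents", not a quoted figure

tokens = 1_000_000
old_cost = tokens / 1e6 * OLD_PRICE_PER_M
new_cost = tokens / 1e6 * NEW_PRICE_PER_M

print(f"old: ${old_cost:.2f}, new: ${new_cost:.2f}")
print(f"cost reduction: {old_cost / new_cost:.0f}x")  # 200x under these assumptions
```

Even under conservative assumptions the reduction is two orders of magnitude, which is what makes giving reasoning models away to free users economically viable.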

The End of Prompt Engineering

Perhaps the most surprising claim in the piece is that the complex skills users have spent years mastering are becoming obsolete. For a long time, getting good results from AI required learning specific techniques like "chain-of-thought" prompting. Mollick challenges this orthodoxy directly: "In a recent series of experiments, however, we have discovered that these techniques don't really help anymore." The author suggests that modern models are now so capable that they can infer user intent without the need for rigid, technical instructions.

This is illustrated vividly with the release of new image generation tools. Mollick describes uploading an image of the Apollo 11 astronauts and a tuxedo, then simply asking the AI to "dress Neil Armstrong on the left in this tuxedo." The result was a realistic image with impressive details like fabric folds and a NASA pin on the lapel. The ease of use is staggering, but it introduces a new layer of complexity regarding truth and history. "A distortion of a famous moment in history made possible by AI," Mollick observes, noting that while the output is impressive, it represents a "potential warning about how weird things are going to get when these sorts of technologies are used widely."

The Chaos of Mass Intelligence

The article culminates in a sobering look at the societal implications of handing these tools to a billion people. Mollick coins the term "Mass Intelligence" to describe this new reality, where the scarcity of intelligence is gone. He writes, "Every institution we have — schools, hospitals, courts, companies, governments — was built for a world where intelligence was scarce and expensive." The author warns that these institutions are now ill-equipped to handle a flood of users who can generate high-quality text, code, and images instantly.

The dual-use nature of this technology is stark. Mollick notes, "Some people have intense relationships with AI models while other people are being saved from loneliness. AI models may be causing mental breakdowns and dangerous behavior for some while being used to diagnose the diseases of others." The challenge for society is no longer access, but governance and trust. "How do we rebuild trust when anyone can fabricate anything?" he asks, highlighting the urgent need to redefine expertise in a world where fabrication is trivial.

Bottom Line

Mollick's strongest contribution is his reframing of AI adoption from a technical race to an economic inevitability that has already arrived. The argument's biggest vulnerability is its optimism regarding the speed of institutional adaptation; while the technology is ready, the legal and ethical frameworks to manage "Mass Intelligence" are nowhere close. Readers should watch for how schools and courts attempt to enforce rules when the tools to bypass them are free and effortless.

Sources

Mass intelligence

by Ethan Mollick · One Useful Thing

More than a billion people use AI chatbots regularly. ChatGPT has over 700 million weekly users. Gemini and other leading AIs add hundreds of millions more. In my posts, I often focus on the advances that AI is making (for example, in the past few weeks, AI chatbots from both OpenAI and Google earned gold medals in the International Math Olympiad), but that obscures a broader shift that's been building: we're entering an era of Mass Intelligence, where powerful AI is becoming as accessible as a Google search.

Until recently, free users of these systems (the overwhelming majority) had access only to older, smaller AI models that frequently made mistakes and had limited use for complex work. The best models, like Reasoners that can solve very hard problems and hallucinate much less often, required paying somewhere between $20 and $200 a month. And even then, you needed to know which model to pick and how to prompt it properly. But the economics and interfaces are changing rapidly, with fairly large consequences for how all of us work, learn, and think.

Powerful AI is Getting Cheaper and Easier to Access

There have been two barriers to accessing powerful AI for most users. The first was confusion. Few people knew that they needed to select an AI model at all. Even fewer knew that picking o3 from a menu in ChatGPT would get them access to an excellent Reasoner AI model, while picking 4o (which seems like a higher number) would give them something far less capable. According to OpenAI, less than 7% of paying customers selected o3 on a regular basis, meaning even power users were missing out on what Reasoners could do.

The second was cost. Because the best models are expensive, free users were often not given access to them, or else given very limited access. Google led the way in giving some free access to its best models, but OpenAI stated that almost none of its free customers had regular access to reasoning models prior to the launch of GPT-5.

GPT-5 was supposed to solve both of these problems, which is partially why its debut was so messy and confusing. GPT-5 is actually two things. It was the overall name for a family of quite different models, from the weaker GPT-5 Nano to the powerful GPT-5 Pro. It was also the name given to the tool that picked which model to use and how ...