Ethan Mollick cuts through the noise of AI hype by grounding his advice in hard data: a recent usage breakdown from OpenAI revealing that the vast majority of users are not idly chatting with bots but actively seeking information. This isn't a theoretical guide for tech enthusiasts; it is a pragmatic manual for a world where 10% of humanity now relies on these tools weekly, and the gap between free and paid capabilities has become a matter of professional consequence.
The Myth of the "One-Size-Fits-All" Model
Mollick's central thesis dismantles the idea that there is a single "best" AI. He argues that the landscape is actually a fragmented ecosystem of nine dominant models, ranging from the four most advanced closed systems to open-weights families like DeepSeek and Mistral. "If the chart suggests that a free model is good enough for what you use AI for, pick your favorite and use it without worrying about anything else in the guide," Mollick writes. This is a crucial distinction for busy professionals who might otherwise feel pressured to subscribe to every service. The author correctly identifies that for casual information seeking, the free tiers of Gemini or Perplexity are often sufficient, saving users significant money.
However, the real value emerges when the stakes rise. Mollick suggests that for serious work, users must navigate a choice between three paid leaders: Claude, Gemini, and ChatGPT. He notes that while they all offer advanced features like voice mode and image recognition, they diverge sharply in "personality" and specific strengths. "For real work that matters, I suggest using Agent models, they are more capable and consistent and are much less likely to make errors," he advises. This distinction between a "chat" model that answers quickly and an "agent" model that takes time to search, code, and verify is the piece's most actionable insight. It shifts the user's mindset from asking a question to deploying a worker.
Critics might argue that the rapid pace of model iteration makes specific model names obsolete within months, yet Mollick's framework of categorizing by function (chat vs. agent vs. wizard) remains robust even as the underlying technology shifts.
The Hidden Cost of "Auto" Mode
A particularly sharp critique in Mollick's guide targets the default settings of major platforms. He reveals that even paying subscribers often get a diluted experience because the "auto" mode frequently routes them to weaker, faster models to save costs. "The issue is that GPT-5 is not one model, it is many, from the very weak GPT-5 mini to the very good GPT-5 Thinking to the extremely powerful GPT-5 Pro," Mollick explains. He urges users to manually select the "Thinking" or "Pro" variants for complex tasks, a step many overlook.
This is a vital point for the time-poor reader: paying for a subscription does not guarantee you are using the best brain available; you must explicitly demand it. Mollick further highlights the importance of "Deep Research" modes, which allow the AI to spend 10 to 15 minutes scouring the web before answering. "Deep Research is a key AI feature for most people, even if they don't know it yet," he asserts. The evidence he cites—that these reports often impress lawyers and consultants—suggests that the future of professional work lies not in speed, but in the depth of verification the AI can perform autonomously.
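Mollick's advice to bypass "auto" mode amounts to routing tasks yourself rather than letting the platform pick. A minimal sketch of that habit, with the GPT-5 tier names taken from his quote but the helper function, its argument values, and the exact model-name strings all illustrative assumptions rather than any platform's real API:

```python
# Sketch of deliberate model routing, per Mollick's warning that "auto"
# often serves a weaker model. Tier names echo the GPT-5 variants he
# mentions; the strings and this helper are illustrative, not a real API.

MODEL_TIERS = {
    "quick": "gpt-5-mini",       # fast lookups, casual questions
    "serious": "gpt-5-thinking", # real work: analysis, drafting, coding
    "critical": "gpt-5-pro",     # high-stakes work worth the wait
}

def pick_model(stakes: str) -> str:
    """Return an explicit model choice instead of relying on 'auto' mode."""
    try:
        return MODEL_TIERS[stakes]
    except KeyError:
        raise ValueError(f"unknown stakes level: {stakes!r}")

# For work that matters, always name the model explicitly.
print(pick_model("serious"))  # -> gpt-5-thinking
```

The point of the sketch is the habit, not the code: the user, not the platform's cost-saving router, decides when the "Thinking" or "Pro" brain is engaged.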
The Reality of Hallucinations and Sycophancy
Mollick does not shy away from the limitations, particularly the persistent risk of hallucinations and the dangerous tendency of AI to be a "yes-man." He warns that while models are improving, they still "can hallucinate about their own capabilities and actions." More insidiously, he points out the issue of sycophancy, where the AI agrees with the user to be polite. "Otherwise, you might be talking to a very sophisticated yes-man," he cautions. The solution, he argues, is to explicitly instruct the AI to act as a critic.
This framing is essential. It moves the conversation from "Is the AI smart?" to "How do I manage the AI's biases?" Mollick also touches on the rapidly evolving capability of these systems to generate video and images, noting that the ability to create a photorealistic "otter using Wi-Fi on an airplane" in any style is no longer a novelty but a reality. "I have been warning about this for years, but, as you can see, you really can't trust anything you see online anymore," he writes. This serves as a stark reminder that the visual evidence we rely on is becoming increasingly unreliable.
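The critic instruction Mollick recommends can be made into a reusable habit rather than a one-off prompt. A minimal sketch below: the system-prompt wording and the helper name are my own assumptions, and only the message structure follows the common chat-API convention of system and user roles.

```python
# Sketch of Mollick's anti-sycophancy fix: prepend an explicit critic
# instruction so the model is told up front not to be a "yes-man".
# The wording and helper are illustrative; only the system/user message
# shape follows the common chat-API convention.

CRITIC_INSTRUCTION = (
    "Act as a tough critic. Point out weaknesses, missing evidence, and "
    "counterarguments in my idea. Do not agree with me just to be polite."
)

def as_critic(user_prompt: str) -> list[dict]:
    """Wrap a prompt with a critic system message (chat-API style)."""
    return [
        {"role": "system", "content": CRITIC_INSTRUCTION},
        {"role": "user", "content": user_prompt},
    ]

messages = as_critic("Here is my plan: sell umbrellas in the desert.")
print(messages[0]["role"])  # -> system
```

Wrapping every consequential prompt this way operationalizes the shift Mollick describes, from asking "Is the AI smart?" to managing the AI's biases.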
Building Intuition Over Mastery
Ultimately, Mollick's guide is less about mastering specific prompts and more about cultivating a new kind of intuition. He dismisses the old school of complex prompting techniques like "chain-of-thought," noting that modern models are smart enough to figure out what you want without elaborate instructions. "The goal isn't to become an AI expert. It's to build intuition about what these systems can and can't do," he concludes. This is a liberating perspective for busy professionals who feel overwhelmed by the technical jargon.
The author's call to action is simple: pick a system, start with a real problem, and then "try something ridiculous just to see what happens." By encouraging play alongside work, Mollick suggests that the best way to understand these tools is through direct, often messy, experimentation. As he puts it, "The future of AI isn't just about better models. It's about people figuring out what to do with them."
Bottom Line
Mollick's guide succeeds by replacing hype with a clear, functional taxonomy of AI capabilities, urging users to move beyond default settings and embrace "agent" modes for serious work. Its greatest strength is the pragmatic advice to manually select high-reasoning models, while its most urgent warning concerns the erosion of trust in visual media. The reader should watch not just for new model releases, but for how quickly the gap between free and paid utility widens in the coming months.