
Using AI right now: A quick guide

Ethan Mollick cuts through the noise of the AI arms race with a counterintuitive thesis: the specific model matters less than the ecosystem surrounding it. In a field obsessed with benchmark scores, Mollick argues that for the serious user, the battle has shifted from raw intelligence to system integration, a distinction that changes how we should actually spend our time and money.

The Ecosystem Over the Engine

Mollick's most striking observation is that the landscape has stabilized around three dominant players, rendering the endless comparison of isolated models somewhat obsolete. "Increasingly, it isn't about the best model, it is about the best overall system for most people," he writes. This reframing is crucial for busy professionals who might otherwise get paralyzed by the choice between a marginally faster inference engine and a slightly larger context window. The author suggests that the real value lies in the suite of tools—voice, vision, code execution, and research—that wrap around the core intelligence.


He identifies the "big three" as Claude from Anthropic, Google's Gemini, and OpenAI's ChatGPT, noting that while specialized tools exist, they often lack the cohesion of these general-purpose platforms. "You can't go wrong with any of them," Mollick asserts, a statement that might sound lazy to tech enthusiasts but is actually a pragmatic admission of market maturity. The argument holds weight because it acknowledges that for 90% of use cases, the marginal gain of a niche tool is outweighed by the friction of switching contexts. However, critics might note that this consolidation risks creating a walled garden effect, where users become dependent on the specific feature sets of just three corporations, potentially stifling the diversity of innovation seen in the open-source community.

The difference between casual users and power users isn't prompting skill; it's knowing these features exist and using them on real work.

Navigating the Tiers of Intelligence

Once a platform is chosen, Mollick guides the reader through the often-confusing hierarchy of models within each system. He uses a compelling automotive analogy to explain the trade-offs: "Think of it like choosing between a sports car and a pickup truck; both are vehicles, but you'd use them for very different tasks." This is a vital distinction that many users miss, defaulting to the "fast" model for everything simply because it is the default setting.

The author details the three tiers: the fast models for casual chat, the powerful models for serious work like analysis and coding, and the ultra-powerful models that may take twenty minutes to think through a problem. "For anything high stakes (analysis, writing, research, coding) usually switch to the powerful model," he advises. This is where the historical context of Large Language Models becomes relevant; early iterations of these systems struggled significantly with complex reasoning, but the current generation, as Mollick notes, requires a deliberate shift in user behavior to unlock their potential. He warns that free versions often gatekeep these powerful models, a friction point that forces a choice between convenience and capability.
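Mollick's rule of thumb for the three tiers can be captured as a simple routing decision. The sketch below is purely illustrative; the tier names come from the article, while the function name and the twenty-minute threshold are assumptions made for the example:

```python
def pick_model_tier(high_stakes: bool, minutes_you_can_wait: int = 1) -> str:
    """Illustrative router following the article's advice:
    casual chat -> fast; serious work -> powerful;
    hard problems where you can wait -> ultra-powerful."""
    # Casual chat: the default fast model is fine.
    if not high_stakes:
        return "fast"
    # Hard problems with time to spare: the ultra-powerful tier
    # may take ~20 minutes to think through an answer.
    if minutes_you_can_wait >= 20:
        return "ultra-powerful"
    # Anything high stakes (analysis, writing, research, coding):
    # deliberately switch away from the default.
    return "powerful"

print(pick_model_tier(high_stakes=False))              # → fast
print(pick_model_tier(high_stakes=True))               # → powerful
print(pick_model_tier(True, minutes_you_can_wait=30))  # → ultra-powerful
```

The point of the sketch is the deliberate branch: the fast tier is only ever chosen when the stakes are explicitly low, mirroring Mollick's warning against defaulting to it for everything.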

Privacy concerns also factor into this tiered approach. Mollick points out that "Claude does not train future AI models on your data, but Gemini and ChatGPT might, if you are not using a corporate or educational version." This is a critical differentiator for professionals handling sensitive information. While he notes that settings can be adjusted to prevent data usage in other systems, the default posture of these platforms remains a point of tension between utility and privacy.

The Rise of Deep Research and Multimodality

Perhaps the most transformative section of the piece focuses on "Deep Research," a feature that moves AI from a chatbot to an analytical engine. Mollick writes that these tools "can produce very high-quality reports that often impress information professionals (lawyers, accountants, consultants, market researchers)." This is not merely a feature update; it represents a shift in how information retrieval works, echoing the evolution seen in the transition from simple keyword search to semantic understanding in the Information Retrieval deep dive.

He suggests practical applications ranging from gift guides to second opinions in medicine, though he wisely adds the caveat that users should "trust your doctor/lawyer above AI." The accuracy of these reports, he notes, is significantly higher than standard queries, with citations that tend to be correct. This reliability is a game-changer for busy readers who need to synthesize large amounts of data quickly. Yet, the author is careful to temper expectations, reminding us that "Deep Research reports are not error-free but are far more accurate than just asking the AI for something."

Mollick also champions the underutilized power of voice mode, particularly for its multimodal capabilities. "The AI sees what you see and responds in real-time," he explains, describing scenarios like identifying plants on a hike or solving a math problem while cooking. This moves the interaction from a text-based exchange to a shared visual context, a capability that feels genuinely futuristic. However, he notes a significant limitation: voice mode often defaults to the less powerful models optimized for conversation, meaning users might miss the depth of the "ultra-powerful" models when speaking to the AI.

The End of the Prompting Obsession

One of the most liberating arguments Mollick makes is that the era of complex, rigid prompting is ending. "It used to be that the details of your prompts mattered a lot, but the most recent AI models I suggested can often figure out what you want without the need for complex prompts," he writes. This challenges a massive industry of prompt engineering courses and tips that have proliferated in the last year.

He cites research from the Generative AI Lab at Wharton to support this, noting that "being polite to AI doesn't seem to make a big difference in output quality overall." While he adds a footnote that politeness can have unpredictable effects on hard math questions, the core message is clear: stop obsessing over the exact phrasing and start focusing on context. "Give the AI context to work with," he urges, suggesting that uploading documents or providing background information is far more effective than crafting a perfect prompt. This shifts the burden of work from the user's linguistic dexterity to their ability to curate and provide relevant data.
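The "context over phrasing" advice translates directly to programmatic use as well. A minimal sketch, assuming a generic chat-completion-style request shape (the field names mirror common LLM APIs, and the model id is a placeholder, not a real one); no network call is made:

```python
def build_request(document_text: str, question: str) -> dict:
    """Context-first request: the effort goes into attaching curated
    background material, not into engineering an elaborate prompt."""
    return {
        "model": "powerful-model",  # placeholder model id
        "messages": [
            {
                "role": "user",
                # The bulk of the payload is context, followed by a
                # plain-language question -- no special incantations.
                "content": (
                    f"Background document:\n{document_text}\n\n"
                    f"Question: {question}"
                ),
            },
        ],
    }

req = build_request(
    "Q3 revenue was flat; churn rose two points.",
    "Summarize the main risk for the board.",
)
```

The design choice mirrors Mollick's shift of burden: the document supplies the intelligence about the problem, and the question can stay simple.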

The author also introduces the concept of "branching," where users can edit a prompt after receiving an answer to explore alternative paths. This feature, available across the major platforms, allows for a more dynamic and iterative workflow. "You can move between branches by using the arrows that appear after you have edited an answer," Mollick explains. This turns the AI interaction into a true dialogue rather than a single-shot query, allowing users to refine their thinking alongside the machine.
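The platforms do not expose branching as a data structure, but conceptually it turns a chat from a linear list into a tree, with the UI arrows stepping between sibling branches. An illustrative sketch, with all names hypothetical:

```python
class Turn:
    """One prompt/answer pair; editing a prompt forks a sibling branch."""
    def __init__(self, prompt, answer, parent=None):
        self.prompt, self.answer = prompt, answer
        self.parent, self.children = parent, []
        if parent is not None:
            parent.children.append(self)

root = Turn("Draft a cover letter", "Draft v1 ...")
# Editing the prompt after an answer creates a new branch
# instead of overwriting the old one:
branch_a = Turn("Make it more formal", "Formal draft ...", parent=root)
branch_b = Turn("Make it shorter", "Short draft ...", parent=root)
# The UI "arrows" just move between siblings of the same parent:
siblings = root.children  # [branch_a, branch_b]
```

Seen this way, branching is what makes the interaction iterative: earlier answers are never lost, only set aside on another path.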

As Mollick cautions: "The risk of hallucination is why I always recommend using AI for topics you understand until you have a sense for their capabilities and issues."

Bottom Line

Mollick's guide is a necessary corrective to the hype cycle, grounding the discussion of AI in practical, actionable advice for the professional user. His strongest argument is that the technology has matured enough that the interface and ecosystem matter more than the underlying model's benchmark scores. The piece's biggest vulnerability is its reliance on paid subscriptions to access the full suite of capabilities, which may limit its applicability for those without the budget. For the busy reader, the takeaway is clear: stop treating AI like a search engine, pay for the tool, and start using its advanced features to do the heavy lifting of real work.

Deep Dives

Explore these related deep dives:

  • Large language model

    The article discusses choosing between AI systems like Claude, Gemini, and ChatGPT without explaining the underlying technology. Understanding how LLMs work—transformer architecture, training processes, and why they hallucinate—would give readers crucial context for evaluating these tools.

  • Information retrieval

    Deep Research is presented as a key feature that impresses professionals, but readers may not understand how AI retrieval differs from traditional search. This topic explains the foundations of how systems find and synthesize information from large document collections.

Sources

Using AI right now: A quick guide

by Ethan Mollick · One Useful Thing

Every few months I put together a guide on which AI system to use. Since I last wrote my guide, however, there has been a subtle but important shift in how the major AI products work. Increasingly, it isn't about the best model, it is about the best overall system for most people. The good news is that picking an AI is easier than ever and you have three excellent choices. The challenge is that these systems are getting really complex to understand. I am going to try and help a bit with both.

First, the easy stuff.

Which AI to Use.

For most people who want to use AI seriously, you should pick one of three systems: Claude from Anthropic, Google’s Gemini, and OpenAI’s ChatGPT. With all of the options, you get access to both advanced and fast models, a voice mode, the ability to see images and documents, the ability to execute code, good mobile apps, the ability to create images and video (Claude lacks here, however), and the ability to do Deep Research. Some of these features are free, but you are generally going to need to pay $20/month to get access to the full set of features you need. I will try to give you some reasons to pick one model or another as we go along, but you can’t go wrong with any of them.

What about everyone else? I am not going to cover specialized AI tools (some people love Perplexity for search, Manus is a great agent, etc.) but there are a few other options for general purpose AI systems: Grok by Elon Musk’s xAI is good if you are a big X user, though the company has not been very transparent about how its AI operates. Microsoft’s Copilot offers many of the features of ChatGPT and is accessible to users through Windows, but it can be hard to control what models you are using and when. DeepSeek r1, a Chinese model, is very capable and free to use, but is missing a few features from the other companies and it is not clear that they will keep up in the long term. So, for most people, just stick with Gemini, Claude, or ChatGPT.

Great! This was the shortest recommendation post yet! Except… picking a system is just the beginning. The real challenge is understanding how to use these increasingly complex tools effectively.

Now ...