Which AI to use now: An updated opinionated guide

In a field where obsolescence happens weekly, Ethan Mollick cuts through the noise with a brutally practical truth: the "perfect" AI doesn't exist yet, and waiting for it is a strategic error. This isn't a speculative look at the future of artificial general intelligence; it is a tactical manual for the present moment, updated so recently that the author admits the landscape shifted while he was writing. For busy professionals, the value here isn't just a list of tools, but a framework for understanding why the current market is fragmented and how to navigate the confusing trade-offs between speed, reasoning, and privacy.

The Cost of Frontier Access

Mollick immediately dispels the myth that the most capable tools are free. He writes, "Right now, to consistently access a frontier model with a good app, you are going to need to pay around $20/month (at least in the US), with a couple exceptions." This is a crucial distinction for decision-makers who might be relying on free tiers for critical workflows. The author explains that companies intentionally push users toward smaller, cheaper-to-run models unless a subscription is purchased, creating a tiered system where capability is gated behind a paywall. He notes that while smaller models are faster, they are "far more capable than older versions" only when compared to their own predecessors, not the current frontier models. The argument holds up under scrutiny: the performance gap between a free, smaller model and a paid, frontier model is often the difference between a helpful assistant and a frustrating one.

The secret isn't waiting for the perfect AI - it's diving in and discovering what these tools can actually accomplish.

The Rise of Reasoning and Live Interaction

The piece identifies two specific technological leaps that define the current era: "reasoning" models and "Live Mode." Mollick describes reasoning models not as chatty assistants, but as "scholars" that "think" before answering, often taking minutes to process complex queries in math or code. He points out that the most capable of these, the o1 family from OpenAI, are confusingly named but essential for high-stakes problem solving. "The longer the model thinks, generally, the better the outcome," he writes, highlighting a fundamental shift in how we interact with machines. This reframing is vital; it suggests that for deep work, latency is a feature, not a bug.

Simultaneously, the author champions the "Live Mode" capabilities, particularly in ChatGPT, where the AI can see and hear in real-time. He describes this as a "seamless combination" of multimodal speech, vision, and internet connectivity that creates an interaction "like chatting with a knowledgeable (if not always 100% accurate) friend." While this is a powerful selling point, a counterargument worth considering is that this level of integration raises significant privacy concerns, even as providers offer opt-out modes. Mollick acknowledges that for truly sensitive data, enterprise versions are still necessary, but the casual user might underestimate the data footprint of a live, video-enabled session.

Navigating the Ecosystem: Capabilities vs. Vibes

Mollick's guide is refreshingly opinionated about the "vibes" of different models, arguing that personality matters as much as raw power. He notes that Claude, despite having fewer features, "often seems to be clever and insightful in ways that the other models are not," leading many to adopt it as a primary tool. Conversely, he critiques the confusion surrounding Microsoft's Copilot, noting the "lack of transparency over which models it is using when." This transparency issue is a significant blind spot for corporate users who need to know exactly which algorithm is processing their data.

The author also highlights the surprising emergence of DeepSeek, a Chinese model that is "remarkably capable (and free)." He writes, "The fact that it is a Chinese model is interesting in many ways, including the fact that this is the first non-US model to reach near the top of the AI ranking leaderboards." This observation underscores a shifting geopolitical dynamic in AI development, where the US monopoly on frontier models is fracturing. However, the reliance on a Chinese model for enterprise use introduces its own set of regulatory and security complexities that the guide mentions only in passing.

Every major provider (except DeepSeek) now offers some form of privacy-focused mode: ChatGPT lets you opt out of training, and both Claude and Gemini say they will not train on your data.

Bottom Line

Ethan Mollick's strongest contribution is his insistence that users stop waiting for a singular, perfect solution and instead curate a toolkit based on specific needs. The piece's greatest vulnerability is the sheer velocity of change; a guide written today may be partially outdated by next month, a reality the author admits with refreshing candor. The reader should watch for the next wave of "reasoning" models, as this is the area where the gap between human and machine capability is narrowing most rapidly.

Sources

Which AI to use now: An updated opinionated guide

by Ethan Mollick · One Useful Thing

Please note that I updated this guide on 2/15, less than a month after writing it - a lot has changed in a short time.

While my last post explored the race for Artificial General Intelligence – a topic recently thrust into headlines by Apollo Program-scale funding commitments to building new AIs – today I'm tackling the one question I get asked most: what AI should you actually use? Not five years from now. Not in some hypothetical future. Today.

Every six months or so, I have written an opinionated guide for individual users of AI, not specializing in any one type of use, but as a general overview. Writing this is getting more challenging. AI models are gaining capabilities at an increasingly rapid rate, new companies are releasing new models, and nothing is well documented or well understood. In fact, in the few days I have been working on this draft, I had to add an entirely new model and update the chart below multiple times due to new releases. As a result, I may get something wrong, or you may disagree with my answers, but that is why I consider it an opinionated guide (though, as a reminder, I take no money from AI labs, so it is my opinion!).

A Tour of Capabilities

To pick the right AI model, you need to know what each one can do. I decided to focus here on the major AI companies that offer easy-to-use apps that you can run on your phone, and which allow you to access their most up-to-date AI models. Right now, to consistently access a frontier model with a good app, you are going to need to pay around $20/month (at least in the US), with a couple of exceptions. Yes, there are free tiers, but you'll generally want paid access to get the most capable versions of these models.

We are going to go through things in detail, but, for most people, there are three good choices right now: Claude from Anthropic, Google’s Gemini, and OpenAI’s ChatGPT. There are also a trio of models that might make sense for specialized users: Grok by Elon Musk’s X.ai is an excellent model that is most useful if you are a big X user; Microsoft’s Copilot offers many of the features of ChatGPT and is accessible to users through Windows; and DeepSeek r1, a Chinese model that is remarkably ...