← Back to Library

Everyone You Know Is About to Try Claude (I Showed 3 People for 5 Minutes — All 3 Switched)

Nate B Jones

The Pitch

If you haven't heard of Claude yet, you will soon. Anthropic refused a Pentagon contract. The White House responded. And suddenly, millions of Americans who couldn't name Anthropic two weeks ago are downloading this new app. But here's the problem: almost everyone will treat Claude as a drop-in replacement for ChatGPT. Same problems. Same expectations. Same workflow. That's not how AI works.

This piece isn't a feature tour. It's a practical guide to what Claude does differently, grounded in what we can all verify rather than marketing claims from either company. The author tested it with three people who had never used Claude: within five minutes of seeing these differences, all three switched permanently.

How They Differ

The differences aren't cosmetic. ChatGPT and Claude were built with very different training approaches and philosophies — and those approaches produce measurably different behavior.

ChatGPT's default behavior tends toward being more agreeable, more expansive, and, at least in some cases, warmer. If you ask it a question, you often get a thorough answer plus context you didn't request, plus an offer to elaborate. OpenAI has worked hard to rein in the worst excesses of this pattern, but the general orientation to be helpful, to be thorough, to keep the conversation going remains the default.

Claude was built using an approach called constitutional AI, where the model is trained against explicit principles (be helpful, be honest, avoid harm) rather than purely optimizing for what feels like a good response. The practical effect is that Claude is more likely to flag a problem than to smooth it over. It's more likely to ask what you're really trying to achieve than to rush to produce something plausible. It tends toward conciseness rather than padding.

"Claude is somewhat more likely to flag a concern, to question your framing, to tell you something you didn't ask to hear."

This doesn't mean one tool is better across the board. It means they have very different strengths — and using Claude well requires understanding how those strengths differ and how to activate them.

Principle One: Flagging Plan Holes

The first practical difference shows up when you're working with a plan that has a hole in it. ChatGPT has a documented tendency toward sycophancy: telling you what you want to hear rather than what you need to hear. OpenAI's own researchers have acknowledged this, most visibly when a GPT-4o update in April 2025 made the problem so extreme that they rolled it back within days.

Since then, OpenAI has put serious work into fixing it. But that underlying tendency hasn't fully disappeared because it's rooted in the training approach — models trained heavily on user feedback inherently reward responses that feel satisfying in the moment.

Claude's constitutional AI trains against explicit principles like honesty. The practical difference: Claude is somewhat more likely to flag a concern, to question your framing, to tell you something you didn't ask to hear. It's not dramatically more likely, but it happens enough that you'll notice it over real use.

This matters because the most expensive AI mistakes these days aren't factual errors. They're plans that should never have been executed: hiring plans whose timelines assume engineers ramp in three months when the real number is six, pricing strategies that ignore competitive responses. Claude is more likely to flag these kinds of issues than ChatGPT right now. Not always. Not infallibly. But a model that is slightly more likely to push back compounds that advantage over time.

Your plans get stress-tested more often. You make fewer expensive mistakes, but you have to be okay with having your ego pushed back on.
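To make this concrete, here is a minimal sketch of a plan stress-test using the Anthropic Python SDK. The model name, the plan text, and the prompt wording are illustrative assumptions, not a prescribed recipe:

    import anthropic  # assumes the SDK is installed and ANTHROPIC_API_KEY is set

    client = anthropic.Anthropic()

    # An invented plan with the kind of hole described above: an optimistic
    # ramp-time assumption baked into a hiring timeline.
    plan = (
        "Hiring plan: add six backend engineers by Q3. "
        "Timeline assumes each engineer is fully productive after three months."
    )

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model name
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": (
                "Here is a plan I intend to execute:\n\n" + plan + "\n\n"
                "Before suggesting any improvements, list the assumptions "
                "most likely to be wrong and what breaks if they are."
            ),
        }],
    )
    print(response.content[0].text)

The shape of the prompt matters more than the API call: it invites pushback explicitly rather than hoping the model volunteers it.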

Principle Two: Describe Your Situation

When you're using Claude, describe your situation rather than your desired output. In ChatGPT, people will often write a prompt like a command: "Write a cover letter. Give me five ideas." Claude will respond to this just fine, but it responds to situations noticeably better — and the difference in output quality is worth calling out.

Claude was trained via constitutional AI to reason about whether a request is well-framed. ChatGPT was trained via reinforcement learning with human feedback to satisfy the request as stated. A model trained to evaluate framing will do more with a well-framed input.

"If you give it a thin situation, you're going to get thin thinking. If you give it a really rich context layer, you're going to get strategic reasoning that changes how you approach the problem."

Claude tends to ask more clarifying questions and engages more deeply with context than ChatGPT. ChatGPT tends to use additional context to produce a more detailed version of exactly what you asked for. Claude tends to use it to think about how you framed the task.

Before you tell Claude what to make, spend a couple of sentences on what you're dealing with. Claude will appreciate it, and so will you.
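A hypothetical before-and-after, with every detail invented for illustration, shows how little extra writing this takes:

    # Command-style prompt: Claude will comply, but has nothing to reason about.
    command_prompt = "Write a cover letter."

    # Situation-style prompt: two sentences of context before the ask.
    situation_prompt = (
        "I'm a data analyst with four years in healthcare applying for a "
        "product analytics role at a fintech startup. The posting emphasizes "
        "SQL and stakeholder communication, and my obvious gap is that I've "
        "never worked in finance. Write a cover letter that addresses that "
        "gap directly instead of hiding it."
    )

The second prompt gives a framing-sensitive model something to evaluate: an audience, a constraint, and a real problem to solve.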

Principle Three: Give Claude Your Work

This is counterintuitive for people who think AI is for generating content. Claude is better at editing and refining existing work. You can get Claude to generate work from nothing, but its output tends to be more concise, and you have to be very specific in your ask when generating from scratch.

In a blind test conducted in February, from Access Intelligence's independent comparison of different models, with over a hundred voters per round across eight prompts, Claude won four of eight rounds while ChatGPT won one. Users consistently rated Claude's outputs as more natural and publishable without heavy editing: 85% on structural coherence versus ChatGPT's 78%.

"Claude tends to write more naturally and ChatGPT sounds more generic."

Type.ai's analysis documented that ChatGPT tends to fall into a very distinctive AI voice, while Claude's outputs read a little more like human writing. If you're editing structurally (not just grammar fixes, but an editor telling you the third paragraph undermines the first, or that you buried your strongest point), Claude does strong work at that level.

ChatGPT tends to polish at the individual sentence level. Run the same document through both with the instruction "what is the weakest argument here and how could you fix it?" You tend to find that Claude has an eye for prose structure where ChatGPT is a little weaker.

One caveat: more context is helpful for any model. The difference is what each model does with it.
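As a sketch of that editing workflow over the API (the file path and model name are placeholders), you hand Claude the existing draft rather than a blank page:

    import anthropic

    client = anthropic.Anthropic()

    # Load existing work; editing, not generation, is where Claude shines.
    with open("draft.md") as f:
        draft = f.read()

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model name
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": draft
            + "\n\n---\nWhat is the weakest argument here and how could you fix it?",
        }],
    )
    print(response.content[0].text)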

Principle Four: Show Its Reasoning

It is okay to ask Claude to show its reasoning. Claude has a capability called extended thinking — the model allocates additional processing to work through complicated problems step by step before answering and then shows you the chain of reasoning it followed.

On genuinely hard problems like contract analysis or debugging intermittent failures, extended thinking produces meaningfully better output. Anthropic reports up to a 54% improvement on hard reasoning tasks.
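Over the API, extended thinking is an explicit parameter. The sketch below follows Anthropic's documented thinking option; the token budget and the model name are illustrative:

    import anthropic

    client = anthropic.Anthropic()

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative; needs a thinking-capable model
        max_tokens=16000,                  # must exceed the thinking budget
        thinking={"type": "enabled", "budget_tokens": 8000},
        messages=[{"role": "user", "content": "Your genuinely hard problem here."}],
    )

    # The response interleaves thinking blocks (the visible reasoning)
    # with text blocks (the final answer).
    for block in response.content:
        if block.type == "thinking":
            print("[reasoning]", block.thinking)
        elif block.type == "text":
            print("[answer]", block.text)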

"You can see the chain of thought in Claude's writing and change or arrest it over time."

Part of how this works is that Claude shows its reasoning as it works through the problem, spending extra tokens and then reading that reasoning back as it continues solving. This matters because you can see the chain of thought and change or arrest it as it unfolds.

In Cowork, you can send a message to Claude and change how the agent will respond before it finishes the task. In regular Claude chat, you can't do that yet, but you can watch the response form. If you don't like where the chain of thought is going (you can open it up and read it as it's produced), you can just hit stop, send a new message, and clarify.
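Over the API, the equivalent of watching the chat window is streaming the thinking deltas, and breaking out of the loop is the equivalent of hitting stop. The event shapes below follow Anthropic's streaming documentation; the model name is illustrative:

    import anthropic

    client = anthropic.Anthropic()

    with client.messages.stream(
        model="claude-sonnet-4-20250514",  # illustrative model name
        max_tokens=16000,
        thinking={"type": "enabled", "budget_tokens": 8000},
        messages=[{"role": "user", "content": "Your genuinely hard problem here."}],
    ) as stream:
        for event in stream:
            if event.type == "content_block_delta":
                if event.delta.type == "thinking_delta":
                    print(event.delta.thinking, end="", flush=True)  # reasoning, live
                elif event.delta.type == "text_delta":
                    print(event.delta.text, end="", flush=True)      # the answer
            # Breaking out of this loop early is the API-side stop button.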

This really changes how you approach hard problems. You have to ask yourself: is this a problem I'm going to need to intervene on? And if so, have I set myself up to watch the running text coming through the chat window as Claude thinks, so I can catch the moment it goes off the rails and say, "No, no, not that"?

Anyone who uses Claude a lot will tell you they're often doing this unconsciously. They're so used to the way Claude produces text as it solves problems that they just keep an eye on it. ChatGPT users work differently: they're used to hitting go and waiting for the response.

You get a little more steering along the way with Claude.

Principle Five: Build a Workspace

You're building a workspace, not a chat box. Both Claude and ChatGPT have projects for work: both let you upload documents, set persistent instructions, and organize conversations around domains. So the concept is familiar.

The way most people use projects, though, is wrong. They treat them like filing cabinets: stick in docs, send a vague instruction like "help me with marketing," and then get conversations barely different from not using the project at all.

Here's how to use projects correctly: your project's custom instructions should be operating rules for every conversation in that workspace. Not "help me with marketing," but:

"I'm a product marketing manager at a B2B SaaS company in cybersecurity. My team sells to CISOs and IT directors at mid-market companies which have 500 to 2,000 employees. Our biggest differentiator is ease of deployment and my VP prefers databacked arguments and dislikes jargon. All content should align with the positioning doc which I've uploaded and the brand voice guide also uploaded."

Now every single conversation in the project inherits that context. You don't re-explain your role, your audience, and so on. You just say "I need a one-pager for the Gartner meeting," and Claude already knows what that means.

Why does Claude specifically reward this? Claude tends to follow complex system-level instructions very consistently across conversations without much drift. So when you set detailed operating rules like that in a Claude project, they tend to stick.
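Projects are a UI feature, but the same operating-rules pattern maps to the API's system prompt, which applies to every request that carries it. Everything below, from the model name to the rules text, is illustrative:

    import anthropic

    client = anthropic.Anthropic()

    # Operating rules, written once and reused for every conversation.
    OPERATING_RULES = (
        "You assist a product marketing manager at a B2B SaaS cybersecurity "
        "company. Audience: CISOs and IT directors at mid-market companies "
        "(500 to 2,000 employees). Biggest differentiator: ease of deployment. "
        "Style: data-backed arguments, no jargon."
    )

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model name
        max_tokens=1024,
        system=OPERATING_RULES,  # the API analogue of project custom instructions
        messages=[{"role": "user",
                   "content": "I need a one-pager for the Gartner meeting."}],
    )
    print(response.content[0].text)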

Pixel Peak's 500-task comparison measured instruction compliance directly. Claude hit 94% exact compliance versus ChatGPT's 87%.

Critics might note that the user satisfaction metrics still favor ChatGPT for general use cases — and for tasks where users want quick generated output without heavy context, ChatGPT remains the better choice. The constitutional AI approach sometimes produces outputs that feel less immediately satisfying even when they're more accurate.

Bottom Line

The strongest part of this argument is the practical, testable framework: five principles grounded in actual comparisons rather than marketing claims. Jones has a point about the training philosophy differences between constitutional AI and reinforcement learning from human feedback — those produce measurably different behaviors worth understanding.

The biggest vulnerability: ChatGPT's user satisfaction scores remain higher for general use cases where users want quick generated output without heavy context. The piece slightly overclaims what Claude can do on length and voice consistency without fully acknowledging that trade-off.

What to watch next: as more people try Claude, the five principles here will become standard guidance — but the biggest value may be in understanding how training philosophy actually produces different outputs, not just which tool is "better."

Transcript

Everyone you know is about to ask you about Claude, and here's how you can actually help them. You already know the backstory: Anthropic told the Pentagon no, the White House retaliated, and the public responded by making Claude the number one app in America.

Millions of people who had never heard of Anthropic a couple of weeks ago have just downloaded this new app. And here's the problem: almost all of them are going to treat Claude as a drop-in replacement for ChatGPT. Same problems, same expectations, same workflow.

And that's not how AI works. AI models are not interchangeable brands like Coke and Pepsi. They're built differently, trained differently, and optimized for different things. Switching from ChatGPT to Claude with the exact same habits is like switching from Excel to Photoshop and wondering why the spreadsheet features are missing.

Look, they're both software. Yes, they're LLMs, but at this point they've diverged so much that you really can't call them the same tool. People who open up Claude, type their usual ChatGPT prompts, and get back kind of unremarkable answers are not going to understand what they're missing, and they're probably going to walk away in frustration when they realize things they've taken for granted in ChatGPT, like image generation, just aren't there in Claude. This is the guide for the conversation you're going to have when your friends say, "Hey, what is this Claude thing?" It's not a feature tour.

It's a practical explanation of what Claude does differently than ChatGPT, how you can use those differences, and what changes about your work when you do. And it's all grounded in what we can all verify, not marketing claims from either company.

So, what's actually different? The differences are not cosmetic. Claude and ChatGPT were built with very different training approaches and different philosophies. And those approaches produce measurably different behavior.

ChatGPT's default behavior tends toward being more agreeable, more expansive, and, at least in some personalities, warmer. If you ask it a question, you often get a thorough answer plus context you didn't request, plus an offer to elaborate. Now, OpenAI has worked hard to rein in the worst excesses of this pattern, but the general orientation to be helpful, to be thorough, to keep the conversation going remains the default. And for hundreds ...