
Learn 90% of Claude code in 31 minutes

The Beginner's Treadmill

Chase's thirty-one-minute walkthrough of Claude Code lands squarely in the genre of YouTube developer tutorials that promise mastery through a single sitting. The title claims viewers will "learn 90% of Claude Code," and to the video's credit, it does cover a surprisingly broad surface area: installation, permissions, prompting strategies, skills, CLI integrations, context window management, and deployment. Whether any of that constitutes ninety percent of what matters is a separate question entirely.

The most valuable moment in the entire piece has nothing to do with Claude Code's feature set. It arrives when Chase confronts the uncomfortable reality that AI-assisted development creates a new kind of ignorance:

One of the things with AI that's great about it and is also kind of its downfall is the fact that it lets us play in spaces and domains we have no business being in.

This is an underappreciated tension in the vibe coding movement. The tools are so capable that they dissolve the feedback loop that traditionally forced developers to learn. A junior engineer who cannot get their code to compile is forced to understand why. A vibe coder whose app works on the first try may never interrogate the foundations beneath it.


The Permissions Conversation Deserves More Scrutiny

Chase recommends running Claude Code with the --dangerously-skip-permissions flag, framing it as a speed optimization that power users gravitate toward. He acknowledges the risks in passing but ultimately waves them away:

I will say, having used Claude Code for hundreds and hundreds of hours, I've never run into that issue. And most people, and this is from Anthropic's data themselves, if they're power users, they have bypass permissions on.

The counterpoint here is significant. Anthropic's own documentation treats this flag as genuinely dangerous, not as a convenience toggle. The fact that experienced users adopt it does not make it wise advice for the target audience of this video, which is explicitly people who "don't come from any sort of technical background." Telling a non-technical user to give an AI agent unrestricted access to their filesystem is the equivalent of handing someone car keys on their first day and suggesting they disable the seatbelt because race car drivers find it restrictive. The analogy is imperfect, but the risk asymmetry is real: a power user who accidentally loses files understands what happened and can recover. A beginner may not even know what was lost.
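There is a middle ground between approving every action by hand and disabling the guardrails entirely: a scoped allow/deny list in the project's Claude Code settings file. The sketch below assumes the documented `.claude/settings.json` permissions format; the specific rule strings are illustrative, so check the permissions documentation for the exact syntax your version supports.

```shell
# Instead of --dangerously-skip-permissions, pre-approve only the
# commands you actually trust, and explicitly deny destructive ones.
# (Rule strings are illustrative examples, not an exhaustive policy.)
mkdir -p .claude
cat > .claude/settings.json <<'EOF'
{
  "permissions": {
    "allow": [
      "Read",
      "Bash(npm run test:*)",
      "Bash(git diff:*)"
    ],
    "deny": [
      "Bash(rm -rf:*)"
    ]
  }
}
EOF
```

This keeps the fast path fast for routine actions (reading files, running tests) while still surfacing a prompt for anything outside the list, which is a far better default for the non-technical audience this video targets.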

Plan Mode and the Art of Not Knowing What You Want

The tutorial's treatment of plan mode is genuinely useful. Chase correctly identifies that the default behavior of Claude Code, charging ahead and filling gaps with assumptions, produces mediocre results. Plan mode forces a dialogue, and dialogue surfaces requirements that the user did not know they had.

More interesting is the advice to prompt Claude Code with open-ended, expert-framing questions:

What would an expert in Kanban boards be thinking about or asking about here?

This technique, sometimes called "role-based prompting" or "expertise elicitation," is well-documented in the prompt engineering literature. It works because large language models have absorbed domain-specific reasoning patterns and can surface them when explicitly asked. The trick is knowing that you should ask. Chase frames this as compensating for the user's lack of domain expertise, which is exactly right. The gap between a novice prompt and an expert prompt is not vocabulary; it is knowing which questions to ask in the first place.
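The expertise-eliciting pattern above can be kept as a reusable prompt rather than retyped each session. The wording below and the `claude --permission-mode plan` invocation are illustrative, not the tutorial's exact commands; verify the flags your installed version supports.

```shell
# An expertise-eliciting prompt, stored for reuse. The project
# ("a personal budgeting app") is a made-up example.
PROMPT='I want to build a personal budgeting app.
Before proposing a plan, list the questions an expert in personal
finance tools would ask about requirements, data handling, and edge
cases. Then wait for my answers before writing any code.'

printf '%s\n' "$PROMPT"
# In a real session, this would start a plan-mode dialogue:
# claude --permission-mode plan "$PROMPT"
```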

Skills: Just Prompts All the Way Down

The demystification of skills is one of the tutorial's stronger segments. Chase strips away the mystique:

There's nothing secret about these skills. The front-end design skill is an official Anthropic skill that they created and you can take a look at on Anthropic's GitHub. This is all it is. It's just a text prompt.

This matters because the ecosystem around AI coding tools has developed a tendency to repackage simple concepts in complex wrappers. A skill is a system prompt with a trigger mechanism. Understanding that removes the intimidation factor and empowers users to write their own, which is ultimately more valuable than installing someone else's.

That said, the tutorial undersells the difficulty of writing good skills. Saying "it's just a text prompt" is like saying a novel is just words. The gap between a mediocre skill and an excellent one is substantial, and it requires exactly the kind of domain expertise that the target audience lacks. The Anthropic-provided skills are good starting points, but users who treat skills as a solved problem will hit diminishing returns quickly.
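To make "it's just a text prompt" concrete: a skill is a directory containing a `SKILL.md` file with YAML frontmatter (a name and a description that tells the model when to trigger it) followed by plain-prose instructions. The skill name and content below are invented for illustration; Anthropic's published skills on GitHub are the reference examples.

```shell
# A minimal custom skill: one directory, one markdown file.
# Everything below the frontmatter is ordinary prose instructions.
mkdir -p .claude/skills/code-reviewer
cat > .claude/skills/code-reviewer/SKILL.md <<'EOF'
---
name: code-reviewer
description: Review diffs for correctness, naming, and missing tests.
---

When reviewing code, check in this order:
1. Does the change do what the commit message claims?
2. Are edge cases (empty input, errors) handled?
3. Is anything untested that should be?
EOF
```

The hard part, as noted above, is not the file format; it is knowing which three questions belong in that checklist for your domain.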

Context Window Management: The Real Skill

The discussion of context window degradation is perhaps the most technically substantive part of the tutorial. Chase cites Anthropic's own benchmarks showing Opus 4.6 dropping from near-perfect performance to roughly 78% effectiveness at full context load, and recommends staying below 200,000 tokens:

That first 200,000 tokens, that's like the green zone. That's the gold zone. We always want to stay in the first 200,000 if we can help it.

This is sound advice, though the framing could be more precise. Context degradation is not linear, and the nature of the degradation matters. Models do not simply get "dumber" as context grows; they become worse at attending to specific details buried in the middle of long contexts. For coding tasks, this means the model may lose track of architectural decisions made earlier in the conversation while still performing well on local, self-contained tasks. The practical implication is that /clear is not just a performance optimization; it is a forcing function for clean task decomposition.
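Staying under the 200,000-token "green zone" is easier if you estimate before pasting. The sketch below uses the common rough heuristic of about four characters per token; real tokenizers vary by content, so treat the numbers as order-of-magnitude guidance, not exact counts.

```shell
# Rough token estimate for a file: bytes / 4 is a crude but useful
# approximation (real tokenizers produce different counts).
estimate_tokens() {
  echo $(( $(wc -c < "$1") / 4 ))
}

BUDGET=200000   # the "green zone" ceiling the tutorial recommends
for f in src/*; do
  if [ -f "$f" ]; then
    t=$(estimate_tokens "$f")
    if [ "$t" -gt "$BUDGET" ]; then
      echo "$f: ~$t tokens, summarize or split before pasting"
    fi
  fi
done
```

A check like this pairs naturally with the task-decomposition point: if a single file blows the budget, the task probably needs to be broken up anyway.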

The CLI Over MCP Claim

Chase makes the bold assertion that CLIs are replacing MCP (Model Context Protocol) servers as the preferred integration pattern:

Gone are the days where everyone and everything is becoming an MCP. For a year and a half, that's all you heard about. MCPs, MCPs, MCPs. Well, as cool as MCPs are, they're kind of going by the wayside and they're being replaced by CLIs.

This claim is debatable. MCPs and CLIs serve different architectural purposes. A CLI tool like Playwright or the GitHub CLI is a standalone program that Claude Code invokes through shell commands. An MCP server provides a structured protocol for tool discovery, invocation, and response handling. The two are not mutually exclusive, and in practice, many sophisticated setups use both. What is true is that for simple integrations, a CLI is lower overhead than spinning up an MCP server. But declaring MCPs dead is premature and likely reflects the YouTube tendency to frame everything as a paradigm shift.
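The architectural difference is easy to see side by side. A CLI integration is just a program the agent shells out to; an MCP integration is a server registered in a project-level `.mcp.json` so tools can be discovered over the protocol. The server name and package below are illustrative, not a recommendation.

```shell
# Style 1: CLI. The agent runs an installed tool via the shell.
# (Requires the GitHub CLI; shown as a comment so nothing executes.)
# gh pr list --state open

# Style 2: MCP. A server is declared in project config and exposes
# tools through the Model Context Protocol. Package name is an
# illustrative example.
cat > .mcp.json <<'EOF'
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"]
    }
  }
}
EOF
```

Nothing prevents a project from using both, which is exactly why "CLIs are replacing MCPs" overstates the case.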

What the Tutorial Leaves Out

Conspicuously absent from a video targeting beginners is any discussion of version control beyond "commit and push." There is no mention of branching, pull requests, code review, or the basic Git workflow that protects developers from their own mistakes. For a non-technical audience being told to give an AI full filesystem access, this omission is notable. The deployment section, while serviceable, also skips over environment variables, secrets management, and the basics of what happens when something goes wrong in production.
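The missing Git workflow amounts to only a few commands, and it is the cheapest insurance a beginner can buy before letting an agent modify files. A minimal branch-first sketch (repository and file names are made up):

```shell
# Work on a disposable branch so agent mistakes never touch main.
git init -q agent-sandbox && cd agent-sandbox
git checkout -q -b feature/agent-changes

echo "hello" > app.txt
git add app.txt
# Inline identity config keeps this runnable on a fresh machine.
git -c user.name="Demo" -c user.email="demo@example.com" \
  commit -q -m "Agent change: add app.txt"

# Review the branch before merging; if the agent made a mess:
# git checkout main && git branch -D feature/agent-changes
```

The point is not Git fluency; it is that every agent session should start from a state you can cheaply throw away.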

The tutorial also does not address cost. Claude Code with an Opus 4.6 backend is not cheap, and a beginner who follows this tutorial's advice to iterate rapidly through plan mode, skills, and testing could burn through substantial API credits in a single session. A brief mention of token economics would have been responsible.
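The token economics the tutorial skips are simple arithmetic. The sketch below uses placeholder per-million-token prices, not Anthropic's actual rates; look up current pricing before relying on any number it produces.

```shell
# Back-of-the-envelope API cost for a session.
# args: input_tokens output_tokens price_in_per_Mtok price_out_per_Mtok
# The prices passed in are PLACEHOLDERS for illustration only.
estimate_cost() {
  awk -v i="$1" -v o="$2" -v pi="$3" -v po="$4" \
    'BEGIN { printf "%.2f\n", (i * pi + o * po) / 1000000 }'
}

# e.g. 500k input tokens and 50k output at hypothetical $5/$25 per Mtok:
estimate_cost 500000 50000 5 25    # -> 3.75
```

Even with made-up prices, the shape of the math makes the point: rapid iteration through plan mode and testing multiplies input tokens quickly, and a beginner should know that before a long session, not after the invoice.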

Bottom Line

Chase's tutorial is a competent orientation for absolute beginners, strongest when it discusses prompting philosophy and context management, weakest when it glosses over security and foundational software concepts. The central paradox it cannot resolve is the one it correctly identifies: AI tools let people build things they do not understand, and the tutorial itself is an instance of this pattern, teaching users to operate a powerful tool without building the mental models needed to use it safely. The advice to "take an active role in your education" is the most important line in the entire thirty-one minutes, and it deserves more than the brief aside it receives.


Sources

Learn 90% of Claude code in 31 minutes

by Chase H · Chase H AI

If you're just getting started with Claude Code, it can feel incredibly confusing, especially if you don't come from any sort of technical background. And it doesn't help that half the advice out there is completely outdated: people telling you to use CLAUDE.md files you don't need, pushing you towards MCP servers that don't make sense, and generally promoting workflows that completely pollute your context window. But in the next 30 minutes, we're going to completely cut through the BS.

We're going to talk about what actually matters so you can get the most out of Claude Code in 2026. So, let's begin by talking about how you can first get Claude Code installed, and then jump into the first confusing part, which is where do you use it? Because there are like four different ways you can use Claude Code. Now, the install is very easy.

If you just Google "Claude Code install," it will bring you to the Claude Code documentation page, and it gives you the one command you need to run depending on your operating system. I'm on Windows, so I would just copy this, go to the search bar, look up PowerShell, open up your version of the terminal, and then paste it in there and run it.

Now, does that mean you have to use Claude Code inside the terminal? What about things like Claude Code in the Claude desktop app? What about Cowork? And what about IDEs like VS Code, Cursor, or Antigravity?

I heard you can use Claude Code inside of there. And I also heard you can use Claude Code through a browser. It's kind of confusing. There are almost too many options.

Which one should you use? Well, the good news is you can't really go wrong with any of them. Even if you decide to use something like Cowork, for about 95% of people and your common use cases, whatever you can do in Cowork, you can do inside the terminal, and everything in between. Now, just understand it's on a spectrum, and it's a spectrum of control.

If I'm inside the terminal, I have more control and insight into what Claude Code is doing on my machine at any one time. On the other end of the spectrum, we have Cowork, and Cowork trades some ...