The Terminal as Operating System
Chase's roundup of ten command-line tools for Claude Code arrives at a moment when the AI coding ecosystem is undergoing a quiet but significant architectural shift. The thesis is straightforward: Claude Code lives in the terminal, CLI tools live in the terminal, and the friction of Model Context Protocol servers is increasingly unnecessary overhead. Whether that thesis holds up under scrutiny depends on what kind of work developers are actually doing.
The most striking claim in the piece is the directional one about MCPs versus CLIs:
We're moving away from MCPs. We're moving into CLIs because it just makes sense. Claude Code lives in the terminal. CLIs live in the terminal. There's no overhead. It's like just a straight connection and allows Claude Code to do the most with the least amount of tokens.
This framing deserves some pushback. MCPs were designed to solve a real problem: giving language models structured, discoverable access to external services with well-defined schemas. CLI tools solve a different problem: giving humans (and now AI agents) scriptable access to services through text commands. The fact that Claude Code can use both does not make them interchangeable. A CLI tool that outputs unstructured text requires the model to parse that output, which is itself a source of errors and token consumption. The "90,000 fewer tokens" figure Chase cites from the Playwright comparison is compelling, but a single benchmark does not establish a universal principle.
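The parsing cost is easy to see in miniature. Many CLIs ship machine-readable output modes precisely to avoid it; GitHub's gh, for instance, accepts a --json flag with a field list. The sketch below uses canned data standing in for a live gh call:

```shell
# Illustrative data standing in for the output of a command like
# `gh pr list --json number,title,state` (no live API call here).
cat <<'EOF' > prs.json
[{"number": 42, "title": "Fix build", "state": "OPEN"}]
EOF

# Structured output lets the consumer pick fields by name instead of
# parsing human-oriented columns of text, which is where an agent
# burns tokens and makes mistakes.
python3 -c "import json; print(json.load(open('prs.json'))[0]['title'])"
```

A CLI without such a flag pushes that parsing work onto the model, which is exactly the overhead the token comparison measures.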
The Meta-Tool and the Bootstrapping Problem
The list opens with CLI Anything, a tool that generates CLI tools from open-source projects. This is a genuinely interesting concept from the creators of LightRAG, and it raises a question that Chase breezes past: what happens when the generated CLI is wrong? If Claude Code uses CLI Anything to create a Blender CLI wrapper, and that wrapper misrepresents Blender's capabilities or mishandles edge cases, the failure mode is subtle. The AI agent will confidently use a tool that does not work correctly, and the developer may not realize it until much later.
This is a CLI tool that creates other CLI tools. This thing is completely open-source and it's from the makers of LightRAG and RAG-Anything. So these guys are kind of titans in the AI open source world.
The pedigree of the creators is not in question. But meta-tools that generate other tools compound the trust problem rather than solving it. Each layer of abstraction is another place where assumptions can silently break.
NotebookLM as a Video Processing Proxy
The NotebookLM CLI integration is arguably the most practically useful tool on the list, and Chase identifies exactly why:
It solves one of the issues with Claude Code, and Sonnet and Opus in general: the fact that they can't really handle videos. NotebookLM can. I can just throw YouTube URLs at NotebookLM. It will do all the analysis for me for free because these tokens are on Google servers, not ours.
This is a clever architectural move. Rather than waiting for Claude to develop native video understanding, developers can use NotebookLM as a preprocessing layer. The economics are appealing too: Google absorbs the compute cost of video analysis, and Claude Code receives structured text output. The catch, which goes unmentioned, is dependency risk. Google could change NotebookLM's capabilities, pricing, or API access at any time, and a workflow built on this integration would break without warning.
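The pattern is a preprocessing pipeline: the video goes to NotebookLM, and only its text output ever enters Claude Code's context. As a pseudocode sketch, where `nlm` is a hypothetical placeholder for whatever NotebookLM CLI wrapper is used (its real commands and flags will differ):

```
# Hypothetical commands -- the actual NotebookLM CLI wrapper's
# interface is not shown in the article.
nlm add-source --notebook research "https://youtube.com/watch?v=..."
nlm summarize  --notebook research > summary.md

# Claude Code then reads summary.md: cheap, structured text instead
# of video it cannot process. If Google changes the service, this
# layer is the part that breaks.
```

The design choice worth noting is that the expensive, fragile step is isolated behind a file boundary, so a broken upstream service fails loudly at summarization time rather than silently mid-session.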
The Obvious Picks and the Dangerous Ones
Several tools on the list fall into the "obvious infrastructure" category. GitHub CLI, Vercel CLI, Supabase CLI, and FFmpeg are all well-established tools that any developer working in those ecosystems would already know about. Their inclusion feels more like padding than discovery. Chase acknowledges this with the GitHub CLI:
If we are doing anything where we are writing code and we want to push to GitHub, there is no reason why we wouldn't just use the GitHub CLI to do this, right?
Fair enough, but this is table stakes, not a revelation.
The Stripe CLI entry is more interesting, not because of the tool itself, but because of the tension it reveals. Chase correctly notes that Stripe's web interface is painful to navigate and that the CLI can automate product creation. But then comes the caveat:
When you are dealing with things that have to do with money and transactions, like obviously you still want to test these out by hand.
This is the right instinct, and it applies far more broadly than Chase lets on. Any CLI tool that touches production systems with real consequences, whether financial, security, or data integrity, requires a different level of scrutiny than one that manipulates video files. The article treats all ten tools with roughly equal enthusiasm, but the risk profiles are wildly different.
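The Stripe CLI mirrors the REST API as resource subcommands, so the automation Chase describes amounts to one-liners run against a test-mode key. A sketch of the shape (flag names follow the CLI's API-mirroring convention and may differ slightly; the product ID is a placeholder):

```
# Runs against the test-mode key the CLI is configured with.
stripe products create --name "Pro Plan"
stripe prices create --product prod_XXXXXXXX \
  --unit-amount 2000 --currency usd

# Verify the result by hand in the test dashboard before anything
# similar runs in live mode -- exactly the caveat above.
```

The test-mode/live-mode split is what makes agent-driven Stripe automation tolerable at all: the agent can iterate freely where mistakes are free, and the human gate sits at the promotion to live.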
The Skills Tax
A recurring theme is the need for "skills," essentially prompt files that teach Claude Code how to use each CLI tool effectively. Chase frames this as a minor installation step, but it represents a real maintenance burden that scales poorly:
Skills aren't a huge context window drag, you know. But if you have too many of them, triggering the right one becomes a problem.
This is an underappreciated point that deserves more attention than it receives. Every skill loaded into Claude Code's context competes for the model's attention. As the number of CLI tools grows, developers face a curation problem: which tools deserve permanent context space, and which should be loaded on demand? Chase's own advice for the Google Workspace CLI, to have Claude Code analyze the repo and recommend which skills to install, is a pragmatic solution, but it also reveals the underlying tension. The "just install everything" approach does not scale.
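The curation problem is visible in the skill format itself. A Claude Code skill is a directory containing a SKILL.md file, and the frontmatter description is what the model uses to decide when to invoke it, so every installed skill adds one more description competing in the same routing decision. A minimal illustrative example (the skill name and contents here are hypothetical):

```
---
name: video-transcode
description: Use when the user asks to convert, compress, or resize
  video files. Wraps common ffmpeg invocations.
---

# Video transcode skill
Prefer `ffmpeg -i input.mp4 -c:v libx264 -crf 23 output.mp4` for
general-purpose H.264 transcodes; ask before overwriting files.
```

Ten such descriptions are easy to discriminate between; fifty overlapping ones are not, which is why "install everything" degrades precisely as the ecosystem Chase is celebrating grows.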
The Security Elephant
The Google Workspace CLI entry, saved for last, is where the article's breezy tone becomes genuinely concerning. Giving an AI agent access to email, documents, and spreadsheets is not a productivity hack. It is a security decision with significant implications. Chase mentions sandboxing and Google's Armor feature for prompt injection protection, but the treatment is superficial:
Do we necessarily want Claude Code to have access to all our emails? But luckily, it's not too hard to set up the GWS CLI tool in a way where we almost like sandbox Claude Code.
The word "almost" is doing enormous work in that sentence. Prompt injection attacks against AI agents with tool access are an active area of security research, and the defenses are not yet mature. A developer who follows this advice without understanding the threat model could expose sensitive communications to exfiltration through carefully crafted emails that manipulate Claude Code's behavior.
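Claude Code does offer a concrete mechanism for narrowing what an agent may run: the permissions block in .claude/settings.json, which allowlists or denies specific tool invocations. A sketch of a restrictive posture for an email-capable CLI (the `gws` subcommands are hypothetical; the rule syntax is Claude Code's):

```
{
  "permissions": {
    "allow": [
      "Bash(gws mail list:*)",
      "Bash(gws mail read:*)"
    ],
    "deny": [
      "Bash(gws mail send:*)",
      "Bash(curl:*)"
    ]
  }
}
```

Note what this does and does not buy: denying send and outbound-network commands narrows the exfiltration paths, but it does nothing about injection through the content the agent reads. A malicious email can still steer anything the allowlist permits, which is why "almost sandboxed" is not sandboxed.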
Bottom Line
Chase's list is a useful survey of the CLI-centric approach to extending Claude Code, and the core insight about CLIs being a natural fit for terminal-native AI agents is sound. The NotebookLM integration and CLI Anything represent genuinely novel approaches to capability extension. But the article's enthusiasm outpaces its caution. The shift from MCPs to CLIs is not as clean as presented, the security implications of tools like GWS deserve far more than a paragraph, and the skills management problem will only get worse as the ecosystem grows. Developers would be well served to adopt two or three of these tools deeply rather than bolting on all ten and hoping Claude Code figures out when to use each one.