The Pitch
Anthropic just released something most people missed. While everyone focused on Opus 4.6 benchmarks, they quietly shipped agent teams inside Claude Code — and it represents a genuine leap forward. This isn't just parallel sub-agents running in isolation. These agents talk to each other, coordinate their work, and report to a team lead. They function like an actual development shop. After testing it extensively, the author believes this could fundamentally change how developers build complex projects.
What Agent Teams Actually Are
Agent teams aren't standard sub-agents working in parallel. They're something more sophisticated: a coordinated team with a middle manager overseeing multiple specialized agents who can communicate directly with each other.
In a typical sub-agent setup, you have three agents — say one handling UI, one for backend, one for databases. They operate like freelancers hired for specific tasks. They complete their individual work and return results to the main Claude Code instance. They never speak to each other. The database agent has no idea what the backend agent is doing.
Agent teams flip that dynamic. When you spin up these specialized agents, they now have a team lead coordinating everything. More importantly, the sub-agents can communicate directly — backend talks to UI, UI talks to database, and so on. It's a real team dynamic where multiple Claude Code instances run in parallel but share information freely.
How They Differ From Standard Sub-Agents
The key differences come down to communication and coordination:
Standard sub-agents work in silos. Each completes one specific task in isolation and returns results. They're essentially mercenaries hired for individual jobs — do the task, report back, done.
Agent teams create a middle manager layer. Sub-agents report to this team lead who coordinates everything and ensures all pieces fit together logically. The agents can talk to each other directly rather than only communicating through the main instance.
For complex projects requiring multiple integrated modules, agent teams produce better outcomes. For simpler one-off tasks, standard sub-agents remain more efficient — they use fewer tokens since there's no coordination overhead.
Anthropic's documentation identifies four areas where agent teams excel: research and review, building new modules or features, debugging with competing hypotheses, and cross-layer coordination between different system components.
How to Enable the Feature
Agent teams are disabled by default. This experimental feature requires setting an environment variable in settings.json from 0 to 1. The simplest approach: paste the documentation link into Claude Code and ask it to enable agent teams. It will modify the file automatically. Then restart your Claude Code session.
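As a rough sketch, the change looks something like the fragment below. The article does not name the variable, so CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS here is an illustrative placeholder — check Anthropic's documentation for the actual key. Claude Code's settings.json does support an "env" section for environment variables:

```json
{
  "env": {
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"
  }
}
```

Letting Claude Code apply the edit itself, as the article suggests, avoids guessing at the variable name entirely.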
Crucially, you must explicitly prompt for agent teams. Simply describing a project won't trigger them. You need to use the explicit phrase "create an agent team," or something very similar, because that's the trigger. Without that instruction, Claude Code defaults to its standard single-instance mode.
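A kickoff prompt in the spirit of the article (the exact wording beyond "create an agent team" is illustrative) might read:

```
Create an agent team to build this project: one agent
for the UI, one for the backend API, and one for the
database schema. Have the team lead coordinate how the
modules integrate with each other.
```

The trigger phrase up front is what matters; the rest is ordinary project description.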
What the Comparisons Actually Show
The author ran side-by-side tests between agent teams and standard Claude Code using identical prompts.
For a relatively simple AI-powered proposal generator, both versions produced nearly identical results — functionally equivalent with only minor UI differences favoring the teams version. No meaningful gap emerged for straightforward applications.
The more complex internal dashboard project revealed clearer distinctions. The agent teams version built six separate modules that integrated coherently, among them client pages with status and retainers, a projects page functioning as a Kanban board with subtasks and time entries, an invoices module tied to time tracking, and a settings section — all working together seamlessly.
The teams version delivered noticeably better UI polish across the board. The standard version produced functional but less refined interfaces. However, the token costs ran significantly higher — approximately 330,000 tokens for one complex dashboard versus substantially fewer for single-agent work.
Counterpoints
Critics might note that the dramatic improvements claimed don't hold up in simpler applications. In the first comparison, virtually no difference existed between teams and standard Claude Code. The author himself admitted it wasn't "necessarily a huge mind-blowing difference" even on the more complex dashboard — calling the gains primarily UI polish rather than fundamental capability.
The token costs represent real tradeoffs. Agent teams consume significantly more resources due to coordination overhead, which matters for users on tight budgets or limited compute plans.
The feature remains experimental. Anthropic continues building it out — it's not a finished product and may change substantially as they develop further.
Bottom Line
Agent teams represent genuine innovation in coordinated AI development. The ability for multiple specialized agents to communicate directly rather than through a single instance transforms how complex applications get built. For large, multi-module projects requiring integration across UI, backend, database, and other layers, this approach clearly outperforms standard sub-agents — producing more polished results with better coherence.
Watch for two developments: first, Anthropic's continued refinement of the feature as it moves past experimental status; second, how token costs evolve as users deploy larger teams. The gap between simple prompts and complex multi-module work suggests agent teams may matter most precisely where projects are hardest — not on straightforward tasks that both approaches handle equally well.