
45 People, $200M Revenue. The Question Nobody's Asking About AI and Your Team Size.

Author: Nate B Jones

Jones makes a counterintuitive argument: AI hasn't fixed your meetings problem—it exposed that you never had a meetings problem to begin with. The real issue is team size, and most organizations are still operating with teams three to ten times too big. Using research from evolutionary psychology, military structure, and software engineering, he shows why five-person teams powered by AI are producing 5-10x more value than traditional approaches—and why that changes everything about how you should think about organization design.

## The Meeting Problem No One Solves

Meetings have tripled since 2020. Nobody can explain where the new ones come from. You probably spent twelve hours in meetings last week. Studies show it's closer to sixteen hours for people managers, and twenty-three hours for executives. The standup could be a Slack message. The cross-functional sync where eight people attend and two talk. The alignment session that produces another alignment session.

AI note-taking apps are barnacles. They amplify output through a coordination structure that's fundamentally broken. AI did not fix this problem.

## The Real Problem Is Team Size

You think you have a meetings problem, but you don't. You have a team size problem.

Team size determines how we spend every hour of our working days: how many Slack channels we have to monitor, how many approvals we have to wait for, how many people we have to align before we can ship anything. It shapes our costs, our speed, and the quality of every decision the organization makes.

AI broke all of that, and we never figured out the root cause.

## The Number Five

The number of communication pathways between people in a group is defined mathematically. With five people, it's ten pathways. Every person can hold the full map in their head. With ten people, it jumps to forty-five pathways. Twenty people, one hundred ninety pathways.
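These counts follow the handshake formula: a group of n people has n(n-1)/2 pairwise pathways. A minimal sketch to verify the article's figures (the function name is mine, for illustration):

```python
def pathways(n: int) -> int:
    """Pairwise communication pathways in a group of n people: n choose 2."""
    return n * (n - 1) // 2

# The article's figures: 5 people -> 10, 10 -> 45, 20 -> 190
for n in (5, 10, 20):
    print(f"{n} people: {pathways(n)} pathways")
```

Note the growth is quadratic: doubling the team from five to ten more than quadruples the pathways.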

Robin Dunbar's research on primate neocortex size established that human brains have layered limits on relationship complexity. Five for your core group, fifteen with deep trust, fifty for meaningful working relationships, one hundred fifty for stable social connections.

Army mathematicians confirmed the pattern empirically. Groups of five communicated most effectively, with effectiveness peaking again at fifteen, fifty, and one hundred fifty. The military tested this because their stakes are so high they have to get it right.

A US infantry fire team is four people plus a leader. The layers above track Dunbar's hierarchy almost exactly—the squad, the platoon, the company.

Jeff Bezos landed on the same number from a different direction with his two-pizza team. Fred Brooks got there in 1975 through software engineering, in The Mythical Man-Month: adding people to a project made it slower, not faster. It's been a long time since 1975, and executives still think adding software engineers makes things go faster. It doesn't. The communication overhead always overwhelms the added capacity.

Three disciplines—evolutionary psychology, military preparedness, software engineering—converged on the same answer for group size. The human brain can sustain deep, high-context coordination with about five people.

## What AI Actually Changed

The standard narrative says AI makes people more productive so teams can be smaller. It's true as far as it goes, but it's not far enough.

Before AI, a five-person team produced X output. Adding a sixth person gave you more capacity, but with diminishing returns, because coordination overhead grew faster than output. Tobi Lütke at Shopify calls this a ten-times loss of productivity with each addition beyond five.

After AI, the same five-person team produces five to ten times more than before. The evidence is in the revenue-per-employee data of AI-native companies. It is stunningly higher than at typical SaaS companies: Lovable, Midjourney, ElevenLabs, Anthropic, OpenAI.

The SaaS benchmark for revenue per employee has been in the hundreds of thousands of dollars, usually below half a million. AI-native companies typically run at five to ten times that.

Here is what reframes this conversation: if each person on a five-person team is producing in the range of two to three million dollars a year in value, the coordination cost of the sixth person is no longer a minor tax. It's a catastrophe.

That sixth person doesn't just need to be good. They need to justify their coordination cost against a baseline where every existing member generates output that previously required an entire department.

The penalty for adding a human to a team increases as the per-human output increases. When each person produced two hundred fifty thousand dollars a year, the coordination cost of person number six was manageable. At two million dollars per person, it's measured in millions of lost productivity.
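As a back-of-the-envelope illustration of that penalty (the 10%-per-extra-member drag factor is my assumption for the sketch, not a figure from the article), the tax the sixth hire imposes scales linearly with per-person output:

```python
def coordination_tax(per_person_output: float, team_size: int,
                     base: int = 5, drag_per_extra: float = 0.10) -> float:
    """Toy model: each member beyond `base` shaves `drag_per_extra`
    off every teammate's output. Both knobs are illustrative assumptions."""
    extra = max(0, team_size - base)
    lost_fraction = min(1.0, extra * drag_per_extra)
    return per_person_output * team_size * lost_fraction

# The same sixth hire, a very different tax:
print(coordination_tax(250_000, 6))    # manageable, on the order of $150k/yr
print(coordination_tax(2_000_000, 6))  # measured in millions, ~$1.2M/yr
```

The point of the sketch is the shape, not the exact numbers: whatever drag factor you believe in, the dollar cost of coordination grows in lockstep with per-person output.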

This is why your meetings are killing you. Every meeting exists because someone decided coordination was worth the cost. When your per-person output was two hundred fifty thousand dollars, it often was worth the cost. At two million dollars per person, most of those meetings end up net negative, destroying value at a rate that scales with how productive your people are.

## Volume Is Free. Correctness Is Scarce.

Every conversation about AI and teams obsesses over volume: more code, more content, faster. This leads to disastrously incorrect organizational decisions.

Volume is no longer a scarce resource. AI made volume cheap. What's scarce is correctness—whether the thing you shipped is actually right, architecturally sound, strategically coherent, right for the customer, polished, free of the subtle errors that look fine in a demo and compound into real failures in production.

A Harvard Business School field experiment published in 2025 tested this directly. Researchers studied seven hundred seventy-six professionals at Procter & Gamble on real innovation challenges. Teams using AI were three times more likely to produce ideas in the top ten percent for quality: not three times more output, but three times more likely to be right at the highest level.

The researchers also found that AI broke functional silos. Both R&D and marketing produced more balanced integrated ideas with AI, extending each person's competence into adjacent domains.

This is the mechanism that makes small AI-augmented teams more powerful than large ones. Five really excellent people using AI can each operate across a broader domain than they could alone. They don't need ten specialists in ten narrow lanes. They need five generalist architects who use AI to extend their reach and who use each other as verification against AI's errors.

But verification is the catch. Every piece of AI output requires human judgment to validate. In a five-person team, each person reviews a manageable volume against a coherent shared context. And in a world of agentic workflows, where agents review for correctness, the team can layer up a level of abstraction and manage that workflow. They know what right looks like because they all hold the same mental model, and that is what lets them scale.

In a twenty-person team, the AI output multiplies by another factor of four, but the shared context degrades catastrophically. So they hold meetings to synchronize. Those meetings generate more decisions, more AI tasks, more output to verify, more meetings.

Wes McKinney described this as the agentic tarpit: agent sessions producing contradictory plans, AI generating technical debt at machine speed. The working prototype has become trivially easy to produce. Getting from prototype to production still requires, for most organizations, a fair bit of human judgment.

Humans remain involved on the way to production, and the humans doing that judgment need a shared model of what they're building. The larger the team, the weaker that shared model.

So a team of five optimizes for correctness, and a team of twenty optimizes for volume. In a world where AI makes volume free, optimizing for volume is optimizing for the wrong thing.

This is why your big teams can feel productive. They produce a lot; there are lots of Jira tickets. It's also why they keep shipping things that don't quite work, that need rework, that require postmortems, and that spawn follow-up projects to fix the problems created by the last project.

Volume masquerades as progress and leads to pseudo-work. Correctness is progress.

## Two Archetypes: Scouts and Strike Teams

The first archetype is Scouts. Scouts operate alone—one person, full AI toolkit, defined mission. The work is exploration. Is this technology viable? Is this market real? Can we build a prototype? Scouts move fast because they have zero coordination overhead. Their constraint is one person's judgment.

Peter Steinberger demonstrated the Scout model at its extreme. In roughly sixty days, running four to ten coding agents simultaneously, he built OpenClaw, an AI agent Jones has covered extensively, in a language he'd never used. He directed agents at the architectural level while they handled execution. One person, twenty years of judgment, a swarm of agents, and the output was something the world's most valuable companies were desperate to acquire.

But the solo model has limits. It works when the work is exploration—high ambiguity, low coordination, a premium on speed and individual taste. Peter's vision of OpenClaw was something he could create alone. He had it in his head. He made it exist. He made it real.

Anyone who's used OpenClaw will tell you it shipped with lots of holes. A team of one does not work when correctness requires multiple perspectives; as it happened, Peter ended up joining OpenAI to translate his vision at scale. It does not work when the cost of being subtly wrong is very high. And it does not work when the goal is sustained production over a long time.

The one-person model is a great scout. The five-person model is a strike team, and both are correct for different missions.

Strike teams are teams of five people with AI executing where correctness matters. Every person's AI-generated output passes through at least one other brain that shares enough context to catch meaningful errors at the correct level of abstraction.

If you're designing agentic coding systems, you're operating at a level above the code, but you're still operating with a layer of shared context that you can use to catch real issues. So a team of five can cover product, engineering, design, data, and domain expertise—not necessarily with five different hats, but across that team of five together.

That is the real minimum surface area for a complete decision. Below five, you tend to have blind spots. Above five, you tend to have silos. And in a team of five, there is nowhere to hide, which is exactly what you want.

Scouts explore, strike teams execute. Scouts map territory, strike teams build the road.

Most organizations currently have neither of these units. They have oversized teams that are too slow for exploration and too diluted for precision execution, burning their best people out on coordination overhead and significantly lowering the overall value of their output. And they wonder why AI doesn't work and why they're stuck in meetings.

## The Capacity Question

What drives Jones crazy about the conversation around AI and team size is how everyone frames it as a cost story. We can do the same work with fewer people. That's the headline. That's the strategy deck. Same mission. Fewer bodies. Lower burn.

It's a staggering failure of imagination for companies with strong talent. You have five hundred people. Each just became, at least potentially, five to ten times more capable. The correct response is not: I can run my company with fifty people. The correct response is: I have the capacity of twenty-five hundred to five thousand people.

What were you previously unable to do?

Your five-hundred-person company just acquired the productive capacity of, say, a three-thousand-person company without hiring anyone, without raising capital, without building new offices. You did not get a cost reduction. You got an army.
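The capacity arithmetic here is plain multiplication; a minimal sketch using the article's 5-10x range:

```python
def effective_capacity(headcount: int, low: int = 5, high: int = 10) -> tuple[int, int]:
    """Headcount-equivalent capacity if each person is 5-10x more capable."""
    return headcount * low, headcount * high

lo, hi = effective_capacity(500)
print(f"500 people now carry the capacity of {lo} to {hi}")  # 2500 to 5000
```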

The question is whether you have the strategic vision to deploy it, or whether you're going to run what is effectively a fleet of aircraft carriers on the same fishing route your trawler used to run.

The companies that get this right aren't starting with the assumption that they need to cut heads. Some people will deliberately not want to make the transition to AI; that's sad. But where people are excited to move, you can recompose them into strike teams. You can massively expand the number of fronts you're building on, and you can deliver an extraordinary amount of value with a better team size equipped with AI.

Think of a SaaS company (still worth thinking about, because they're still systems of record) with four hundred engineers maintaining one product, restructured into eighty strike teams. How much could they build? Could they build a platform with ten products? It's entirely plausible.

Think of a regional insurer. They have two hundred people. They serve three states. If they were reorganized into forty strike teams powered by AI, what could they do?

## Counterpoints

Critics might note that Dunbar's number is based on social relationships and not necessarily applicable to task-based coordination in professional settings—the research focused on personal networks, not work output. A counterargument worth considering: the revenue per employee metric conflates productivity with value capture—AI-native companies may simply be better at monetizing their output rather than producing more of it. Additionally, the piece assumes that AI-augmented teams can operate across domains without specialization, but some industries genuinely require deep expertise in narrow areas where five generalists cannot replace fifty specialists.

## Pull Quote

"In a world where AI makes volume free, optimizing for volume is optimizing for the wrong thing."

## Bottom Line

Jones's strongest argument is structural: when AI makes volume cheap, teams optimized for volume are optimizing for the wrong thing. His use of Dunbar's research provides a compelling framework for why five-person strike teams outperform larger organizations—but his weakest point is assuming that social relationship limits translate directly to professional coordination. The strategic insight is real: most organizations should stop trying to do more with less and instead think about deploying dramatically expanded capacity. Watch for whether your organization is making the mistake of cutting team size rather than redistributing it into strike teams.
