The bitter lesson versus the garbage can

Ethan Mollick challenges a foundational assumption of corporate AI strategy: that you must first fix your messy processes before deploying artificial intelligence. Instead, he argues that the very chaos organizations fear might be irrelevant if AI can be trained solely on desired outcomes, rendering decades of operational refinement potentially obsolete.

The Illusion of Control

Mollick opens by invoking a classic study by Ruthanne Huising, in which teams mapping their own company's workflows discovered a startling reality. The exercise revealed "entire processes that produced outputs nobody used, weird semi-official pathways to getting things done, and repeated duplication of efforts." The emotional core of this discovery is captured when a manager shows the map to the CEO, who "sat down, put his head on the table, and said, 'This is even more fucked up than I imagined.'" The executive realized his grasp on the organization was "imaginary."

The Garbage Can Model

This anecdote sets the stage for the "Garbage Can Model" of organizational theory, which views companies not as rational machines but as chaotic bins where problems and solutions collide randomly. Mollick notes that this messiness is precisely why scaling AI is so difficult; traditional automation requires clear rules, yet "even though 43% of American workers have used AI at work, they are mostly doing it in informal ways, solving their own work problems." The prevailing wisdom suggests companies must spend months untangling these knots before they can automate. Mollick finds this approach intuitive but potentially wrong.
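
The Garbage Can Model is concrete enough to simulate; the original paper by Cohen, March, and Olsen in fact included a computer simulation. Below is a minimal toy sketch in that spirit, not from Mollick's post, with all names and probabilities invented for illustration: a "decision" happens only when a live problem, an available solution, and a decision-maker happen to collide.

```python
import random

random.seed(42)

# Toy garbage-can simulation, loosely in the spirit of Cohen, March, and
# Olsen. All names and probabilities here are invented for illustration.
PROBLEMS = [f"problem-{i}" for i in range(10)]
SOLUTIONS = [f"solution-{i}" for i in range(6)]
DECIDERS = [f"manager-{i}" for i in range(4)]

decisions = []
unresolved = set(PROBLEMS)

for tick in range(20):  # 20 rounds of organizational life
    # Each round, elements drift into the "can" at random.
    live_problems = [p for p in unresolved if random.random() < 0.3]
    solution = random.choice(SOLUTIONS) if random.random() < 0.5 else None
    decider = random.choice(DECIDERS) if random.random() < 0.6 else None

    if live_problems and solution and decider:
        # Which problem gets "solved" is a matter of timing, not of fit.
        chosen = random.choice(live_problems)
        decisions.append((tick, chosen, solution, decider))
        unresolved.discard(chosen)

print(f"{len(decisions)} decisions made; {len(unresolved)} problems never addressed")
for tick, problem, solution, decider in decisions[:3]:
    print(f"t={tick}: {decider} attached {solution} to {problem}")
```

Run repeatedly, the toy makes the model's point: which solutions get attached to which problems depends on timing and attendance, not on design.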

"The effort companies spent refining processes, building institutional knowledge, and creating competitive moats through operational excellence might matter less than they think."

The Bitter Lesson

The pivot in Mollick's argument comes from computer scientist Richard Sutton's "Bitter Lesson," a concept suggesting that human attempts to encode expertise into AI are often less effective than simply throwing more computing power at the problem. Mollick illustrates this with chess: early attempts to beat humans involved hard-coding centuries of strategy, but "Deep Blue... used some chess knowledge, but combined that with the brute force of being able to search 200 million positions a second." Later, AlphaZero beat humans with "no prior knowledge of these games at all," learning purely by playing against itself.

Mollick argues that this pattern is about to collide with the workplace. He contrasts two types of AI agents: those built with "carefully crafted" rules and those trained via reinforcement learning on outcomes. He tested both by asking them to create a graph comparing chess ratings. The hand-crafted agent, Manus, followed a rigid, human-designed to-do list. The outcome-trained agent, ChatGPT agent, "charted whatever mysterious course was required to get me the best output it could." The result? The outcome-trained agent produced a working Excel file and found more credible sources, while the hand-crafted version failed.
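
The architectural contrast Mollick describes can be sketched in a few lines. The toy below is illustrative only, not the actual internals of Manus or ChatGPT agent: the first agent executes a human-authored plan exactly as written, while the second searches freely and is judged solely by a check on its output.

```python
import random

random.seed(1)

# Illustrative toy, not the internals of Manus or ChatGPT agent.
# The task: produce the string "cab".
TARGET = "cab"

def success(output: str) -> bool:
    """The only thing the outcome-driven agent is evaluated on."""
    return output == TARGET

def process_scripted_agent() -> str:
    """Execute a fixed, human-authored to-do list, step by step."""
    plan = ["c", "b"]          # the plan encodes a wrong assumption
    output = ""
    for step in plan:          # no step ever questions the plan itself
        output += step
    return output

def outcome_driven_agent(max_steps: int = 5000) -> str:
    """Take whatever path works; random choice stands in for a policy."""
    output = ""
    for _ in range(max_steps):
        if success(output):
            return output      # the path taken is irrelevant
        pick = random.choice("abc")
        # Keep progress only if it can still lead to the target;
        # otherwise start over (a crude stand-in for trial and error).
        output = output + pick if TARGET.startswith(output + pick) else ""
    return output

print("scripted agent:", repr(process_scripted_agent()))  # fails: 'cb'
print("outcome agent: ", repr(outcome_driven_agent()))    # finds 'cab'
```

The scripted agent fails not because it lacks capability but because its plan encodes a wrong assumption; the outcome-driven agent never sees the plan at all, which is the property Mollick observed in his graph-making test.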

The implication is stark. If the Bitter Lesson holds, the path to better AI isn't better engineering of the agent's internal logic, but simply "more computer chips and more examples." Mollick writes, "Decades of researchers' careful work encoding human expertise was ultimately less effective than just throwing more computation at the problem." This suggests that the "bespoke knowledge" companies hoard as a competitive advantage may soon be worthless.

Critics might note that this view assumes all organizational tasks are as solvable as chess. Unlike a game with clear rules and a definitive win state, business problems often involve ambiguity, ethical nuance, and human relationships that brute force computation cannot easily navigate. A counterargument worth considering is that without understanding the "why" behind a process, an AI might optimize for the wrong metric or create dangerous shortcuts.

Navigating the Chaos

Mollick concludes by flipping the script on the despairing CEO. If the Bitter Lesson applies to work, the executive doesn't need to fix the broken process; they just need to define the output. "Instead of untangling every broken process, he just needs to define success and let AI navigate the mess." In this future, the undocumented workflows and informal networks that plague organizations become invisible to the AI, which simply learns to produce the desired result regardless of the path taken.

"In a world where the Bitter Lesson holds, the despair of the CEO with his head on the table is misplaced."

This reframing suggests a radical shift in management strategy. Companies that spend years mapping processes might be outpaced by competitors who simply define quality and feed data to powerful models. The competitive moat shifts from "how well we know our own operations" to "how clearly we can define success and how much data we have."
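
In practice, "defining success" might look like an outcome rubric that can be scored automatically and used to select, or train on, outputs. The sketch below is a hedged illustration; the criteria, weights, and field names are invented rather than drawn from Mollick's post.

```python
# A hypothetical outcome rubric: score deliverables on results alone.
# Criteria, weights, and field names are invented for illustration.
def score_outcome(artifact: dict) -> float:
    checks = [
        (0.4, artifact.get("file_opens", False)),            # deliverable works
        (0.4, artifact.get("numbers_match_sources", False)), # data is correct
        (0.2, artifact.get("sources_credible", False)),      # provenance holds
    ]
    return sum(weight for weight, passed in checks if passed)

# Candidate outputs are ranked purely by outcome; the messy, undocumented
# process that produced each one never enters the evaluation.
candidates = [
    {"file_opens": True, "numbers_match_sources": True, "sources_credible": True},
    {"file_opens": True, "numbers_match_sources": False, "sources_credible": False},
]
best = max(candidates, key=score_outcome)
print(score_outcome(best))  # -> 1.0
```

The same kind of function could serve as a reward signal for reinforcement learning on outcomes, the training regime Mollick contrasts with hand-crafted pipelines.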

Bottom Line

Mollick's most compelling insight is that the obsession with process optimization may be a distraction in the age of outcome-trained AI, potentially rendering traditional operational excellence obsolete. However, the argument's greatest vulnerability lies in assuming that complex, human-centric organizational problems can be solved as easily as a chess game, ignoring the risks of opaque decision-making in high-stakes environments. Readers should watch for early evidence of whether outcome-trained agents can truly navigate the ethical and logistical gray areas of the real world without human oversight.

Sources

The bitter lesson versus the garbage can

by Ethan Mollick · One Useful Thing

One of my favorite academic papers about organizations is by Ruthanne Huising, and it tells the story of teams that were assigned to create process maps of their company, tracing what the organization actually did, from raw materials to finished goods. As they created this map, they realized how much of the work seemed strange and unplanned. They discovered entire processes that produced outputs nobody used, weird semi-official pathways to getting things done, and repeated duplication of efforts. Many of the employees working on the map, once rising stars of the company, became disillusioned.

I’ll let Prof. Huising explain what happened next: “Some held out hope that one or two people at the top knew of these design and operation issues; however, they were often disabused of this optimism. For example, a manager walked the CEO through the map, presenting him with a view he had never seen before and illustrating for him the lack of design and the disconnect between strategy and operations. The CEO, after being walked through the map, sat down, put his head on the table, and said, ‘This is even more fucked up than I imagined.’ The CEO revealed that not only was the operation of his organization out of his control but that his grasp on it was imaginary.”

For many people, this may not be a surprise. One thing you learn studying (or working in) organizations is that they are all actually a bit of a mess. In fact, one classic organizational theory is actually called the Garbage Can Model. This views organizations as chaotic "garbage cans" where problems, solutions, and decision-makers are dumped in together, and decisions often happen when these elements collide randomly, rather than through a fully rational process. Of course, it is easy to take this view too far - organizations do have structures, decision-makers, and processes that actually matter. It is just that these structures often evolved and were negotiated among people, rather than being carefully designed and well-recorded.

The Garbage Can represents a world where unwritten rules, bespoke knowledge, and complex and undocumented processes are critical. It is this situation that makes AI adoption in organizations difficult, because even though 43% of American workers have used AI at work, they are mostly doing it in informal ways, solving their own work problems. Scaling AI across the enterprise is hard because traditional automation requires clear rules and defined ...