Making AI work: Leadership, lab, and crowd

Ethan Mollick cuts through the hype cycle to deliver a sobering truth: while AI is already supercharging individual workers, most organizations are failing to capture those gains because they haven't relearned how to innovate. The piece's most striking claim is that the bottleneck isn't the technology itself, but the atrophied muscles of corporate management that have outsourced process design for decades. For busy leaders trying to navigate this shift, Mollick offers a diagnostic framework that explains why productivity numbers look flat despite widespread tool adoption.

The Productivity Paradox

Mollick begins by dismantling the assumption that AI adoption automatically equals organizational success. He points to a disconnect between individual experience and corporate bottom lines. "AI boosts work performance," he writes, citing data where workers report cutting task times in half or even tripling their output. Yet, he notes, "Companies are typically reporting small to moderate gains from AI so far, and there is no major impact on wages or hours worked." This is the central puzzle: why does the tool work for the person but not the company?

The author argues that the answer lies in the nature of organizational inertia. For years, firms have relied on consultants and off-the-shelf software to solve structural problems, a strategy that fails here because "Nobody has special information about how to best use AI at your company, or a playbook for how to integrate it into your organization." This observation is crucial; it shifts the burden of innovation from external vendors back to internal leadership. Critics might argue that expecting every company to become an R&D lab is unrealistic for smaller firms, but Mollick's point is that the cost of inaction is higher than the cost of experimentation.

We are all figuring this out together.

Leadership as the Catalyst

The commentary then pivots to the human element, specifically the role of executive vision. Mollick contends that "AI starts as a leadership problem," where the primary failure is a lack of a vivid future state. He references viral memos from CEOs who signal urgency but fail to answer the questions that actually motivate workers: "What will work be like in the future? Will efficiency gains be translated into layoffs or will they be used to grow the organization?"

This framing is effective because it addresses the psychological barrier of fear. Workers are often hiding their AI use, a phenomenon Mollick calls "Secret Cyborgs," because they suspect that revealing productivity gains will lead to punishment rather than reward. "There are more reasons for workers to not use AI publicly than to use it," he observes. The author suggests that leadership must actively dismantle these incentives by offering massive rewards for discovery and explicitly decoupling efficiency from layoffs. Without this cultural shift, the technology remains a shadow tool, used privately but never scaled.

The Lab and The Crowd

Mollick proposes a dual-engine approach to bridge the gap between individual experimentation and organizational scale: The Crowd and The Lab. The Crowd represents the frontline employees who are already discovering workflows through trial and error. However, their innovations often remain siloed. To fix this, Mollick argues for a dedicated "Lab," a centralized team of subject matter experts and technologists whose mandate is to "Take prompts and solutions from The Crowd and distribute them widely, very quickly."

This section highlights a shift from abstract strategy to rapid prototyping. The Lab's job is not to write white papers but to "Build fast and dirty products with cross-functional teams, centered around simple prompts and agents." Mollick illustrates this with a personal anecdote about testing an AI agent on a complex financial simulation, noting that the results were "far more thorough, than what I would expect from talented students." The argument here is that organizations need to build their own benchmarks rather than relying on generic industry tests, because "Almost all the official benchmarks for AI are flawed, or focus on tests of trivia, math or coding."

The bottleneck isn't the research anymore, it's figuring out what research to do.

Bottom Line

Mollick's strongest contribution is reframing AI adoption not as a technology rollout but as a fundamental restructuring of organizational design. The argument's vulnerability lies in the immense difficulty of executing this cultural shift; most leaders are ill-equipped to redesign workflows from scratch. The reader should watch for which organizations successfully institutionalize this "Lab" model, as they will likely be the only ones to convert individual speed into systemic advantage.

Sources

Making AI work: Leadership, lab, and crowd

by Ethan Mollick · One Useful Thing

Companies are approaching AI transformation with incomplete information. After extensive conversations with organizations across industries, I think four key facts explain what's really happening with AI adoption:

AI boosts work performance. How do we know? For one thing, workers certainly think it does. A representative study of knowledge workers in Denmark found that users thought that AI halved their working time for 41% of the tasks they do at work, and a more recent survey of Americans found that workers said using AI tripled their productivity (reducing 90-minute tasks to 30 minutes). Self-reporting is never completely accurate, but we have other data from controlled experiments that suggest gains among product development, sales, and consulting, as well as for coders, law students, and call center workers.

A large percentage of people are using AI at work. That Danish study from a year ago found that 65% of marketers, 64% of journalists, and 30% of lawyers, among others, had used AI at work. The study of American workers found over 30% had used AI at work in December 2024, a number that grew to 40% by April 2025. And, of course, this may be an undercount in a world where ChatGPT is the fourth most visited website on the planet.

There are more transformational gains available with today’s AI systems than most currently realize. Deep research reports do many hours of analytical work in a few minutes (and I have been told by many researchers that checking these reports is much faster than writing them); agents are just starting to appear that can do real work; and increasingly smart systems can produce really high-quality outcomes.

These gains are not being captured by companies. Companies are typically reporting small to moderate gains from AI so far, and there is no major impact on wages or hours worked as of the end of 2024.

How do we reconcile the first three points with the final one? The answer is that AI use that boosts individual performance does not naturally translate to improving organizational performance. To get organizational gains requires organizational innovation, rethinking incentives, processes, and even the nature of work. But the muscles for organizational innovation inside companies have atrophied. For decades, companies have outsourced this to consultants or enterprise software vendors who develop generalized approaches that address the issues of many companies at once. That won’t work here, at least for a while. ...