The economy of knowing

Stuart Buck and Aishwarya Khanduja offer a structural diagnosis that the scientific enterprise has lacked for decades: the field is trying to fix a broken operating system while only patching the user interface. They argue that "metascience" has become a confused catch-all term that fails to distinguish between the macro-level political economy of funding and the micro-level psychology of discovery. The distinction is not merely academic; it is the key to understanding why well-intentioned reforms often backfire, producing a system that looks functional on paper while rotting from the inside.

The Economy of Knowing

The authors begin by drawing a parallel to the history of economics, noting that for over 80 years, the field has successfully separated microeconomics (individual decisions) from macroeconomics (aggregated patterns). "Metascience needs the same clarity," they assert. Just as economics analyzes how incentives work in the marketplace for goods, metascience must analyze the incentives in the marketplace for ideas. The core problem, according to Buck, is that we are "trying to repair an epistemic economy without naming the fact that there is an economy, and that economies have both large-scale markets and individual minds."

This framing is powerful because it moves the conversation away from blaming individual scientists for bad behavior and toward analyzing the system that rewards that behavior. The authors suggest that individual rationality often produces collective irrationality. A scientist might make the perfectly logical choice to avoid risky projects to secure tenure, but when every scientist does this, the entire field stagnates. "If metascience were software, we've been trying to fix bugs in the user interface while ignoring the operating system—or vice versa," Buck writes. "We need full-stack development."

If metascience were software, we've been trying to fix bugs in the user interface while ignoring the operating system—or vice versa. We need full-stack development.

The Macro-Micro Divide

The article clearly delineates the two spheres. Macro-metascience concerns the "political economy of funding, conducting, and publishing science at scale," including the governance of agencies like the National Institutes of Health (NIH) and the design of grant mechanisms. Micro-metascience, conversely, focuses on the "phenomenology of discovery": how researchers actually experience reasoning, who they trust, and how they form conviction.

Buck illustrates the disconnect with a vivid example of the "whisper network." He describes the scene at a scientific conference where researchers privately admit they cannot replicate a superstar's work but dare not say so publicly. "The gap between what scientists know privately and what they can say publicly is itself a micro phenomenon with macro consequences," he notes. This silence allows unreliable methods to persist because the macro-level system appears functional, filled with papers and citations, while the micro-level reality is one of epistemic dysfunction.

This observation is particularly sharp when applied to the replication crisis. The authors argue that reforms often fail because they target only one level. For instance, preregistration was introduced at the macro level to combat publication bias. Theoretically, requiring researchers to commit to hypotheses before seeing data should improve quality. However, at the micro level, scientists fearing rejection may "game the system by preregistering every potential hypothesis and outcome." Buck points out that in some fields, researchers now submit vague preregistrations that allow maximum flexibility, rendering the macro-level intervention less effective than anticipated.

Critics might argue that the distinction between micro and macro is too rigid, as scientific culture is deeply embedded in institutional structures. However, the authors' insistence on studying the feedback loops between the two levels offers a necessary corrective to one-sided reform efforts.

The Human Cost of Systemic Failure

The most compelling evidence the authors provide is the case of Katalin Karikó, the Hungarian biochemist whose decades-long effort to develop mRNA therapeutics was repeatedly rejected by the macro-level system. "At the macro level, institutions repeatedly rejected her work: she was demoted and was told her research had no future," Buck writes. Yet her individual persistence at the micro level led to a Nobel Prize and a medical revolution.

This historical context is crucial. It highlights the concept of "Lost Karikós": capable scientists who possessed the right education but were failed by a system that could not tolerate uncertainty. The authors contrast this with the "Lost Einsteins" discussed by economist Raj Chetty, people who never received the necessary education in the first place. Buck suggests we must also attend to those who were educated but crushed by the system's incentives.

We are trying to repair an epistemic economy without naming the fact that there is an economy, and that economies have both large-scale markets and individual minds.

The article also touches on the NIH's "Pioneer Awards," intended to sponsor high-risk research. Buck notes that a scientist who analyzed these awards found that most grantees were actually continuing the same research directions as before, treating the "high-risk" label as a marketing tactic rather than a genuine shift in strategy. This suggests that without a change in the micro-level culture of fear and scarcity, macro-level funding mechanisms are easily co-opted.

Full-Stack Solutions

The authors conclude by calling for "Full-Stack Metascience," a framework that recognizes high-level policy sets the boundary conditions for what counts as science, while epistemic culture determines how individuals react to those boundaries. "The macro-level requirement shapes the micro-level phenomenology of how scientists conceive of their own work," they argue. When the NIH emphasizes "significance" and "impact," it forces researchers to justify exploratory work with applications they cannot yet foresee.

This dual approach demands that reformers answer three questions: What are the macro-level constraints? How will individual scientists experience these constraints? And which feedback loops connect the two? Buck writes, "Only when we can answer all three questions do we understand the system we're trying to change."

The argument is robust, though it places a heavy burden on reformers to understand complex psychological dynamics alongside bureaucratic structures. A counterargument worth considering is whether the sheer scale of the scientific enterprise makes such a "full-stack" approach practically feasible, or if it risks becoming too abstract to guide specific policy changes.

We need to understand how policy shapes phenomenology, and how phenomenology shapes which policies are feasible.

Bottom Line

Buck and Khanduja provide a vital diagnostic framework that explains why decades of scientific reform have yielded mixed results: we have been treating symptoms at the wrong level of the system. The strongest part of their argument is the concrete illustration of how macro-level incentives, like preregistration or high-risk funding, can be subverted by micro-level fears. The biggest vulnerability lies in the difficulty of implementing simultaneous top-down and bottom-up changes in a massive, entrenched bureaucracy. The next step for the field is not just to identify these feedback loops, but to build institutions capable of navigating them.

Deep Dives

Explore these related deep dives:

  • Katalin Karikó

    The article uses Karikó as a key example of micro-metascience: how individual scientific persistence can succeed despite macro-level institutional rejection. Her story of developing mRNA technology despite decades of setbacks illustrates the article's core argument about the gap between individual and systemic scientific dynamics.

  • Preregistration (science)

    The article extensively discusses preregistration as a case study of how macro-level metascience reforms can fail when they ignore micro-level scientist behavior. Understanding the formal methodology and history of preregistration provides essential context for the article's critique.

  • Replication crisis

    The article references reproducibility studies, unreliable methods persisting in fields, and the gap between public scientific discourse and private 'whisper networks' about unreplicable results. The replication crisis is the underlying phenomenon that metascience reforms like preregistration attempt to address.

Sources

The economy of knowing

by Aishwarya Khanduja (Analogue Group) and Stuart Buck (Good Science Project)

For more than 80 years, economics has distinguished between microeconomics (how individual actors make decisions) and macroeconomics (how those decisions aggregate into large-scale patterns). This distinction has proven so valuable that we’ve forgotten how confusing economics was before it.

This micro/macro distinction enabled different methodologies, different types of evidence, and different policy levers. It revealed that what happens at the individual level can look very different when aggregated up to emergent, large-scale, societal patterns. Microeconomics and macroeconomics require different tools, ask different questions, and often reveal different truths.

Metascience needs the same clarity. Just as economics is about how incentives work in the marketplace for goods and services, metascience is about how incentives work in the marketplace for ideas and truth-seeking. And these incentives play out at the same micro- and macro-levels.

The Problem: Everything Is “Metascience”.

Up to this point, “metascience” has been a broad and diffuse category, embracing everything from efforts to improve journal policies on preregistration, to large-scale analyses of citation patterns, to thought pieces on how NIH should change its grantmaking, to fraud detection and reproducibility studies, to ethnographies of labs, to launching new scientific organizations like Convergent Research or Speculative Technologies.

All of the above (and more!) has been lumped under one umbrella. This creates confusion about what metascience actually is, what methods it should use, and how different metascience efforts relate to each other. We’re trying to repair an epistemic economy without naming the fact that there is an economy, and that economies have both large-scale markets and individual minds.

Like financial economies, epistemic systems involve resource allocation (attention, funding, prestige), exchange mechanisms that create incentives (citations, collaborations), and trust (peer review, replication).

And like financial economies, individual rationality can produce collective irrationality. A scientist making the individually rational choice to avoid risky projects can contribute to a collectively irrational system where no one pursues breakthrough ideas.

To help clarify things, we should think of metascience at multiple levels, just like economics. If metascience were software, we’ve been trying to fix bugs in the user interface while ignoring the operating system—or vice versa. We need full-stack development.

The Distinction.

Macro-metascience is about the political economy of funding, conducting, and publishing science at scale: the incentives, governance, institutions, funding mechanisms, and publication systems that make progress possible (or, in many cases, more difficult at ...