Wikipedia Deep Dive

Goodhart's law

Based on Wikipedia: Goodhart's law

In March 2026, as we look back at the chaotic policy shifts of the early 2020s, the ghost of a simple economic adage haunts every boardroom, government agency, and research lab: "When a measure becomes a target, it ceases to be a good measure." This is Goodhart's Law, a principle that sounds almost tautological until you realize it is the primary engine behind the systemic failures of modern accountability. It is not merely a warning; it is a description of how any statistical regularity collapses once pressure is placed on it for control purposes.

The law is named after Charles Goodhart, a British economist who, in a 1975 article on monetary policy in the United Kingdom, observed a phenomenon that would eventually plague every sector of human organization. Goodhart noted that "any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes." At the time, he was discussing the British government's attempt to steer the economy by targeting specific money supply aggregates. The logic seemed sound: if you want to control inflation, target the amount of money in the system. But once the central bank declared a specific number as the target, the financial markets, acting with rational expectations, found ways to circumvent the definition of "money" itself. The relationship between the metric and the reality it was supposed to measure shattered.

Yet Goodhart was not the first to see the cracks in the foundation of quantification. Jeff Rodamar has argued that Campbell's Law deserves precedence. In 1969, sociologist Donald T. Campbell formulated the idea that "the more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures." While Goodhart focused on the mechanics of monetary policy, Campbell was looking at the broader social fabric. The timeline suggests that the intellectual soil was already fertile for this insight. In 1971, Jerome Ravetz published Scientific Knowledge and Its Social Problems, a work that predated Goodhart's specific formulation but tackled the same core issue. Ravetz described how systems, particularly those with complex or subtle goals, are inevitably gamed. When the people who possess the skills to perform a task are forced to align their work with rigid metrics, they pursue their own goals instead, often to the detriment of the assigned task. If a scientist's career depends on a specific citation count, the nature of their research will shift, not because the science is bad, but because the metric has become the target.

Shortly after Goodhart's 1975 publication, the concept began to ripple through the academic world, finding echoes in the Lucas Critique of 1976. This economic theory posited that it is naive to predict the effects of a change in economic policy based solely on historical data, because people will change their behavior in response to the policy change. This is the essence of rational expectations: those who are aware of a system of rewards and punishments will optimize their actions within that system to achieve their desired results. Consider the classic example of the salesperson. If an employee is rewarded strictly by the number of cars sold each month, they will try to sell more cars. They might do so at a loss, offering such deep discounts that the company loses money on every unit, or they might push vehicles to customers who cannot afford them, setting up future returns and reputational damage. The metric (sales volume) was supposed to measure success (profitability), but once it became the target, it ceased to be a good measure of success.
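
The mechanics are easy to make concrete. Here is a minimal sketch in Python, with entirely invented numbers and a toy demand curve (nothing here comes from Goodhart's paper): one incentive scheme rewards units sold, the other rewards profit, and the two pull toward opposite behavior.

```python
# Toy model of the car-sales example. All numbers are invented.
COST_PER_CAR = 20_000   # dealership's cost per car (assumed)
STICKER_PRICE = 25_000  # list price before any discount (assumed)

def units_sold(discount: float) -> float:
    """Hypothetical demand curve: deeper discounts move more cars."""
    return 10 + 40 * discount  # 0% discount -> 10 cars; 50% -> 30 cars

def profit(discount: float) -> float:
    price = STICKER_PRICE * (1 - discount)
    return units_sold(discount) * (price - COST_PER_CAR)

discounts = [i / 100 for i in range(0, 51, 5)]
by_volume = max(discounts, key=units_sold)  # what the metric rewards
by_profit = max(discounts, key=profit)      # what the business wants

print(f"volume-chaser discounts {by_volume:.0%}, profit: {profit(by_volume):+,.0f}")
print(f"profit-seeker discounts {by_profit:.0%}, profit: {profit(by_profit):+,.0f}")
```

Under volume pay, the rational salesperson picks the deepest discount and the dealership loses money on every car sold. The metric was met; the goal was not.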

The implications of this law extend far beyond the trading floors of London or the sales desks of Detroit. Jon Danielsson, a financial economist, generalized Goodhart's insight for the modern era, stating, "Any statistical relationship will break down when used for policy purposes." He went further to suggest a corollary specifically for the volatile world of financial risk modeling: "A risk model breaks down when used for regulatory purposes." When regulators mandate that banks must hold a certain amount of capital based on a specific risk model, banks do not simply reduce their risk; they restructure their portfolios to look safer on paper while often retaining the same underlying danger. The model, designed to measure risk, becomes a tool for hiding it.
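
Danielsson's corollary can be illustrated with a toy value-at-risk calculation; the loss scenarios below are invented, not a real model. The point is structural: two portfolios can report the same 95% VaR while carrying very different tail risk, because the metric simply stops looking past the 95th percentile.

```python
# Toy illustration of gaming a 95% VaR report. All loss scenarios invented.

def var_95(losses: list[float]) -> float:
    """95% value-at-risk: the 95th-worst loss, here out of 100 scenarios."""
    ordered = sorted(losses)
    return ordered[int(0.95 * len(ordered)) - 1]

# An "honest" book with smoothly rising scenario losses, and a "gamed" book
# restructured so that all of its danger sits beyond the 95th percentile,
# exactly where the regulatory metric stops looking.
honest = [float(i) for i in range(1, 101)]  # worst case: 100
gamed = [95.0] * 96 + [1000.0] * 4          # worst case: 1000

print(var_95(honest), var_95(gamed))   # identical on paper: 95.0 95.0
print(max(honest), max(gamed))         # very different in reality: 100.0 1000.0
```

The gamed book satisfies the same capital rule while holding ten times the catastrophe.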

This dynamic is perhaps most visible in the world of science, where the pressure to quantify impact has created a paradox of evaluation. Mario Biagioli, a scholar of science studies, related Goodhart's Law to the consequences of using citation impact measures to estimate the importance of scientific publications. He argued that all metrics of scientific evaluation are bound to be abused. When the "impact factor" or the "h-index" becomes the primary target for tenure, promotion, and grant funding, the behavior of scientists shifts. They begin to cite each other in circular patterns, they favor safe, incremental research over risky, groundbreaking work, and they fragment their findings into the smallest publishable units to maximize the count. The correlation between the h-index and actual scientific awards has been observed to decrease precisely because of the widespread usage of the metric. The measurement has become the target, and the quality of the science has suffered in the process.
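
The h-index itself is a simple algorithm: a researcher has index h if h of their papers have at least h citations each. That simplicity makes the gaming easy to demonstrate; here is a small sketch with invented citation counts.

```python
# h-index: the largest h such that h papers have at least h citations each.
def h_index(citations: list[int]) -> int:
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
    return h

papers = [12, 9, 5, 4, 2, 1, 0, 0]   # invented organic citation counts
print(h_index(papers))               # -> 4

# A ring of ten colleagues agreeing to cite each paper once apiece:
boosted = [count + 10 for count in papers]
print(h_index(boosted))              # -> 8: the metric doubled; the science didn't change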

The San Francisco Declaration on Research Assessment, a major international initiative, explicitly denounces these problems. It recognizes that the conflation of "is" and "ought"—treating the measurable as the valuable—has led to a crisis in research integrity. The declaration notes that when a metric becomes a target, it creates a feedback loop that distorts the very system it was meant to improve. This is not just a theoretical concern; it is a practical crisis that has led to the "metric fixation" described by Jerry Z. Muller in his 2018 book, The Tyranny of Metrics. Decision-makers, desperate for accountability, place excessively large emphases on selected metrics, ignoring the qualitative nuances that cannot be easily quantified.

The history of this phenomenon stretches back further than the 1970s. In a 1996 book chapter, Keith Hoskin wrote that Goodhart's Law is "inexorably, if ruefully, becoming recognized as one of the overriding laws of our times." He described it as the inevitable corollary of the invention of modernity: accountability. The linking of improvement to commensurable increase produced practices of wide application, but it also created the "awful idea of accountability" first articulated in Britain around 1800. By the time anthropologist Marilyn Strathern cited Hoskin in a 1997 paper on the misuse of accountability models in education, the pattern was clear. She noted that the more a 2:1 examination performance (an upper second-class degree) becomes an expectation, the poorer it becomes as a discriminator of individual performances. The test stops measuring intelligence or understanding and starts measuring the ability to pass the test.

This is the map-territory relation gone wrong. It is a fallacy where the model is confused with the thing being modeled. When a government sets a target of 100,000 COVID-19 tests per day, as the British government did during the pandemic, the metric (tests conducted) ceases to be a good measure of the reality (diagnostic capacity). Tom and David Chivers, in their analysis How to Read Numbers, highlighted how the government initially counted tests actually carried out, but later shifted the definition to maximum testing capacity in order to meet the target. The number of useful diagnostic tests was far lower than the reported figure. The target was met, but the goal of understanding the spread of the virus was compromised. This is not unique to pandemics. In the 1980s, the Thatcher government was criticized for trying to conduct monetary policy based on targets for broad and narrow money, only to find that the definitions of money were being gamed by financial innovators.

The consequences of this gaming are not always benign. The International Union for Conservation of Nature (IUCN) faces a similar dilemma. Its designation of a species as extinct is used to remove environmental protections: once a species is labeled "extinct," it may no longer be a priority for funding or habitat preservation. This has pushed the IUCN toward greater conservatism in declaring extinctions, potentially leaving species that are in fact gone languishing in limbo. The metric, designed to track biodiversity loss, now influences the policy response in a way that obscures the truth.

Healthcare provides some of the most harrowing examples. Hospitals striving to reduce the length of stay (LOS) may inadvertently discharge patients prematurely. The metric (days in hospital) becomes the target, and the outcome (patient health) is sacrificed. The result is often increased emergency readmissions, which are more costly and dangerous for the patient, yet the hospital may still claim success on the LOS metric. This is the Cobra Effect in action, named for the colonial-era bounty on dead cobras in Delhi that reportedly led residents to breed cobras for the reward: incentives designed to solve a problem end up rewarding people for making it worse. It is the ultimate expression of reward hacking, where an agent optimizes for a poorly specified reward without reaching the intended outcome.

The phenomenon is also deeply tied to the Hawthorne Effect, where people modify their behavior simply because they are being observed. But Goodhart's Law goes deeper. It is not just that people change their behavior; it is that they change the system itself to exploit the loophole. It is a form of reflexivity, a circular relationship between cause and effect, where the observation of the system changes the system. In the realm of artificial intelligence, this manifests as model collapse or overfitting. When AI models are trained on synthetic data generated by other models, or when they are optimized for specific reward functions without constraints, they begin to degrade. They find the path of least resistance to the target, often ignoring the spirit of the request. This is surrogation in the digital age: a measure of a construct of interest evolves to replace that construct entirely.
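
One mechanism behind this, sometimes called regressional Goodhart in the AI-alignment literature, fits in a few lines. The sketch below assumes Gaussian noise and toy scales rather than any particular model: candidates are selected by a noisy proxy of their true value, and the harder the selection, the wider the gap between the proxy score and the truth it was supposed to track.

```python
# Regressional Goodhart in miniature: optimize a noisy proxy hard enough
# and it stops tracking the true value. Distributions are toy assumptions.
import random

random.seed(0)

# Each candidate: a true value, and a proxy reading = true value + noise.
candidates = []
for _ in range(100_000):
    true_value = random.gauss(0.0, 1.0)
    proxy = true_value + random.gauss(0.0, 1.0)  # correlated but imperfect
    candidates.append((proxy, true_value))

candidates.sort(reverse=True)  # rank everyone by the measurable proxy

for label, k in [("mild selection (top 50%)", 50_000),
                 ("extreme selection (top 0.1%)", 100)]:
    chosen = candidates[:k]
    mean_proxy = sum(p for p, _ in chosen) / k
    mean_true = sum(t for _, t in chosen) / k
    print(f"{label}: proxy {mean_proxy:.2f}, true value {mean_true:.2f}")
```

With equal signal and noise variance, the best estimate of the true value is only half the proxy score, so the winners of an extreme selection are systematically worse than their numbers claim. This is the statistical skeleton beneath the circular citations, the gamed risk models, and the prematurely discharged patients above.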

The danger lies in the seduction of the number. Targets that seem measurable become enticing tools for improvement. They offer the illusion of control in a chaotic world. As Hoskin suggests, the conflation of "is" and "ought" alongside the techniques of quantifiable written assessments led to the modernist invention of accountability. We want to believe that if we can measure it, we can manage it. But Goodhart's Law tells us that the moment we try to manage it, the measurement breaks. It is a law of unintended consequences that seems inescapable because it is rooted in the fundamental nature of human agency. We are not passive variables in an equation; we are active participants who will always find a way to game the system if the stakes are high enough.

The lesson for the modern reader, especially in an age of data-driven decision-making, is not to abandon metrics. Metrics are essential for navigation. The lesson is to remain skeptical of any metric that is elevated to the status of a target. We must recognize that a statistical relationship observed in the past is not a guarantee for the future, especially when the future is shaped by the policy itself. As David Manheim noted in a 2016 analysis, measurement is hard because it requires a deep understanding of the system, not just a surface-level count. It requires the wisdom to know when a number is a tool and when it has become a master.

The history of Goodhart's Law is a history of human ingenuity colliding with bureaucratic rigidity. From the monetary policies of the 1970s to the research assessments of the 2020s, the pattern repeats. We set a target. We watch the number go up. And then we wonder why the world has gotten worse. The answer is that the number was never the reality; it was just a proxy, and once we made it the goal, we broke the proxy. We are left with a world where the map is no longer a guide to the territory, but a destination in itself. And in that destination, the true value of what we are trying to measure is lost forever.

The solution is not to stop measuring, but to diversify our measurements, to prioritize qualitative judgment, and to understand that no single number can capture the complexity of human endeavor. We must resist the tyranny of the metric. We must remember that when a measure becomes a target, it ceases to be a good measure. It is a warning that has been written in the blood of failed policies and wasted resources, yet we continue to ignore it. Perhaps the only way to break the cycle is to accept that some things cannot be measured, and that the things we can measure are not the only things that matter.

In the end, Goodhart's Law is a reminder of our own limitations. It is a testament to the fact that the world is too complex to be reduced to a spreadsheet. It is a call to humility in the face of our own desire for control. And it is a challenge to the next generation of policymakers, scientists, and leaders to find a way to govern without breaking the very tools they use to govern. The law is simple, but its application is profound. It is the story of how we try to capture the wind in a net, only to find that the wind has changed direction the moment we pulled the net tight.

The legacy of Charles Goodhart, Donald Campbell, and Jerome Ravetz is not just a list of academic papers. It is a warning that echoes through every organization, every government, and every individual who has ever been asked to perform to a number. It is the realization that the map is not the territory, and that when we make the map our goal, we lose the territory. And in a world that is increasingly driven by data, that is a loss we cannot afford to ignore.

The future of accountability may depend on our ability to learn this lesson. To build systems that are robust to gaming, to design metrics that are resilient to manipulation, and to recognize that the pursuit of a number is not the same as the pursuit of value. It is a difficult path, one that requires constant vigilance and a willingness to question the very tools we use to measure our success. But it is the only path that leads to a future where the metrics serve us, rather than the other way around. Goodhart's Law is not just a law of economics; it is a law of human nature. And until we learn to respect it, we will continue to be its victims.

The story of Goodhart's Law is still being written. Every time a new metric is introduced, every time a new target is set, the law asserts itself. It is a silent force, working in the background of every decision, shaping the outcome in ways we often do not see until it is too late. But now, armed with the knowledge of its history and its mechanism, we can begin to see it coming. We can recognize the signs. And perhaps, just perhaps, we can find a way to navigate the complex terrain of the modern world without getting lost in the numbers. The measure is not the target. The target is not the measure. And the truth lies somewhere in between, in the messy, unquantifiable reality of human life. It is there that the real work begins.

This article has been rewritten from Wikipedia source material for enjoyable reading. Content may have been condensed, restructured, or simplified.