
Scientific research has big problems, and it's getting worse

Sabine Hossenfelder delivers a scathing indictment not of scientific fraud, but of the system that rewards uselessness. While most observers focus on the rare cases of outright lying, she argues that the true crisis is a broken incentive structure that forces honest researchers to produce "mathematical fiction" just to survive. For anyone who relies on science to guide policy or investment, this distinction is critical: the problem isn't that scientists are lying, it's that they are playing a game where the winning move is to say nothing of value.

The Hierarchy of Failure

Hossenfelder begins by categorizing the rot in academic research, starting with the most visible but least damaging issue: individual misconduct. She acknowledges that cases like the Harvard survey fraud or the superconductor deception by Ranga Dias are terrible, yet she insists they are statistical outliers. "The most visible problems in scientific research are in some sense the least important ones," she says. This framing is effective because it immediately shifts the reader's attention from the sensational headlines to the structural rot underneath. It suggests that policing individual bad actors is a distraction from the real disease.
She then moves to a more insidious threat: organized scams. These are "paper mills" where authorships and citations are sold, often utilizing AI to generate fake data and images. Hossenfelder notes that while this used to be concentrated in specific regions, it is now spreading globally. "We'll likely see more of this with AI becoming better," she warns. This is a sobering prediction, as the sophistication of these scams threatens to flood the literature with noise that is indistinguishable from signal to automated tools. However, she quickly pivots to her main thesis: these scams are merely symptoms of a deeper economic pressure.

The winning strategy in science is to be useless.

This stark declaration cuts through the usual academic platitudes. Hossenfelder argues that researchers aren't necessarily trying to be useless; they are responding rationally to a system that rewards citation counts over societal impact. She paraphrases the logic of the modern academic: create "useless garbage that the public doesn't understand or doesn't care about" but that satisfies the narrow criteria of peer reviewers. This creates a feedback loop where the only way to get funding is to produce papers that look like science but lack substance. Critics might argue that she underestimates the number of researchers who are genuinely trying to push boundaries, but her point holds: the system filters out such risk-taking in favor of safe, incremental, and often trivial output.

The Economics of Stagnation

The core of Hossenfelder's argument is that the current model has turned science into a "planned economy" where researchers chase funding trends rather than truth. She cites the pressure to publish at earlier stages in a career, creating a "race to the bottom." "Scientists come to think of useless paper production as a necessary evil on the way to a breakthrough that never happens," she observes. This is a devastating critique of the tenure track, suggesting that the very mechanism designed to identify talent is actually training scientists to be mediocre.

She draws on the work of economist Paula Stephan to highlight how the system exploits early-career researchers. The prevailing mantra has shifted from "publish or perish" to "funding or famine," a phrase Hossenfelder attributes to Stanford professor Stephen Quake. This economic reality discourages risk-taking. As Nobel laureate Roger Kornberg is quoted, "if the work that you propose to do isn't virtually certain of success, then it won't be funded." This creates a paradox where the only way to get funded is to promise results that are already known, effectively freezing progress. The argument is compelling because it explains why scientific progress has slowed despite record levels of spending.

Hossenfelder also points out the lack of self-correction in various fields. In psychology, flawed statistical methods persisted for decades because fixing them would have made publishing harder. "Why didn't psychologists and sociologists do anything about it? Because that would have been effort and that would have made it more difficult for them to publish papers," she says. This admission of collective inertia is rare in scientific discourse. It suggests that the community values its own comfort and career security over the integrity of its findings. While some might argue that science eventually self-corrects, Hossenfelder's evidence suggests that without external pressure, the correction never comes.

The Trust Deficit

Perhaps her most controversial claim is a direct challenge to the public's trust in the scientific establishment. She argues that the top 0.1% of scientists, who are insulated from the worst pressures, cannot speak for the 99.9% who are grinding in the system. "I don't trust scientists because I know how the system skews their interests," she states bluntly. This is a bold move for a science communicator, as it risks alienating the very audience she hopes to inform. However, she justifies it by arguing that blind trust is dangerous when the incentives are misaligned.

She contrasts the self-reflection of psychologists with the stagnation in physics, where "mathematical gymnastics" have replaced empirical grounding. The lack of consequences for making wrong predictions for decades means that the community has no mechanism to purge bad ideas. "If a community ends up making wrong predictions for decades, they should see consequences," she argues, calling for deliberate measures to prevent economic pressure from dictating research directions. This is a call for structural reform, not just moral suasion.

The system that has evolved discourages faculty from pursuing research with uncertain outcomes.

This quote encapsulates the tragedy of the modern research landscape. Hossenfelder suggests that the solution lies in changing the rules of the game, not just the players. She notes that while many have proposed fixes, "nothing has changed." The inertia is immense, and the people most invested in the status quo are the ones with the most to lose from reform.

Bottom Line

Hossenfelder's most powerful contribution is her distinction between individual fraud and systemic uselessness, arguing that the latter is far more damaging because it is rewarded by the system. Her biggest vulnerability is her sweeping generalization about the entire scientific community, which may overlook the quiet heroes working against the grain. Readers should watch for whether the proposed structural reforms can gain traction in a political climate that demands scientific certainty but refuses to fund the risky work that actually produces it.

Sources

Scientific research has big problems, and it's getting worse

by Sabine Hossenfelder

I've had a lot of conversations recently about what's going wrong with scientific research, and that's a good thing. It's good we're talking about it, though I'm a little surprised it turned out to be so controversial. But this made me realize that I've confused myself and potentially everybody else by mixing together a bunch of different things. The most visible problems in scientific research are in some sense the least important ones.

Misconduct and fraud. These cases make a lot of headlines, but they're rare. Like we had this honesty researcher at Harvard who was accused of having faked survey responses. We have Ranga Dias who was accused of having faked superconductor measurements and several other prominent examples.

And yes, this is terrible, but you'll find some rotten eggs in any profession. There are circumstances that may make this more likely. And I wonder whether science has such circumstances, but ultimately I think there's no way to avoid it. The second problem with scientific research is far less visible.

It's organized scams. These are becoming an increasing problem. These scams include so-called paper mills that are networks of people who sell off paper authorships or citations for money, but also just networks of pseudoscientists who crank out fake papers with fake data and fake images. The methods these people are using are becoming more and more sophisticated and include things like planting fake papers online to generate profiles of imaginary researchers, letting AI write their papers, and generating images to try and attract citations by quoting themselves on Wikipedia.

This used to be a thing which happened predominantly in some eastern countries, notably China and India, but it's been spreading west in recent years with cases showing up in Europe and the Americas. We'll likely see more of this with AI becoming better. And this is a growing problem, but it's not the major problem I worry about, at least not yet. It's however indicative of the much bigger problem, because you can ask, and I think you should ask, why do people do this?

Why do people buy authorships and citations to pretend to be scientists? What's the point? The answer is quite simple. Because it's a good investment.

They spend money on fake papers with fake citations and then they can make more money by using this fake research to get grants or find a well-paid job. ...