How medical research gets it wrong

In an era where a single social media post can derail public health policy, Rohin Francis offers a masterclass in why we cannot trust our gut instincts when evaluating medical claims. This piece is not merely a list of errors; it is a forensic dismantling of the systemic flaws that turn legitimate science into dangerous noise. Francis argues that the problem isn't a lack of data, but a pervasive inability to distinguish between correlation and causation, a failure that costs lives and erodes trust in medicine itself.

The Architecture of Error

Francis opens by defining the enemy: bias, which he describes as a "systematic error in medical research that leads to an inaccurate result." He posits that while the best studies work tirelessly to minimize these errors, bad studies often ignore them entirely, yet their findings frequently bubble up to the front page of major news outlets. This creates a feedback loop where "millions of people have been given misleading or just completely incorrect medical information." The urgency here is palpable; Francis suggests that confirmation bias is the engine driving this "car crash of a nightmare world," fueling everything from anti-vaccine movements to climate denialism.

The author's framing of confirmation bias is particularly sharp. He notes that it is not just a personal failing but a structural one, affecting "how trials are commissioned, how they are designed, how they are funded and how they are interpreted." This moves the blame from the individual reader to the very machinery of scientific inquiry. Critics might argue that Francis places too much weight on individual cognitive failures rather than institutional incentives, but his point stands: if the data collection is skewed by what researchers want to find, the result is inevitable.

The Trap of Representation

The commentary then shifts to selection bias, a flaw where the study population does not reflect the real world. Francis uses a humorous but illustrative analogy of studying the popularity of deep-fried Mars bars by calling random numbers from a phone book, only to accidentally sample exclusively from Scotland. He translates this to medicine with a sobering reality: "about three-quarters of medical trial participants through the years have been male but yet the findings are applied to everyone." This is a critical vulnerability in modern pharmacology. If the data comes from a narrow slice of humanity, applying those results to the whole is not just imprecise; it is potentially lethal.

Francis highlights the specific danger to elderly patients, noting that trials often focus on those aged 65 to 70, leaving a massive gap in knowledge for the 90-year-old demographic. He writes, "it's almost impossible to avoid selection bias in medical trials because the very patients that sign up for medical trials are probably not reflective of the general population anyway." This admission is uncomfortable but necessary. It forces the reader to question the applicability of any headline claiming a "cure" without knowing who was actually tested.
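The selection-bias problem Francis describes can be made concrete with a small simulation. This is a minimal sketch with hypothetical numbers (the 10 mmHg and 4 mmHg effects, the measurement noise) invented purely for illustration; only the "about three-quarters male" figure comes from the source.

```python
import random
import statistics

random.seed(0)

# Hypothetical illustration: suppose a drug lowers blood pressure by 10 mmHg
# in men but only 4 mmHg in women. In a 50/50 population, the true average
# effect is 7 mmHg.
def trial_effect(p_male, n=10_000):
    """Average measured effect when a fraction p_male of participants are men."""
    effects = []
    for _ in range(n):
        is_male = random.random() < p_male
        true_effect = 10 if is_male else 4
        effects.append(random.gauss(true_effect, 2))  # add measurement noise
    return statistics.mean(effects)

representative = trial_effect(p_male=0.5)   # sample mirrors the population
skewed = trial_effect(p_male=0.75)          # ~three-quarters male, as in the video

print(f"representative sample: {representative:.1f} mmHg")
print(f"75%-male sample:       {skewed:.1f} mmHg")
# The skewed trial overstates the drug's benefit for the population
# the result will actually be applied to.
```

The gap between the two estimates is exactly the "critical vulnerability" Francis describes: nothing about the skewed trial is fraudulent, yet its headline number is wrong for half the people who will take the drug.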

Correlation is not causation. Correlation is not causation. Correlation is not causation.

The Illusion of Cause

Perhaps the most dangerous error Francis dissects is confounding, the failure to account for hidden variables that influence outcomes. He revisits the classic 1970s case of hormone replacement therapy (HRT), where observational studies suggested the drug reduced heart disease. In reality, women taking HRT were simply wealthier and healthier to begin with. "The observational study didn't take into account confounders," Francis explains, leading to national guidelines that ultimately caused harm. The lesson is stark: you cannot infer causation from an observational study, no matter how compelling the pattern appears.

He extends this logic to modern wellness culture, mocking the idea that a bizarre product like a "Mongolian mango and Sudanese saffron enema" could cause weight loss. The reality, as he points out, is that the people buying such products are already "health-conscious, healthy, affluent, active and, of relevance to this particular study, slim." This section effectively bridges the gap between academic statistics and the pseudoscience peddled by influencers. The argument holds up because it relies on a fundamental statistical principle that is routinely ignored in favor of a good story.
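The confounding mechanism behind both the HRT story and the wellness-product example can be sketched in a few lines. All the probabilities below are hypothetical, chosen only to make the pattern visible: affluence drives both product use and good health, while the product itself does nothing.

```python
import random
import statistics

random.seed(1)

# Hypothetical sketch: being affluent/health-conscious both (a) makes someone
# more likely to buy the supplement-of-the-week and (b) independently raises
# their odds of a good health outcome. The product has zero effect.
def simulate(n=50_000):
    took, outcome, affluent = [], [], []
    for _ in range(n):
        is_affluent = random.random() < 0.5
        takes_product = random.random() < (0.7 if is_affluent else 0.2)
        # Outcome depends only on affluence, never on the product.
        good_outcome = random.random() < (0.9 if is_affluent else 0.6)
        took.append(takes_product)
        outcome.append(good_outcome)
        affluent.append(is_affluent)
    return took, outcome, affluent

took, outcome, affluent = simulate()

def rate(pred):
    """Share of good outcomes among people matching the predicate."""
    sel = [o for t, o, a in zip(took, outcome, affluent) if pred(t, a)]
    return statistics.mean(sel)

# Naive observational comparison: the useless product looks like it works.
print("users:    ", round(rate(lambda t, a: t), 2))
print("non-users:", round(rate(lambda t, a: not t), 2))
# Stratify by the confounder and the 'effect' vanishes within each stratum.
print("affluent users:    ", round(rate(lambda t, a: t and a), 2))
print("affluent non-users:", round(rate(lambda t, a: not t and a), 2))
```

The naive comparison shows a large gap between users and non-users; holding affluence fixed makes it disappear. That is the entire HRT lesson in miniature: the pattern was real, but the cause was not the drug.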

The Silence of Negative Results

Francis also tackles the invisible problem of publication bias, where journals prefer to print positive results over negative ones. He argues that this skews the entire medical landscape, creating a false impression of efficacy. "Nobody wants to read a study saying that we looked at the effects of standing on one foot and found that it had no effect on impotence, because it's not very interesting," he writes. Yet this silence is where the truth often hides. If only successful trials are published, the medical community is operating on a distorted dataset, believing treatments work when they might not.
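Publication bias is easy to demonstrate numerically. In this sketch (trial counts, sample sizes, and the crude "publish if the effect looks big" filter are all assumptions for illustration), a treatment with zero true effect still ends up looking effective if you only ever read the published subset.

```python
import random
import statistics

random.seed(2)

# 200 small trials of a treatment whose true effect is exactly zero.
def run_trial(n=30):
    """Mean difference between treated and control groups in one trial."""
    treated = [random.gauss(0, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

all_trials = [run_trial() for _ in range(200)]
# Crude stand-in for journal behavior: only strikingly positive results
# get 'published'.
published = [e for e in all_trials if e > 0.4]

print(f"mean effect, all trials:       {statistics.mean(all_trials):+.2f}")
print(f"mean effect, published trials: {statistics.mean(published):+.2f}")
# Read only the literature, and the useless treatment looks effective.
```

Averaged over every trial, the effect is near zero; averaged over the published ones, it is strongly positive. This is why negative results, boring as they are, carry real information.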

He further explores detection and performance bias, where patients in the treatment group receive better care or more scrutiny than those in the control group. While double-blinding is the solution, Francis notes it is not always possible, especially in surgical trials. This leaves room for "observer bias," where a doctor's belief in a treatment can inadvertently influence the patient's reported outcome, particularly with "soft endpoints" like happiness or pain levels. The distinction between hard endpoints (death, heart attacks) and soft ones is crucial for the busy reader to understand when evaluating news claims.

The Randomness of Life

Finally, Francis addresses regression to the mean, a concept often misunderstood even by medical professionals. He uses the hospital superstition that uttering the word "quiet" during a calm shift will summon chaos to illustrate how random fluctuations are mistaken for cause and effect. When an hour is unusually quiet, the next is statistically likely to be closer to the average, regardless of any verbal intervention. "Regression to the mean dictates that the next hour is likely to be closer to that average of 50," he explains. This is a vital reminder that not every change is the result of an intervention; sometimes, things simply return to normal.
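Regression to the mean falls straight out of a simulation. The average of 50 per hour is the figure quoted in the video; the spread and the "unusually quiet" threshold are assumptions made for illustration. No intervention happens anywhere in this code, yet quiet hours are reliably followed by busier ones.

```python
import random
import statistics

random.seed(3)

# Workload per hour fluctuates randomly around an average of 50.
hours = [random.gauss(50, 10) for _ in range(100_000)]

# Pick out the unusually quiet hours and pair each with the hour that follows.
quiet_pairs = [
    (hours[i], hours[i + 1])
    for i in range(len(hours) - 1)
    if hours[i] < 35  # arbitrary 'unusually quiet' cutoff
]

quiet_avg = statistics.mean(q for q, _ in quiet_pairs)
next_avg = statistics.mean(n for _, n in quiet_pairs)

print(f"average during 'quiet' hours:  {quiet_avg:.1f}")
print(f"average in the following hour: {next_avg:.1f}")
# The next hour drifts back toward 50 whether or not anyone said 'quiet'.
```

If a superstitious colleague happened to say "quiet" during those selected hours, the subsequent surge back toward 50 would look like a jinx. The data were always going to do that.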

Bottom Line

Rohin Francis delivers a compelling, accessible critique of how medical research is misinterpreted, successfully translating complex statistical concepts into a warning against intellectual laziness. His strongest asset is the relentless focus on the gap between what studies show and what they actually mean, particularly regarding confounding variables and selection bias. The piece's only vulnerability is its reliance on the reader's willingness to accept that their own intuition is flawed, a hard pill to swallow in a world that rewards certainty. For the busy professional, the takeaway is clear: treat every medical headline with skepticism until you know who was studied, what was controlled, and what was left out.

Sources

How medical research gets it wrong

by Rohin Francis · Medlife Crisis

I'm on nights once again, so to fend off the fatigue I thought I'd set myself a challenge. One of my main objectives with the If It Looks Like a Quack series is to help people think critically about the medical claims they hear in the press or read online: the stuff about blueberries giving you a strong heart, turmeric curing arthritis, and caffeine both curing and causing cancer, the so-called Schrödinger's carcinogen. There are many ways that medical research can be misinterpreted and people can be misled. I will deal with as many of those as I can on this channel, but for this video I just wanted to concentrate on bias. So the challenge I've set myself for this video is to name as many types of bias as I can as I go about my work tonight.

But what actually is bias? Well, bias is a systematic error in medical research that leads to an inaccurate result. This could be something like a problem with the design of a study, or it could be incorrect interpretation of the results. Bias is essentially ubiquitous, meaning it's impossible to avoid completely, but this doesn't mean that we need to chuck out all medical research, because the best research papers and studies take great pains to minimize the amount of bias. Bad studies don't even bother. The problem is these data get published, and before you know it they've been reprinted in the Daily Mail or Mercola or some other website that peddles nonsense, and millions of people have been given misleading or just completely incorrect medical information.

So let's go. I'll kick off with a big one: confirmation bias. The reason I'm starting with this is because I think it's one of the main reasons we live in this car crash of a nightmare world we currently find ourselves inhabiting, because it's the reason that flat-earthers, anti-vaxxers and climate change denialists all exist. Confirmation bias is when you look for information or data that supports your existing hypothesis and you reject data that disagree with you. For example, forget those three groups, they're nutters, but in politics people are more inclined to believe news articles that agree with their viewpoint and ignore, or be more critical of, ones that disagree with them. This is relevant to medical research because it affects how trials are commissioned, how they're ...