
Reading comprehension studies can mislead us about what works

The Meta-Analysis That Misses the Forest for the Trees

A newly published meta-analysis of 71 reading comprehension studies, led by Canadian teacher Nathaniel Hansford, took three and a half years to complete. It screened over 1,500 studies and re-ran analyses countless times. Education writer Natalie Wexler finds the whole enterprise of questionable value, and her critique exposes a fundamental tension in how reading research gets conducted, interpreted, and weaponized in curriculum debates.

The core problem, as Wexler sees it, is not just what Hansford found. It is what the underlying studies were capable of finding in the first place.


Short Studies, Long Payoffs

Wexler's sharpest point targets the mismatch between study duration and the time knowledge-building actually takes to produce measurable results. None of the knowledge-building studies in Hansford's database lasted more than 45 weeks. That is a serious constraint when the intervention in question requires students to accumulate broad academic knowledge over years, not weeks.

Students need to acquire a critical mass of general academic knowledge and vocabulary before they're equipped to understand texts on unfamiliar topics.

This is not a minor methodological quibble. It goes to the heart of whether short-duration studies can tell us anything meaningful about knowledge-building at all. Wexler points to two peer-reviewed studies, neither included in Hansford's database, that found significant positive effects on standardized measures, but only after three years of implementation.

Given their short duration, it would have been surprising if any of them had turned up a medium or large effect size on a standardized measure.
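To make the duration argument concrete, here is a rough statistical power sketch in Python. The per-year gain is a purely illustrative assumption, not a figure from Wexler or Hansford; the sample-size formula is the standard two-group normal approximation for a randomized trial.

    import math

    # Illustrative assumption (not from the article): knowledge-building adds
    # roughly 0.07 standard deviations per school year on a standardized
    # comprehension measure, accumulating slowly rather than appearing at once.
    GAIN_PER_YEAR = 0.07
    WEEKS_PER_YEAR = 36  # approximate instructional weeks in a school year

    def required_n_per_group(d, z_alpha=1.96, z_beta=0.84):
        """Students per group needed to detect effect size d
        (two-sided alpha = 0.05, power = 0.80, normal approximation)."""
        return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

    for weeks in (30, 45, 108):  # 108 weeks is roughly three school years
        d = GAIN_PER_YEAR * weeks / WEEKS_PER_YEAR
        print(f"{weeks:>3} weeks: expected d = {d:.2f}, "
              f"~{required_n_per_group(d):,} students per group to detect it")

Under these assumptions, a 45-week study would need on the order of two thousand students per group to reliably detect the accumulated effect, while a three-year study could get by with a few hundred. If anything like this is true, null results from short studies say little about knowledge-building.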

The omission of these longer studies is puzzling. Hansford told Wexler via Twitter that he "can't comment on any specific studies' inclusion status," which is not exactly a satisfying explanation for a meta-analysis that claims to assess whether long-term implementation matters.

The Effect Size Shell Game

Wexler identifies another layer of the problem: how Hansford classifies what counts as a meaningful result. He interpreted effect sizes using benchmarks similar to Cohen's conventional ones, which set the bar for a "small" effect size at 0.2. Education researcher Matthew Kraft has argued that this standard is unrealistic for education interventions, where the vast majority of studies never reach that threshold.

Hansford dismisses one meta-analysis, which he says is often used to provide support for knowledge-building, as showing only "small" effect sizes on comprehension. But the study's authors characterize the effect sizes as "large."

The discrepancy comes down to which yardstick you choose. Under Kraft's framework, designed specifically for education research using standardized measures, an effect size of 0.20 qualifies as large. Under Cohen's general framework, it barely registers as small. The choice of framework is not neutral. It determines whether knowledge-building looks like a failure or a success.
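Here is a minimal sketch of how the framework choice flips the verdict. The cutoffs follow Cohen's general conventions (0.2 small, 0.5 medium, 0.8 large) and Kraft's proposed benchmarks for standardized achievement measures (below 0.05 small, 0.05 to 0.20 medium, 0.20 and above large); the group statistics are made up for illustration.

    def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
        """Standardized mean difference using a pooled standard deviation."""
        pooled_var = ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
        return (mean_t - mean_c) / pooled_var**0.5

    def classify(d, benchmarks):
        """Return the first label whose threshold d meets or exceeds."""
        for threshold, name in benchmarks:
            if d >= threshold:
                return name
        return "below small"

    COHEN = [(0.8, "large"), (0.5, "medium"), (0.2, "small")]    # general conventions
    KRAFT = [(0.20, "large"), (0.05, "medium"), (0.00, "small")]  # education measures

    # Hypothetical group statistics that yield the effect size at issue:
    d = cohens_d(mean_t=52.0, mean_c=50.0, sd_t=10.0, sd_c=10.0, n_t=100, n_c=100)
    print(f"d = {d:.2f}: Cohen -> {classify(d, COHEN)}, Kraft -> {classify(d, KRAFT)}")
    # d = 0.20: Cohen -> small, Kraft -> large

The same number earns the lowest label under one set of cutoffs and the highest under the other, which is the entire dispute in miniature.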

Hansford argued on Twitter that Kraft's benchmarks apply only to large-scale, independently funded randomized controlled trials on older students. Wexler notes that Kraft's own paper describes its scope more broadly, as covering "causal research that evaluates the effect of education interventions on standardized student achievement." That is a meaningful disagreement about the rules of the game, not a technicality.

Strategies Without Knowledge Are Empty

The meta-analysis found relatively strong evidence for reciprocal teaching, a method that incorporates four comprehension strategies: predicting, questioning, clarifying, and summarizing. Some commentators, like education writer David Didau, called this finding "inconvenient for advocates of 'knowledge-rich' approaches to education." Wexler pushes back hard.

The value of strategies is that they get students to pay attention to text they've read and ask questions about it. That can boost comprehension. But without a certain level of relevant knowledge, students won't be able to answer the questions they're asking, no matter how hard they try.

This is the central argument, and it is a strong one. Strategies are tools. Tools require material to work on. A student who knows nothing about the French Revolution cannot summarize a passage about it, no matter how many times she has practiced the summarizing strategy on other texts.

Readers with ample knowledge of the topic are likely to make inferences automatically, while those who lack such knowledge may find it impossible.

It is worth noting, though, that Wexler's framing sometimes risks understating the independent contribution of strategy instruction. Reciprocal teaching has a substantial evidence base precisely because it works across a range of knowledge levels. The question is not whether strategies help, which they clearly do in many contexts, but whether they help enough when knowledge is thin. That is a difference of degree, not of kind, and the research landscape is murkier than either camp tends to acknowledge.

The False Binary

Wexler is careful to avoid the either-or framing that plagues this debate. She has argued repeatedly that it is not about choosing between strategies and knowledge. The real question is what occupies the foreground of instruction.

It's about putting a particular text or topic in the foreground and bringing in whatever strategies, or skills or literacy standards, are appropriate to enable students to make meaning from it. That's different from the typical approach, which is to use a text on some random topic to try to teach a particular strategy.

This distinction matters enormously in practice. When strategy instruction dominates the literacy block, subjects like social studies and science get squeezed out. Wexler reports seeing this pattern repeatedly in American schools.

I've seen and heard of many instances of skill-and-strategy instruction being way overdone, but I haven't encountered reports of American kids getting too much knowledge-building.

The imbalance is real, and it explains why knowledge advocates have targeted strategy instruction so aggressively. Whether some have gone too far in that direction, effectively posting "No Strategies Allowed" signs, as critic Harriet Janetos charges, is a fair question. Wexler concedes the point indirectly while defending the movement's broader logic.

What the Research Actually Needs

Wexler closes with a structural critique of reading comprehension research itself. The problem is not just this particular meta-analysis. It is that the entire research ecosystem is biased toward short, cheap studies of discrete interventions, which inherently favors strategy instruction over knowledge-building.

There will always be many more studies of strategy instruction than of knowledge-building, not because strategy instruction in the abstract is better, but because it's easier to see quick results, making it cheaper to study and more attractive to researchers.

What the field actually needs, she argues, are head-to-head comparisons of actual curricula lasting three years or more. Those studies are expensive and logistically difficult, which is precisely why they rarely happen. The result is a literature that systematically underrepresents the most promising interventions.

She also flags a significant gap in Hansford's analysis: the connection between writing instruction and reading comprehension. Existing research on this link is excluded from the meta-analysis entirely, despite evidence that explicit writing instruction is one of the most effective ways to build the complex syntax knowledge and content retention that reading comprehension requires.

Bottom Line

Wexler makes a persuasive case that Hansford's meta-analysis, however laboriously constructed, is built on a foundation of studies too short and too narrow to assess what knowledge-building can actually do. The omission of longer-term studies that did show transfer effects, combined with an effect size framework that sets unrealistically high bars for education interventions, tilts the playing field against the very approach that most needs long-term evaluation. Her core insight is sound: reading comprehension is built over years through accumulated knowledge, and any research program that tries to measure it in weeks is measuring the wrong thing. The field needs better studies, not more meta-analyses of flawed ones.


Sources

Reading comprehension studies can mislead us about what works

by Natalie Wexler

Nathaniel Hansford, a Canadian teacher and education blogger, is the lead author of a newly published meta-analysis of 71 studies of reading comprehension. It’s the result, he writes, of three and a half years spent screening over 1,500 studies and re-running analyses countless times—which involved “learning far more statistics than [he] ever planned to.” As far as I can tell, Hansford doesn’t have a formal background in statistics or mathematics, so this is an impressive achievement.

But the meta-analysis fails to provide much useful guidance. Do the benefits of comprehension strategy instruction plateau after just a few hours, as some have argued? Maybe yes, maybe no. Ditto, at least according to Hansford, for whether the benefits of building students’ knowledge increase the longer the process continues.

In any event, given that the knowledge-building interventions in the studies didn’t involve coherent knowledge-building curricula—they just incorporated some science or social studies content into literacy instruction—Hansford cautions that “conclusions drawn here should not be interpreted as direct evidence for or against knowledge-building models.”

Still, the meta-analysis contains a strong undercurrent of skepticism about the benefits of knowledge-building for reading comprehension, and it’s something other commentators have picked up on.

On his Substack, David Didau has described Hansford’s finding of relatively strong evidence for reciprocal teaching—which incorporates four comprehension strategies—as “inconvenient for advocates of ‘knowledge-rich’ approaches to education.” Harriet Janetos, another education Substacker, took the release of the meta-analysis as an occasion to criticize the “knowledge-building movement” for “setting up No Strategies Allowed signs” rather than identifying the strategies that are most likely to be helpful.

Speaking as an advocate of knowledge-building, I don’t consider the findings in Hansford’s meta-analysis “inconvenient.” Rather, I find the whole endeavor of questionable value, partly because of the faulty premises of the underlying studies and partly because I believe the meta-analysis understates the evidence for knowledge-building.

I also don’t think it’s fair to say that the knowledge-building movement has set up “No Strategies Allowed” signs. Of course, there’s no official leader of the movement and consequently no single disciplined message. Some knowledge advocates may have argued, or appeared to argue, that all strategy instruction should be verboten.

In any event, it’s understandable that knowledge advocates have targeted skill-and-strategy instruction, given that it routinely displaces subjects like social studies and science from the curriculum. I’ve seen and heard of many instances of skill-and-strategy instruction being way overdone, but I ...