The Origin Story
This piece stands out because it tells the story most people don't know: where IQ actually came from, and what it's supposed to measure. Derek Muller does something few creators do — he takes us inside an official IQ test while explaining the century-long debate over what intelligence means.
Muller writes that "the correlations weren't perfect" — this is the crucial detail that makes Spearman's discovery stick. He explains that after finding students who did well in math also did well in English, "Spearman proposed that each person has some level of general intelligence, what he called the G Factor." The G Factor was meant to capture "how quickly students could learn new material, recognize patterns and think critically regardless of the subject matter."
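To see why imperfect-but-positive correlations suggest a single underlying factor, here is a minimal toy simulation (my illustration, not Muller's, using invented scores): three subject scores are generated from one latent ability plus noise, and the largest eigenvalue of the correlation matrix ends up carrying most of the shared variance, which is roughly what Spearman's g formalizes.

```python
# Toy illustration (not from the video): if one latent ability drives all
# subjects, pairwise correlations come out positive but imperfect, and a
# single shared factor dominates. All scores below are invented.
import numpy as np

rng = np.random.default_rng(0)
g = rng.normal(size=500)                        # hypothetical general ability
noise = rng.normal(scale=0.6, size=(500, 3))    # subject-specific noise
scores = g[:, None] + noise                     # "math", "English", "music" scores

corr = np.corrcoef(scores, rowvar=False)        # pairwise correlations
eigvals = np.linalg.eigvalsh(corr)              # eigenvalues, ascending
print(corr.round(2))
print("variance on the first factor:", round(eigvals[-1] / eigvals.sum(), 2))
```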
This is the intellectual heart of the piece. The author makes a compelling case that IQ isn't just a number — it's a theoretical construct designed to explain why scores across subjects correlate. What makes this argument land well is how Muller builds it: he starts with school grades, then expands into mental age, then shows how modern tests normalize scoring so "the mean was 100 and the standard deviation was 15."
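The normalization step Muller describes is easy to make concrete. A minimal sketch, assuming some invented raw scores:

```python
# Rescale raw test scores so the population mean is 100 and the standard
# deviation is 15, the convention Muller describes. Raw scores are invented.
import numpy as np

raw = np.array([12.0, 18, 22, 25, 31, 35, 40])   # hypothetical raw scores
z = (raw - raw.mean()) / raw.std()               # standardize: mean 0, SD 1
iq = 100 + 15 * z                                # IQ scale: mean 100, SD 15
print(iq.round(1))
```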
The Predictor Problem
The most surprising evidence Muller brings isn't about test design — it's about what IQ actually predicts. He cites a 2005 meta-analysis showing "a correlation of .33 between IQ and brain size" — a modest association, not a one-to-one link between brain volume and intelligence. The educational data is stronger: "their performance on an IQ test when they were 11 correlated with their performance 5 years later on the GCSE" at about .8 — "that's an extremely high correlation." Since variance explained is the square of the correlation, roughly 64% (about two-thirds) of the variation in school exam scores could be predicted by IQ tests taken five years prior.
The income data is weaker, though Muller doesn't hide this. He cites a meta-analysis finding "the correlation between IQ and income to be .21", which squares to only about 4.4% of variance explained. His paraphrase captures the key insight: "economically intelligence is not necessarily that highly rewarded." This is the kind of honest admission that makes the piece trustworthy — he's not overselling the predictive power.
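The jump from a correlation to "variance explained" is just squaring; this quick check verifies the figures used in the two paragraphs above:

```python
# Variance explained is the square of the correlation coefficient (r**2).
for label, r in [("IQ vs. brain size", 0.33),
                 ("IQ at 11 vs. GCSE five years later", 0.80),
                 ("IQ vs. income", 0.21)]:
    print(f"{label}: r = {r:.2f} -> variance explained = {r**2:.1%}")
# prints 10.9%, 64.0% (about two-thirds), and 4.4% respectively
```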
But the mortality finding is stark: for every 15-point increase on the IQ test, you are "27% more likely to still be alive at age 76."
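One way to read that figure, assuming the 27% applies multiplicatively per 15-point step (my assumption; the source states only the single step):

```python
# Sketch of how the survival-odds figure compounds IF it scales
# multiplicatively per 15 IQ points. That scaling is an assumption here,
# not a claim made in the video.
for points in (15, 30, 45):
    factor = 1.27 ** (points / 15)
    print(f"+{points} IQ points -> odds of reaching 76 scale by about x{factor:.2f}")
```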
A final note on test design: all the questions on an IQ test are completed under time pressure, with perhaps only around 10 to 30 seconds per question.
The Dark History
Here's where the piece does its most important work: IQ has a dark history, and Muller doesn't gloss over it. "In France Binet believed intelligence could be improved through education" — the original test was designed so struggling students could get help to catch up. But "in the US the modified test was given to adults to rank them by intelligence." This shift from helping to ranking is the crux of the critique.
Muller notes that "researchers like Spearman believed that g was unchangeable" — that whatever general intelligence you were born with, you would have for the rest of your life. Critics might note this oversimplifies the nature of cognitive development — modern neuroscience shows neuroplasticity allows brains to change well into old age.
Bottom Line
Derek Muller's strongest contribution is making the abstract concrete: he takes an IQ test himself, walks through Raven's Progressive Matrices, and explains correlation coefficients with actual numbers. The piece's biggest vulnerability is that it sometimes prioritizes breadth over depth — the history of IQ testing deserves a full episode, not a subsection. The reader should watch for this: the debate over whether intelligence is fixed or trainable remains unresolved, and it's far more contested than this piece suggests.