
But what is quantum computing?

Grant Sanderson is the creator of 3Blue1Brown, one of the most popular math channels on YouTube, with millions of subscribers hungry for explanations that actually make sense. His piece stands out because it doesn't just explain quantum computing — it systematically dismantles the popular misunderstanding that has infected nearly every pop-science article on the topic. The quiz he gave to 100,000 respondents reveals something important: even people who think they understand quantum computing get this fundamentally wrong.

The Misconception Problem

Sanderson writes with precision about where the standard summary goes wrong. "In a classical computer, data is stored with bits... But in a quantum computer, you are able to represent every possible sequence of bits of some fixed length all at once in one big thing known as a superposition." This sounds correct, but it creates a dangerous implication: that quantum computers can do "whatever a classical computer would do but to all of these sequences in parallel."


This is the misconception he targets. And it's effective because he's not just pointing out an error — he's proving it with data. The quiz results show that the most common answer was O(1), and "this is wrong." He's not being condescending; he's diagnosing a systematic misunderstanding that's propagated through countless articles, videos, and even academic summaries.

Sanderson then delivers what might be the most important sentence in the entire piece: "In 1994, it was proven that a quantum computer could not possibly do any better than O of square root of N on this task." This is his corrective. The square root speedup is real — but it's not the exponential miracle cure that headlines have promised.

What Quantum Computers Actually Do

The core of his argument centers on what Grover's Algorithm actually achieves. "Searching through a bag of a million options takes on the order of a thousand steps. A bag of a trillion options takes on the order of a million." This is O(√N) in action — a genuine speedup, but not the kind that makes quantum computers sound like science fiction magic.
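To make that scaling concrete, here is a minimal back-of-the-envelope sketch (my own illustration, not code from the video) comparing the idealized Grover iteration count, roughly (π/4)·√N, with the ~N/2 checks an average classical guess-and-check search needs:

```python
import math

def grover_iterations(n_options: int) -> int:
    """Idealized Grover iteration count for one marked item among n_options.

    The optimal count is roughly (pi/4) * sqrt(N); the constant is irrelevant
    to the O(sqrt(N)) scaling being described here.
    """
    return math.ceil((math.pi / 4) * math.sqrt(n_options))

for n in (10**6, 10**12):
    print(f"N = {n:>16,}  ->  ~{grover_iterations(n):,} quantum iterations "
          f"vs ~{n // 2:,} classical checks on average")
```

For a million options that works out to a few hundred iterations versus half a million classical checks, and for a trillion options roughly 800,000 iterations versus 500 billion checks, which is exactly the "order of a thousand" and "order of a million" contrast in the quote.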

Sanderson builds this explanation carefully, starting with what he calls "the middle layer of abstraction" — probability distributions over bit strings. He's explicit about why he's postponing the underlying physics: "We're going to postpone all of the underlying physics for now, which is a little bit like teaching computer science without discussing hardware." This is smart because it keeps readers from getting distracted by experimental results before they understand the computational model.

The state vector explanation is where he earns his keep. He introduces one of quantum computing's oddest features: "it is perfectly valid for the values in this state vector to be negative. And at first you might think that has no real impact since flipping the sign doesn't change the square and therefore all the probabilities stay the same." The sign flipping, as he notes later, "plays a very central role in Grover's algorithm here."
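A toy numerical sketch (my own, not Sanderson's notation) makes the sign point tangible: negating an entry leaves every probability untouched, because probabilities come from squaring the entries, yet the vector itself has genuinely changed.

```python
# A toy state vector over the four 2-bit strings 00, 01, 10, 11.
# Entries are allowed to be negative; each probability is the square of an entry.
state = [0.5, 0.5, 0.5, 0.5]

# Flip the sign of the amplitude for the marked string (say 01), which is
# conceptually what the sign-flip step in Grover's algorithm does.
flipped = list(state)
flipped[1] = -flipped[1]

probs_before = [x * x for x in state]    # [0.25, 0.25, 0.25, 0.25]
probs_after = [x * x for x in flipped]   # identical

print(probs_before == probs_after)  # True: the sign is invisible to probabilities
print(state == flipped)             # False: the underlying vectors differ
```

The probabilities match, the vectors don't, and it is precisely that hidden difference that the later steps of the algorithm amplify.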

This is where his expertise shows. He's not just explaining quantum computing; he's showing how the mathematical structure enables the algorithm itself.

A quantum computer cannot possibly do any better than O of square root of N on this task.

The NP Problem Framing

Sanderson makes a crucial connection that many explanations miss: this problem isn't contrived. "This describes an enormous class of problems in computer science known as NP problems." He's pointing out that Grover's Algorithm applies to a massive category of computational challenges, not just artificial test cases. This is what makes the algorithm interesting — it's general-purpose within its domain.
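To see why this describes NP problems rather than a contrived puzzle, here is a minimal, hypothetical sketch of the structure involved: a checker that is cheap to run on any single candidate but reveals nothing about which candidate works, plus the classical brute-force loop that is the only option when the checker is treated as a black box (the secret value 421_337 is just a placeholder):

```python
def is_valid_key(candidate: int, secret: int = 421_337) -> bool:
    """Stands in for any NP-style verifier: cheap to evaluate on one input,
    but it says nothing about which input will succeed until you try it."""
    return candidate == secret

def classical_search(n_options: int) -> int:
    """Brute-force guess-and-check, needing about n_options / 2 calls on average."""
    for candidate in range(n_options):
        if is_valid_key(candidate):
            return candidate
    raise ValueError("no valid key in range")

print(classical_search(1_000_000))  # finds 421337 after ~421k checker calls
```

Anything with this shape, a fast verifier plus a huge search space, is in principle a candidate for Grover's square-root speedup, which is what makes the result general rather than a statement about one artificial task.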

The framing works because he acknowledges something important: "while big O runtimes are often a lot less important than other practical considerations," the fact that any NP problem can be sped up at all is itself remarkable. He's giving credit where it's due without overselling.
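One way to make "sped up at all" concrete (my gloss, not a claim from the piece): for a brute-force search over all assignments of n boolean variables, the quadratic speedup halves the exponent, so the search is still exponential but meaningfully smaller.

```latex
\[
  \underbrace{O\!\left(2^{\,n}\right)}_{\text{classical brute force}}
  \;\longrightarrow\;
  O\!\left(\sqrt{2^{\,n}}\right) \;=\; O\!\left(2^{\,n/2}\right)
  \quad \text{(Grover run against the same checker)}
\]
```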

Where the Argument Weakens

Critics might note that Sanderson's O(√N) explanation, while accurate, understates how practically significant this speedup actually is in real-world quantum computing applications. A factor of √N improvement for large N is genuinely transformative — not exponential, but meaningful. The piece could have leaned harder into why this still matters rather than framing it as something "not as earth-shattering" as an exponential speedup.

Also, his treatment of the O(log N) and O(1) answers as simply "wrong" could use more nuance. These answers aren't wrong in the way an ordinary mistaken guess is wrong; they are wrong because the task provably cannot be solved within those bounds in the quantum query model. The distinction matters for a precise understanding, but Sanderson doesn't fully explore why these answers are mathematically impossible rather than merely incorrect.
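For readers who want the precise statement behind "mathematically impossible": the 1994 result Sanderson cites (usually attributed to Bennett, Bernstein, Brassard, and Vazirani) is a lower bound on how many times any quantum algorithm must query the mystery function, stated loosely as:

```latex
\[
  T_{\text{quantum}}(N) \;=\; \Omega\!\left(\sqrt{N}\right)
  \qquad\text{while Grover's algorithm uses } \approx \tfrac{\pi}{4}\sqrt{N} \text{ queries,}
\]
\[
  \text{so the true query complexity of the task is } \Theta\!\left(\sqrt{N}\right).
\]
```

Since √N grows faster than both log N and any constant, O(log N) and O(1) aren't merely unpopular answers; no quantum algorithm can achieve them for this black-box task.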

Bottom Line

Sanderson's strongest move is using the quiz to expose a widespread misunderstanding, then carefully rebuilding the correct mental model from scratch. His biggest vulnerability is that he frames the square root speedup as somewhat disappointing without making clear how transformative even this modest improvement actually is for certain computational problems. The piece succeeds because it corrects misconceptions with data rather than just asserting them — and it's worth 15 minutes because it builds intuition through a step-by-step geometric process instead of relying on analogies that led to the confusion in the first place.


Sources

But what is quantum computing?

by Grant Sanderson

A lot of pop science outlets give a certain summary of quantum computing that I can almost guarantee leads to misconceptions. The summary goes something like this. In a classical computer, data is stored with bits, some sequence of ones and zeros. But in a quantum computer, you are able to represent every possible sequence of bits of some fixed length all at once in one big thing known as a superposition.

And sometimes the implication of these summaries is that quantum computers can be faster by basically doing whatever a classical computer would do but to all of these sequences in parallel. Now this does gesture at something that's kind of true. But let me see if I can prove to you why I think this leads to misconceptions. And I'll prove it using a quiz.

To set it up, I want you to imagine that I have a mystery function and I tell you that there's a certain secret number among all the numbers from 0 up to n minus one where if you plug that value into my function, it returns true. But if you were to plug in any other value, it returns false. And let's say you can't look at the innards of the function to learn anything about it. The only thing you're allowed to do with it is just try it out on numbers.

The warm-up question is, how many times on average would you have to apply this mystery function in order to find the secret key? Well, if the setup was in an ordinary classical computer, there's really nothing better that you can do than guess and check. You go through all the numbers, and maybe you're lucky and find it early. Maybe you're unlucky and it doesn't come up until later.

But on average, with a list of n possibilities, it takes 1/2 of n attempts to find the key. Now, in computer science, people care about how runtimes scale. If you had a list 10 times as big, how much longer would it take? And computer scientists have a way of categorizing run times.

They would call this one O of N where that big O communicates that maybe there's some constants like the 1/2 or maybe some factors that grow slower than N. But the factor of N is what explains how quickly it scales as N ...