
Huang's law

Forget the familiar narrative that computing progress has stalled; Babbage argues we are witnessing a supercharged acceleration that dwarfs the industry's historic benchmark. While the world fixates on the slowing of transistor density, this piece reveals a hidden engine of growth driven by architecture and software, not just physics. For the busy professional, the takeaway is stark: the rules of the game have changed, and the old metrics no longer tell the whole story.

The Myth of the Natural Law

Babbage begins by dismantling the romanticized view of Gordon Moore's famous observation, quoting Moore himself: "Moore's law is really about economics. My prediction was about the future direction of the semiconductor industry, and I have found that the industry is best understood through some of its underlying economics." This reframing is crucial. It shifts the conversation from inevitable physical laws to a self-fulfilling prophecy of investment and market demand. Babbage correctly identifies that Moore's Law was never a rule of nature, but rather a "virtuous cycle" in which sophisticated devices created bigger markets, which in turn funded more research.

Huang's law

However, the piece makes a sharp distinction when introducing the new contender. Unlike its predecessor, the new phenomenon is explicitly about speed. Babbage notes, "In contrast to Moore's Law - by definition... Huang's Law definitely is about performance." This is a vital clarification for investors and engineers alike. While Moore's Law tracked component density, this new metric tracks the raw output available for artificial intelligence tasks. The author points out that while the original 2018 keynote didn't explicitly name machine learning, "in 2025 it's ML performance that gets the most focus."

Moore's Law isn't about performance. We can see from the graph below that one common measure of performance (Single Threaded - SpecINT) has fallen behind the exponential trend of Moore's Law for more than a decade now.
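The compounding rates behind these claims are easy to check. The sketch below uses the figures from Huang's 2018 keynote quoted in the Sources section of this piece (roughly 10x per five years for Moore's Law, 25x per five years for Nvidia's GPU speed-ups); the function name is ours, not the article's.

```python
# Annualized growth implied by the figures in Huang's 2018 keynote:
# Moore's Law compounds ~10x every five years, while Nvidia's GPU
# speed-ups compounded ~25x over the same period.

def annual_growth(factor: float, years: float) -> float:
    """Annualized multiplier implied by `factor` over `years`."""
    return factor ** (1 / years)

moore = annual_growth(10, 5)   # ~1.58x per year, a doubling every ~18 months
huang = annual_growth(25, 5)   # ~1.90x per year, close to doubling annually

print(f"Moore's Law: {moore:.2f}x per year")
print(f"Huang's Law: {huang:.2f}x per year")
```

The gap looks modest year to year, but compounding makes it decisive over a decade.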

Critics might argue that isolating hardware performance ignores the massive gains coming from algorithmic efficiency. Babbage anticipates this, citing OpenAI data showing that the compute required for early models has dropped significantly due to software improvements. The piece wisely separates "compute scaling" from "algorithmic progress," acknowledging that while software gets smarter, the hardware foundation is still the bottleneck.
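The scale of that algorithmic progress is worth quantifying. The piece does not reproduce OpenAI's figures, so the rate below is an assumption for illustration only: OpenAI's 2020 efficiency analysis reported that the compute needed to reach a fixed benchmark halved roughly every 16 months.

```python
# Hypothetical illustration of algorithmic progress, separate from hardware.
# Assumed rate (not from the piece): compute needed for a fixed benchmark
# halves roughly every 16 months, per OpenAI's 2020 efficiency analysis.

months = 7 * 12            # a seven-year window
halvings = months / 16     # number of halvings in that window
reduction = 2 ** halvings  # total compute-reduction factor

print(f"~{reduction:.0f}x less compute over {months // 12} years")
```

Even under this assumed rate, software alone delivers tens-fold savings, which is why the piece insists on tracking the two trends separately.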

Deconstructing the Acceleration

The core of Babbage's analysis lies in dissecting how this speed-up is achieved. It is not magic; it is a specific combination of engineering choices. The author turns to Nvidia's Chief Scientist Bill Dally, who breaks the 1,000-fold improvement over 11 years into four distinct categories. Babbage writes, "Number Representation... has delivered 16x" and "Complex Instructions... 12x." These are not incremental tweaks; they are fundamental shifts in how data is processed.
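Because Dally's decomposition treats the overall gain as a product of independent factors, the two categories quoted above already constrain how much the remaining ones must contribute. A quick sketch; the residual simply lumps together every category the piece does not quote here.

```python
# Dally's decomposition multiplies independent factors. Only two factors
# are quoted in the piece; the residual below covers everything else
# (process scaling, sparsity, and the remaining categories combined).

total = 1000                 # overall improvement over ~11 years
number_representation = 16   # gain from lower-precision formats
complex_instructions = 12    # gain from amortizing instruction overhead

residual = total / (number_representation * complex_instructions)
print(f"Implied residual from remaining categories: ~{residual:.1f}x")
```

The arithmetic makes the article's point vivid: the two architectural factors alone account for nearly 200x of the 1,000x.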

The most surprising revelation is the minimal role of traditional chip shrinking. Babbage highlights a startling statistic: "only 1/40 of 1000-fold improvement here has been categorised as due to process improvements." This suggests that the industry has successfully pivoted away from its decades-long obsession with making transistors smaller. Instead, the gains come from "reduction in precision," where hardware designers realized that AI algorithms don't need the full accuracy of 32-bit calculations. By moving to lower precision formats like FP16, engineers could "cram more of these units onto each chip" and run them faster.
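The storage argument behind "reduction in precision" can be demonstrated with nothing beyond Python's standard library, which can pack IEEE 754 half-precision floats (format code 'e'). This is a toy sketch of the trade-off, not Nvidia's implementation.

```python
import struct

# Toy illustration of the precision trade-off: an FP16 value occupies half
# the bytes of an FP32 value, so twice as many operands (and arithmetic
# units) fit in the same silicon area and memory bandwidth, at the cost
# of rounding error that many ML workloads tolerate.

v = 3.14159
fp32_bytes = struct.pack('<f', v)   # IEEE 754 single precision: 4 bytes
fp16_bytes = struct.pack('<e', v)   # IEEE 754 half precision: 2 bytes
roundtrip = struct.unpack('<e', fp16_bytes)[0]

print(len(fp32_bytes), "bytes vs", len(fp16_bytes), "bytes")
print("rounding error:", abs(v - roundtrip))  # ~1e-3
```

A millisecond-scale rounding error per value is fatal for scientific simulation but typically invisible to a neural network's accuracy, which is the realization the piece describes.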

It turns out, you don't want RISC. You want complex instructions to do a lot of work to amortize the cost of that instruction.

Babbage also emphasizes the role of "sparsity," or the ability to skip calculations that don't matter. The piece explains that modern architectures can identify zeros in a matrix and ignore them, offering "up to 2x the maximum throughput of dense math without sacrificing accuracy." This is a sophisticated argument that challenges the assumption that more computing power simply means bigger chips. It means smarter chips that know when to stop working.
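The intuition behind sparsity can be sketched in a few lines of Python. This is a toy model, not Nvidia's 2:4 structured-sparsity hardware: skipping multiplications by zero halves the work when half the weights are zero, while the mathematical result is unchanged.

```python
# Toy sketch of sparsity-aware math: a unit that skips zero operands
# performs fewer multiply-accumulates for the same result.

def dense_dot(w, x):
    """Multiply-accumulate over every element, zeros included."""
    return sum(wi * xi for wi, xi in zip(w, x))

def sparse_dot(w, x):
    """Skip multiply-accumulates whose weight is exactly zero."""
    return sum(wi * xi for wi, xi in zip(w, x) if wi != 0)

w = [0.5, 0.0, -1.0, 0.0, 2.0, 0.0, 0.25, 0.0]   # 50% zeros
x = [1.0] * 8

assert dense_dot(w, x) == sparse_dot(w, x)   # same answer, half the work
print(sparse_dot(w, x))
```

Real hardware enforces a regular pattern of zeros (e.g. two out of every four weights) so the skipping can be wired into the datapath rather than decided element by element.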

The Skeptic's View and the Verdict

Despite the compelling data, the piece does not shy away from the skeptics. Babbage cites Joel Hruska, who argues that declaring a "law" after only a few years is premature. Hruska's three-pronged attack—that the law is an illusion dependent on Moore's Law, that the timeline is too short, and that low-hanging fruit will run out—is given fair weight. Babbage acknowledges this tension, noting that "it's also possible that, like Moore's Law before it, Huang's Law will run out of steam."

Yet, the author concludes that the validity of the "law" matters less than the strategic insight it provides. Even if the exponential curve flattens in a decade, the window of opportunity is sufficient to revolutionize industries. As Babbage puts it, "That could happen within a decade... But it could enable much in that relatively short time, from driverless cars to factories and homes that sense and respond to their environments." The argument holds that we should treat this not as a permanent natural law, but as a temporary, hyper-accelerated phase of development that demands immediate attention.

Even if you don't buy into the 'law' itself, serious consideration of the thinking behind it is valuable.

Bottom Line

Babbage's strongest contribution is demystifying the source of AI's speed, proving it stems from architectural innovation and software-hardware co-design rather than just transistor density. The piece's biggest vulnerability lies in its reliance on proprietary data from Nvidia, which may overstate the universality of these gains across the entire semiconductor landscape. Readers should watch for whether competitors can replicate these architectural tricks or if this acceleration remains a niche advantage for a single vendor.

Sources

Huang's law

‘There’s a new law going on and I think this is the future of computing’.

… the speed-up that we created in the last five years is 25 times while Moore's Law is ten times in five years. Moore's law the miracle of laws, the law that has enabled just about every single industry and progress of science and the progress of society, was compounded over time 10x every five years. Our GPUs have accelerated these molecular dynamics simulations 25x in the last five years. There's a new law going on it's a supercharged law. There's a new law going on and I think this is the future of computing …

Nvidia CEO Jensen Huang first referred to a ‘new law’ during his keynote presentation at Nvidia’s GTC conference in 2018.

It didn’t take long for this ‘new law’ to get a name. Tekla S. Perry, writing in IEEE Spectrum, reported on the event with:

Move Over, Moore’s Law: Make Way for Huang’s Law

Graphics processors are on a supercharged development path that eclipses Moore’s Law, says Nvidia’s Jensen Huang …

Huang, who is CEO of Nvidia, didn’t call it Huang’s Law; I’m guessing he’ll leave that to others. After all, Gordon Moore wasn’t the one who gave Moore’s Law its now-famous moniker. …

But Huang did make sure nobody attending GTC missed the memo.

The name was picked up by others. Christopher Mims, writing in the Wall Street Journal in 2020, said that:

Huang’s Law Is the New Moore’s Law, …

But a different law, potentially no less consequential for computing’s next half century, has arisen.

I call it Huang’s Law, after Nvidia Corp. chief executive and co-founder Jensen Huang. It describes how the silicon chips that power artificial intelligence more than double in performance every two years.

Mims’s article quantifies the improvement seen over the previous eight years, based on data supplied by Nvidia’s chief scientist Bill Dally, and - unlike Huang’s keynote - specifically highlights performance on AI-related calculations:

Between November 2012 and this May, performance of Nvidia’s chips increased 317 times for an important class of AI calculations, says Bill Dally, chief scientist and senior vice president of research at Nvidia. On average, in other words, the performance of these chips more than doubled every year, a rate of progress that makes Moore’s Law pale in comparison.

The end of the article recognises that Huang’s Law ...