Forget the familiar narrative that computing progress has stalled; Babbage argues we are witnessing a supercharged acceleration that dwarfs the industry's historic benchmark. While the world fixates on the slowing of transistor density, this piece reveals a hidden engine of growth driven by architecture and software, not just physics. For the busy professional, the takeaway is stark: the rules of the game have changed, and the old metrics no longer tell the whole story.
The Myth of the Natural Law
Babbage begins by dismantling the romanticized view of Gordon Moore's famous observation. The piece quotes Moore himself: "Moore's law is really about economics. My prediction was about the future direction of the semiconductor industry, and I have found that the industry is best understood through some of its underlying economics." This reframing is crucial. It shifts the conversation from an inevitable physical law to a self-fulfilling prophecy of investment and market demand. Babbage correctly identifies that Moore's Law was never a rule of nature, but rather a "virtuous cycle" where sophisticated devices created bigger markets, which in turn funded more research.
However, the piece makes a sharp distinction when introducing the new contender. Unlike its predecessor, the new phenomenon is explicitly about speed. Babbage notes, "In contrast to Moore's Law - by definition... Huang's Law definitely is about performance." This is a vital clarification for investors and engineers alike. While Moore's Law tracked component density, this new metric tracks the raw output available for artificial intelligence tasks. The author points out that while the original 2018 keynote didn't explicitly name machine learning, "in 2025 it's ML performance that gets the most focus."
Moore's Law isn't about performance. As a chart in the piece shows, one common measure of performance (single-threaded SPECint) has fallen behind the exponential trend of Moore's Law for more than a decade now.
Critics might argue that isolating hardware performance ignores the massive gains coming from algorithmic efficiency. Babbage anticipates this, citing OpenAI data showing that the compute needed to match the performance of earlier models has fallen sharply thanks to algorithmic improvements. The piece wisely separates "compute scaling" from "algorithmic progress," acknowledging that while software gets smarter, the hardware foundation is still the bottleneck.
Deconstructing the Acceleration
The core of Babbage's analysis lies in dissecting how this speed-up is achieved. It is not magic; it is a specific combination of engineering choices. The author turns to Nvidia's Chief Scientist Bill Dally, who breaks the 1,000-fold improvement over 11 years into four distinct categories. Babbage writes, "Number Representation... has delivered 16x" and "Complex Instructions... 12x." These are not incremental tweaks; they are fundamental shifts in how data is processed.
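The quoted factors compound multiplicatively rather than adding up, which is worth making explicit. The back-of-the-envelope sketch below uses the two figures quoted above together with assumed contributions of roughly 2.5x from process and 2x from sparsity (the latter matching the sparsity figure discussed further down); the assumed factors are illustrative and are not quoted in the piece.

```python
# Back-of-the-envelope check: Dally's categories compound multiplicatively.
# Only the first two factors are quoted in the piece; the process and
# sparsity contributions are assumptions used here purely for illustration.
number_representation = 16    # quoted: gains from lower-precision formats
complex_instructions = 12     # quoted: more work done per instruction issued
process = 2.5                 # assumed: contribution of smaller transistors
sparsity = 2.0                # assumed: skipping calculations on zeros

total = number_representation * complex_instructions * process * sparsity
print(f"combined speed-up: ~{total:.0f}x")  # ~960x, close to the 1,000-fold claim
```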
The most surprising revelation is the minimal role of traditional chip shrinking. Babbage highlights a startling statistic: "only 1/40 of 1000-fold improvement here has been categorised as due to process improvements." This suggests that the industry has successfully pivoted away from its decades-long obsession with making transistors smaller. Instead, the gains come from "reduction in precision," where hardware designers realized that AI algorithms don't need the full accuracy of 32-bit calculations. By moving to lower precision formats like FP16, engineers could "cram more of these units onto each chip" and run them faster.
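The trade-off behind reduced precision is easy to demonstrate away from any GPU. The minimal numpy sketch below (an illustration of the principle, not Nvidia's implementation) shows that halving precision halves the memory each value occupies, which is what lets designers pack more arithmetic units onto a chip and feed them more operands per cycle, while an AI-style matrix product stays close to the full-precision answer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Neural-network-style data: small values centred on zero.
a32 = rng.standard_normal((256, 256), dtype=np.float32)
b32 = rng.standard_normal((256, 256), dtype=np.float32)
a16, b16 = a32.astype(np.float16), b32.astype(np.float16)

print(a32.itemsize, a16.itemsize)       # 4 bytes vs 2 bytes per value stored

full = a32 @ b32                        # reference result at 32-bit precision
half = (a16 @ b16).astype(np.float32)   # the same product at half precision

rel_err = np.abs(full - half).max() / np.abs(full).max()
print(f"worst-case relative error: {rel_err:.4%}")  # small; tolerable for AI-style workloads
```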
It turns out, you don't want RISC. You want complex instructions to do a lot of work to amortize the cost of that instruction.
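The amortization point can be made with a toy cost model; the numbers below are invented for illustration and are not Nvidia's figures. If every instruction carries a fixed fetch-and-decode overhead, an instruction that performs a single multiply-accumulate pays that overhead on every operation, while a complex instruction that performs a whole matrix tile of them pays it once for thousands of operations.

```python
# Toy cost model of instruction-overhead amortization (illustrative numbers only).
OVERHEAD = 20        # assumed fixed cost to fetch, decode and issue one instruction
COST_PER_MAC = 1     # assumed cost of one multiply-accumulate of useful work

def total_cost(macs_per_instruction: int, total_macs: int) -> int:
    """Cost of total_macs multiply-accumulates when each instruction
    performs macs_per_instruction of them."""
    instructions = total_macs // macs_per_instruction
    return instructions * (OVERHEAD + macs_per_instruction * COST_PER_MAC)

work = 4096 * 256                      # a fixed amount of useful arithmetic
scalar = total_cost(1, work)           # RISC-style: one operation per instruction
tensor = total_cost(4096, work)        # complex instruction: a whole tile per instruction
print(f"speed-up from amortization: {scalar / tensor:.1f}x")  # ~20.9x in this toy model
```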
Babbage also emphasizes the role of "sparsity," or the ability to skip calculations that don't matter. The piece explains that modern architectures can identify zeros in a matrix and ignore them, offering "up to 2x the maximum throughput of dense math without sacrificing accuracy." This is a sophisticated argument that challenges the assumption that more computing power simply means bigger chips. It means smarter chips that know which work to skip.
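A stripped-down sketch of that idea follows (a conceptual illustration, not how the GPU hardware implements it): if the weight matrix is inspected for zeros up front, every multiplication against a zero can simply be skipped, and when half the entries are zero the multiply count falls by roughly half, which is where the "up to 2x" figure comes from, with the final result unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)

# A weight matrix with roughly half its entries zeroed out, loosely mimicking
# the kind of sparsity the piece describes.
weights = rng.standard_normal((128, 128))
weights[rng.random(weights.shape) < 0.5] = 0.0
x = rng.standard_normal(128)

multiplies = 0
y = np.zeros(128)
for i in range(128):
    for j in range(128):
        if weights[i, j] != 0.0:        # skip work that cannot change the result
            y[i] += weights[i, j] * x[j]
            multiplies += 1

print(multiplies / weights.size)        # ~0.5: roughly half the multiplications skipped
print(np.allclose(y, weights @ x))      # True: the answer matches the dense computation
```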
The Skeptic's View and the Verdict
Despite the compelling data, the piece does not shy away from the skeptics. Babbage cites Joel Hruska, who argues that declaring a "law" after only a few years is premature. Hruska's three-pronged attack—that the law is an illusion dependent on Moore's Law, that the timeline is too short, and that low-hanging fruit will run out—is given fair weight. Babbage acknowledges this tension, noting that "it's also possible that, like Moore's Law before it, Huang's Law will run out of steam."
Yet, the author concludes that the validity of the "law" matters less than the strategic insight it provides. Even if the exponential curve flattens in a decade, the window of opportunity is sufficient to revolutionize industries. As Babbage puts it, "That could happen within a decade... But it could enable much in that relatively short time, from driverless cars to factories and homes that sense and respond to their environments." The argument holds that we should treat this not as a permanent natural law, but as a temporary, hyper-accelerated phase of development that demands immediate attention.
Even if you don't buy into the 'law' itself, serious consideration of the thinking behind it is valuable.
Bottom Line
Babbage's strongest contribution is demystifying the source of AI's speed, showing that it stems from architectural innovation and software-hardware co-design rather than transistor density alone. The piece's biggest vulnerability lies in its reliance on proprietary data from Nvidia, which may overstate the universality of these gains across the entire semiconductor landscape. Readers should watch for whether competitors can replicate these architectural tricks or whether this acceleration remains a niche advantage for a single vendor.