Nvidia Q3 Earnings

Most market commentary treats Nvidia's latest earnings as a simple binary: bubble or boom. Chipstrat cuts through that noise with a more nuanced, structural argument: Nvidia is not the leading indicator of an AI bubble, but rather the trailing one. The piece makes a compelling case that the long lead times of semiconductor manufacturing mean that today's record sales are merely the fulfillment of orders placed months ago, rendering the current print a lagging metric for any future slowdown. For busy investors, this distinction is critical—it suggests that a sudden market correction won't be signaled by a single bad earnings report, but will instead be visible much earlier in the capital expenditure plans of cloud giants.

The Lagging Indicator Thesis

The core of the article's argument rests on the mechanics of the supply chain. Chipstrat explains that demand is locked in long before the quarter begins, with system integrators like Dell and SuperMicro building racks based on commitments from hyperscalers like Microsoft and Google. The piece notes, "This quarter's demand was locked in months ago, and Nvidia's only job was to ship the hardware already spoken for." This reframing is vital because it shifts the focus from Nvidia's immediate performance to the health of the entities upstream. The editors argue that if a bubble were forming, it would first appear as a flattening of orders at the system integrator level, not a sudden drop in Nvidia's revenue.
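The lead-time dynamic described above can be made concrete with a toy sketch. This is illustrative only: the two-quarter lead time and the demand figures are assumptions for the example, not numbers from the article.

```python
# Toy model of the lagging-indicator thesis: revenue in quarter t fulfills
# orders placed LEAD_TIME_QUARTERS earlier, so a downturn in new orders
# shows up in reported revenue only after a delay.
LEAD_TIME_QUARTERS = 2  # assumed lead time, for illustration only

# Hypothetical order intake per quarter; orders turn down in quarter 4.
demand = [100, 110, 120, 130, 90, 80]

# Shift demand forward by the lead time to get fulfilled (reported) revenue.
revenue = [None] * LEAD_TIME_QUARTERS + demand[:-LEAD_TIME_QUARTERS]

# Revenue is still rising in quarters 4-5 even though orders already fell.
print(revenue)  # [None, None, 100, 110, 120, 130]
```

The point is structural: by the time a slowdown reaches the revenue line, it has been visible in the order book for quarters.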

This logic holds up well against the backdrop of the current market anxiety. Critics might point out that in a true panic, buyers could theoretically cancel orders, but the article counters this by noting the severe reputational and allocation risks involved. "Walking away from committed orders would seriously strain relationships with ODMs, OEMs, and Nvidia, and would push the company to the back of the allocation line." This insight into the friction of the supply chain adds a layer of realism often missing from speculative headlines.

In short, Nvidia sits at the very end of this chain, which makes it a trailing indicator: its print will not be the first place a deflating bubble shows up.

The TSMC Verification and the Megatrend

To validate the demand, the piece looks upstream to TSMC, the foundry that manufactures Nvidia's chips. The editors highlight a crucial shift in TSMC's stance: they are no longer just taking orders at face value but are actively verifying demand with their customers' customers. Chipstrat reports, "TSMC needs to be convinced that the demand is legitimate... CC Wei: The AI demand actually continue to be very strong, it's more -- more stronger than we thought 3 months ago." This verification process is a powerful signal; if the foundry, which bears the massive capital risk of building factories, sees the demand as real, the risk of a sudden collapse diminishes.

The article draws a parallel to the dot-com bubble era, noting that while overbuilding is a risk, the current ecosystem is ramping to meet actual usage, unlike the "dark fiber" that sat idle in the early 2000s. "The GPUs/XPUs aren't sitting idle. Rather we see tweets about 'finding GPUs left and right to match demand'." This distinction between speculative inventory and active consumption is the piece's strongest defense against the bubble narrative. It suggests that the infrastructure is being built to serve a tangible, growing market for generative AI applications, not just hype.

The Profitability Gap and the Path Forward

However, the piece does not shy away from the elephant in the room: profitability. While the infrastructure build-out is robust, the business models for many AI startups and even some model providers remain unproven. Chipstrat argues that the current ecosystem is being subsidized by investors and hyperscalers who are funding losses in hopes of future returns. "The brunt of the costs are not shouldered by consumers, nor by model providers. The subsidization comes from the investors funding external and internal model labs." This is a sharp observation that identifies the true vulnerability of the sector. If the path to monetization—through subscriptions, enterprise APIs, or advertising—doesn't materialize, the capital could dry up.

The editors note that any slowdown would likely be a gradual tapering of future capital expenditure rather than a sudden halt. "If investors doubted OpenAI's ability to fund more compute, its infrastructure partners such as Microsoft would slow their forward CapEx plans. That would show up on earnings calls." This provides a clear roadmap for what to watch: not Nvidia's current sales, but the forward guidance of Microsoft, Amazon, and Meta. The piece effectively warns that the danger lies in a pause in funding that could stall the transition from infrastructure build-out to sustainable revenue generation.

Networking: The Hidden Giant

Beyond the GPUs, the article shines a light on a frequently overlooked segment: networking. Chipstrat highlights that Nvidia's networking business has become the largest in the world, generating $8.2 billion in a single quarter. "Our networking business—purpose built for AI and now the largest in the world—generated revenue of $8.2 billion, up 162% year-over-year." This is a staggering figure that, annualized, rivals the total yearly revenues of traditional networking giants like Cisco and Arista. The piece argues that this dominance is not accidental but the result of a deliberate strategy to integrate networking directly with AI compute, creating a high-performance ecosystem that is difficult to replicate.

The editors point out that this networking dominance reinforces Nvidia's moat. "Nvidia is seemingly replacing, not losing, Infiniband sockets." This suggests that even as competitors try to build their own networking solutions, the performance requirements of large-scale AI clusters are keeping customers within the Nvidia ecosystem. This is a critical detail for understanding the company's long-term valuation, as it implies that the market is not just buying chips but buying a comprehensive, integrated platform.
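A quick back-of-envelope check of the networking figure (our arithmetic, not the article's):

```python
# Implied year-ago networking revenue and annualized run rate from the
# quoted numbers: $8.2B in the quarter, up 162% year-over-year.
networking_q = 8.2           # quarterly networking revenue, $B
yoy_growth = 1.62            # +162% year-over-year

prior_year_q = networking_q / (1 + yoy_growth)  # year-ago quarter, $B
annualized = networking_q * 4                   # run rate if sustained, $B

print(f"year-ago quarter ~${prior_year_q:.1f}B, run rate ~${annualized:.1f}B")
```

The growth rate implies roughly $3.1 billion of networking revenue in the year-ago quarter, and a run rate above $30 billion a year if the current pace holds.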

Bottom Line

Chipstrat's analysis is a necessary corrective to the market's obsession with immediate earnings prints, successfully arguing that Nvidia's current performance is a lagging indicator of a much longer, more complex trend. The strongest part of the argument is the detailed mapping of the supply chain, which demonstrates why a sudden collapse is structurally unlikely. However, the piece's biggest vulnerability lies in its reliance on the assumption that the profitability gap will eventually close; if the monetization models for AI fail to materialize, the entire infrastructure build-out could become stranded. Readers should watch the capital expenditure guidance of the hyperscalers, not Nvidia's quarterly sales, for the true signal of the AI market's health.

Deep Dives

Explore these related deep dives:

  • Dot-com bubble

    The article repeatedly references 'AI bubble' concerns and compares current GPU demand to 'dark fiber' overcapacity from the telecom boom. Understanding the dot-com bubble provides essential historical context for evaluating whether AI infrastructure spending follows similar patterns or differs fundamentally.

  • TSMC

    TSMC is central to the article's supply chain analysis, with CEO C.C. Wei quoted extensively about capacity planning and customer verification. Understanding TSMC's unique position as the world's largest semiconductor foundry and its manufacturing monopoly at leading-edge nodes illuminates why Nvidia's demand signals matter so much.

  • Fabless manufacturing

    The article discusses the complex relationship between Nvidia (which designs chips), TSMC (which manufactures them), and ODMs/system integrators. Understanding the fabless semiconductor business model explains why Nvidia depends on TSMC and how the modern chip industry's division of labor creates the supply chain dynamics described.

Sources

Nvidia q3 earnings

by Various · Chipstrat

Nvidia delivered a beautiful Q3 print on Wednesday, and there are several angles worth unpacking. But first, why was there any doubt about Nvidia’s earnings anyway? Yes, the AI concerns are very legitimate, but Nvidia’s earnings will be a trailing indicator of an AI bubble. We’ll discuss.

Then we can get into the call itself, including:

Networking business

Meta

China & Taiwan

Google

AI lab profitability concerns

Here we go:

Nvidia’s Earnings.

From The WSJ,

Nvidia reported record sales and strong guidance Wednesday, helping soothe jitters about an artificial intelligence bubble that have reverberated in markets for the last week.

Sales in the October quarter hit a record $57 billion as demand for the company’s advanced AI data center chips continued to surge, up 62% from the year-earlier quarter and exceeding consensus estimates from analysts polled by FactSet. The company increased its guidance for the current quarter, estimating that sales will reach $65 billion—analysts had predicted revenue of $62.1 billion for the quarter.

Another incredible quarter from Nvidia and an even more impressive guide.
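The WSJ figures above hang together; a minimal arithmetic check (ours, not the article's):

```python
# Sanity-check the quoted figures: $57B sales up 62% YoY, and a $65B guide
# against a $62.1B analyst consensus.
q3_sales = 57.0        # reported October-quarter revenue, $B
yoy_growth = 0.62      # +62% year-over-year
guide = 65.0           # guidance for the current quarter, $B
consensus = 62.1       # FactSet analyst consensus, $B

implied_prior_year = q3_sales / (1 + yoy_growth)  # year-ago quarter, $B
guide_beat = guide / consensus - 1                # guide vs. consensus

print(f"year-ago quarter ~${implied_prior_year:.1f}B, guide beat {guide_beat:.1%}")
```

The 62% growth rate implies a year-ago quarter of roughly $35.2 billion, and the guide sits about 4.7% above consensus.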

Of Course Nvidia Would Deliver.

You didn’t expect anything less, right?

Forget the noise about stock market jitters, OpenAI profitability, or Michael Burry’s depreciation math. This quarter’s demand was locked in months ago, and Nvidia’s only job was to ship the hardware already spoken for.

After all, Nvidia’s direct and indirect customers signaled their GB300 demand quarters ago. Nothing since has altered that trajectory.

As a reminder, Nvidia mainly sells to ODMs and system integrators, who purchase the GPUs, build the racks, and deliver the AI factories to the clouds and hyperscalers:

Orders are planned well in advance of any given quarter. System builders like Dell, SuperMicro, Wiwynn, Quanta, and Foxconn are buying a ton of Grace Blackwell Ultras right now because Microsoft, Amazon, Google, Meta, Oracle, and CoreWeave already placed orders.

And we have confidence that Nvidia’s 2026 guide is legitimate because we’ve been told as much, up and down the chain:

Obviously, many GenAI-powered apps are already being used seriously day to day. Again, the illustration above only scratches the surface. All those apps are saying they need more inference tokens to meet user demand. Which means clouds, neoclouds, and hyperscalers need more AI clusters. Which means ODMs need to build more racks. And ODMs need more of Nvidia's silicon: GPUs, CPUs, and networking switches.

Which means Nvidia needs more wafers and packaging from ...