Jordan Schneider cuts through the fog of geopolitical "vibes" to ask a brutally simple question: what is the actual price of intelligence? In a landscape dominated by speculation about chip bans and national ambition, Schneider's piece stands out for its refusal to guess. Instead, he builds a granular financial model comparing the cost per unit of AI compute in the United States versus China, revealing that the race is not won by who shouts the loudest, but by who can physically fit the most silicon into a building without melting it.
The Real Cost of Compute
Schneider's most striking insight is that the common narrative about energy costs is a distraction. While headlines obsess over cheap Chinese electricity or American grid constraints, the math tells a different story. "Other costs, including commonly covered topics like electricity and water, are essentially rounding errors," Schneider writes. This reframing is crucial; it forces the reader to look at the true bottlenecks: construction and, most critically, hardware.
The author anchors his analysis in a specific benchmark: a 400-megawatt data center, modeled after Microsoft's Fairwater 1 facility. By fixing the size and timeline, Schneider isolates the variables that actually matter. He notes that while China holds a distinct advantage in construction, spending roughly $2.4 billion versus the U.S.'s $4 billion for the same footprint, that edge is quickly eroded by the hardware gap. "The U.S. can build much more cost-efficient data centers compared to China, but unfettered access to the H200 would make the race in raw performance extremely close," he argues. This is a nuanced take that avoids the trap of declaring a clear winner; instead, it highlights a fragile equilibrium in which a single policy shift could tip the scales.
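The logic of that comparison can be sketched in a few lines. Only the 400 MW envelope and the roughly $2.4 billion (China) versus $4 billion (U.S.) construction figures come from the piece; the rack-level power, performance, and price numbers below are illustrative placeholders, not Schneider's actual inputs.

```python
# Sketch of a fixed-power-budget cost model in the spirit of Schneider's.
# Construction costs are from the article; all rack-level figures are
# hypothetical stand-ins chosen only to illustrate the mechanism.

POWER_BUDGET_MW = 400  # the Fairwater 1-sized envelope used in the piece

sites = {
    "us":    {"construction_usd": 4.0e9, "kw_per_rack": 120,
              "pflops_per_rack": 180, "usd_per_rack": 3.0e6},
    "china": {"construction_usd": 2.4e9, "kw_per_rack": 480,
              "pflops_per_rack": 90, "usd_per_rack": 2.0e6},
}

def site_economics(s):
    # Power, not floor space, caps how many racks fit in the building.
    racks = int(POWER_BUDGET_MW * 1000 / s["kw_per_rack"])
    total_pflops = racks * s["pflops_per_rack"]
    total_cost = s["construction_usd"] + racks * s["usd_per_rack"]
    return racks, total_pflops, total_cost / total_pflops

for name, s in sites.items():
    racks, pflops, usd_per_pflops = site_economics(s)
    print(f"{name}: {racks} racks, {pflops:,} PFLOP/s, "
          f"${usd_per_pflops:,.0f} per PFLOP/s of capacity")
```

Even with these toy numbers, the pattern Schneider describes emerges: the cheaper Chinese shell ends up more expensive per unit of delivered compute, because hardware dominates the total bill.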
The AI race between the U.S. and China will be decided in data centers.
The Hardware Bottleneck
The core of Schneider's argument rests on the disparity between American and Chinese silicon efficiency. He contrasts Nvidia's GB200 NVL72 with China's best domestic alternative, Huawei's CloudMatrix384. The numbers are stark: the Chinese system consumes four times the power for half the performance. "Because of export controls, the hardware stocked in Chinese data centers would not be as efficient as their American counterparts," Schneider explains. This inefficiency means a Chinese facility can fit far fewer units within a fixed power envelope, drastically capping its total computational output.
Schneider's analysis of the H200 ban lift is particularly timely. He suggests that allowing this specific chip into China could be a game-changer, potentially "alleviating their silicon constraints." However, he tempers this optimism with a reality check on network architecture. Unlike the integrated rack solutions of the GB200, the H200 requires complex node-level networking, which introduces overhead. "Although access to the H200 gains significantly more 'AI' for China compared to the CloudMatrix384, the total computing power and efficiency of compute would still be less compared to an American data center," he concludes. This distinction between raw chip count and usable performance is a vital correction to the hype often surrounding export control news.
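The distinction between raw chip count and usable performance can be modeled with a single discount factor. Both the per-chip figures and the network-efficiency values below are hypothetical illustrations, not numbers from Schneider's model; they exist only to show why node-level networking overhead erodes the headline gain.

```python
# Sketch of "raw chips vs usable compute": H200 deployments need node-level
# networking, which taxes effective throughput relative to the GB200's
# integrated rack-scale interconnect. All numbers here are assumed.

def effective_compute(n_chips, perf_per_chip, network_efficiency):
    # network_efficiency < 1.0 models losses to inter-node communication
    return n_chips * perf_per_chip * network_efficiency

# GB200 NVL72: rack-scale NVLink, assume near-ideal scaling (hypothetical)
gb200_total = effective_compute(10_000, perf_per_chip=1.0, network_efficiency=0.95)

# H200: same chip count, lower per-chip performance, extra node-level
# networking overhead (both values are illustrative assumptions)
h200_total = effective_compute(10_000, perf_per_chip=0.6, network_efficiency=0.80)

share = h200_total / gb200_total
print(f"H200 site delivers {share:.0%} of the GB200 site's usable compute")
```

Whatever the true factors turn out to be, the structure of the calculation matches Schneider's point: an H200-stocked Chinese facility gains real capability over CloudMatrix384 hardware, yet still trails an American GB200 site once networking overhead is priced in.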
Critics might argue that Schneider's reliance on theoretical maximums underestimates the ingenuity of Chinese engineers in optimizing workarounds, such as the smuggling networks mentioned in the article's introduction. Yet, the sheer physical limits of power and heat dissipation remain a hard constraint that software optimization cannot easily bypass.
The Empty Data Center Paradox
Perhaps the most sobering section of the piece addresses the irony of China's current situation: a surplus of empty buildings. Schneider points out that "many data centers in China are sitting idle due to the combination of a lack of cutting-edge chips and the yet-to-arrive massive AI demand." This highlights a critical divergence in strategy. While American hyperscalers are ramping up capital expenditure by over 35% for 2026, Chinese giants like Tencent have cut spending by 25% due to hardware shortages.
The author draws a sharp contrast between the two nations' constraints. For the U.S., the problem is energy; for China, it is silicon. "It will not matter how cheaply China can build a data center if they don't have chips to stock them or models to constantly use them," Schneider writes. This observation shifts the focus from a simple arms race to a race of logistics and supply chain resilience. The comparison to China Telecom's massive Inner Mongolia Information Park serves as a poignant reminder that scale without capability is merely an expensive monument.
China can build cheaper, but the U.S. can build better.
Bottom Line
Schneider's greatest strength is his refusal to let political rhetoric obscure the engineering reality. By quantifying the cost of a floating-point operation, he demonstrates that the U.S. advantage is not just in having better chips, but in the ability to deploy them at a scale and efficiency that Chinese infrastructure cannot currently match. The piece's biggest vulnerability lies in its assumption of static technology; if Chinese manufacturing yields improve rapidly or if the U.S. grid fails to expand, the calculus could shift overnight. For now, however, the verdict is clear: the side that solves its specific bottleneck—electricity for America, silicon for China—will dictate the pace of the future.