Scott Alexander dismantles a pervasive anxiety in the tech world: the fear that prioritizing AI safety will cause the United States to lose its technological edge to China. The piece's most striking claim is not that safety doesn't matter, but that the specific regulations currently on the table are so financially negligible compared to the cost of training models that they cannot possibly tip the scales of the geopolitical race. For busy leaders tracking the chip and data-center buildout, this reframes the entire debate from a zero-sum trade-off to a manageable cost of doing business.
The Three Layers of the Race
Alexander begins by dissecting the "race" into three distinct strata: compute, models, and applications. He argues that the United States holds a commanding lead in the first two, while China is banking on the third. "America is far ahead," he writes, noting that "by the simplest measure - total FLOPs on each side - we have 10x as much compute as China, and our advantage is growing every day." This is a crucial distinction often lost in headlines that conflate hardware capacity with software output. The author attributes this lead to the dominance of NVIDIA chips and the massive capital expenditure boom by American tech giants, a situation where the sheer volume of investment creates a moat that policy tweaks cannot easily erode.
The argument gains depth when Alexander contextualizes the hardware landscape. He points out that while China is far behind in chip production, they have a "long history of catching up to the West on things when they put their mind to it," a pattern reminiscent of their rapid ascent in solar manufacturing and high-speed rail infrastructure. However, the author suggests that China's strategy is not to win the compute war, but to bypass it. They are betting on a "fast follow" approach: accepting a 1-2 year lag in model sophistication while aggressively deploying whatever AI they have into manufacturing, drones, and weapons systems. "If our smarter AI is still just sitting in a data center answering user queries - and their dumber AI is already integrated with tens of thousands of humanoid robots... then they still win," Alexander observes. This highlights a critical vulnerability: the race isn't just about who builds the smartest brain, but who can put it to work in the physical world fastest.
Alexander also sketches what comes after the current bills: "If we win this round beyond our expectations, the next generation of AI safety asks is third-party safety auditing and location verification for chips. I don't know the exact details, but these don't seem order-of-magnitude worse than the current bills."
The Math of Safety
The core of Alexander's thesis rests on a simple, almost anti-climactic calculation regarding the cost of safety. He breaks down the proposed regulations—such as disclosure of model specs, whistleblower protections, and evaluations for biohazards or hacking capabilities—and finds the financial impact trivial. "Currently, two nonprofits - METR and Apollo Research - do similar tests on publicly-available models," he notes, estimating their budgets at $5 million and $15 million respectively. Even if a major lab like OpenAI had to replicate this work at a higher cost, Alexander estimates the total expense would be a fraction of the training run. "The safety testing might increase the total cost by 1/1000th," he writes, and even if activists push for more rigorous auditing, the cost might rise to a mere 1% of training expenses.
This is where the argument lands with significant force. If the US has a 10x compute advantage, a 1% cost increase for safety reduces that lead to roughly 9.8x. "So if we were able to train a model 10x bigger than China's best model before safety legislation, we can train a model ~9.8x bigger than China's best model after safety legislation," Alexander concludes. The logic is sound: the fear that safety regulations will hand the victory to China is mathematically unfounded based on current proposals. However, a counterargument worth considering is that the complexity of compliance, rather than the direct dollar cost, could slow down the pace of iteration. Alexander acknowledges this, noting that activists might move from small asks to larger ones, but he remains skeptical that the burden will ever reach an order of magnitude that matters in a race defined by exponential growth.
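The back-of-the-envelope arithmetic here is easy to make explicit. A minimal sketch, assuming compute scales linearly with spend and that China bears no comparable overhead (both simplifications of the essay's reasoning; the function name is my own):

```python
def lead_after_overhead(compute_lead: float, overhead_fraction: float) -> float:
    """Remaining compute lead if a fraction of the US training budget
    goes to safety testing instead of compute, with spend-to-compute
    assumed linear and the rival's budget unchanged."""
    return compute_lead * (1.0 - overhead_fraction)

# Alexander's 1/1000th estimate barely dents a 10x lead:
print(lead_after_overhead(10.0, 0.001))  # ~9.99x

# Even the more aggressive ~1% auditing scenario leaves it near 10x:
print(lead_after_overhead(10.0, 0.01))   # ~9.9x
```

However one rounds the result (the essay quotes "~9.8x"), the structure of the argument is the same: only an order-of-magnitude growth in overhead, not the proposals on the table, could meaningfully shift the ratio.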
The Real Threat: Export Controls and Lobbying
Where Alexander's analysis becomes most urgent is in shifting the blame from safety regulations to the actual erosion of the US lead: chip smuggling and corporate lobbying. He identifies a stark contradiction in the political landscape. "Some say it would be like selling Russia our nukes during the Cold War, or selling them our Saturn V rockets during the space race," he writes regarding the proposal to lift export controls on advanced chips. Yet, he points out that the very people warning against safety regulations are often the same ones pushing for these exports.
The author exposes a disturbing reality: the US government is underfunded and outgunned by corporate interests. The Bureau of Industry and Security, tasked with stopping chip smuggling, operates on a budget of about $50 million a year. "If America cared about winning the race against China even a tenth as much as Mark Zuckerberg cares about winning the race against OpenAI, we would be in a much better position!" Alexander quips, highlighting the absurdity of the funding gap. Meanwhile, NVIDIA, America's most critical defense in this arena, "constantly lobbies to be allowed to sell its advanced chips to China," even attempting to influence the political landscape against those who resist.
Citing a modeling report on the consequences of lifting the controls, Alexander writes: "It would decrease our compute advantage from 10-30x to about 2x. You can read the report for more scenarios, including one where aggressive chip exports actually give China a compute advantage."
This section of the commentary reveals the true stakes. The threat to American leadership isn't a bureaucratic form filled out by a safety officer; it's a decision in a boardroom to prioritize short-term revenue over long-term national security. Alexander notes that if the administration caves to these lobbying efforts, the US compute advantage could collapse from a 30x lead to a mere 2x, effectively handing China the keys to the kingdom. The irony is palpable: the fear of "losing the race" is being used as a cudgel to weaken the very mechanisms (safety rules) that pose no real threat, while simultaneously ignoring the mechanisms (export controls) that do.
The Application Layer Trap
Alexander concludes by warning that the real danger lies in the application layer, where US regulations could inadvertently stifle innovation. He cites the Colorado AI Act of 2024, which mandates impact assessments for algorithmic discrimination. While well-intentioned, Alexander argues this creates a "constant miasma of fear and bureaucracy over small businesses and nonprofits." This is a nuanced point: safety regulations focused on the model layer (superintelligence) are cheap, but ethics regulations focused on the application layer (bias, hiring, healthcare) could be paralyzing. "Some startups might be strangled in their infancy," he warns, suggesting that China's command economy could steamroll these hurdles while the US gets bogged down in litigation and compliance.
Critics might counter that Alexander actually understates the application-layer risk. If the US creates a regulatory environment where only the largest incumbents can afford to deploy AI, it could indeed cede the application layer to China, regardless of the model's raw intelligence. However, Alexander's distinction remains vital: these are not "AI safety" regulations in the sense of preventing superintelligence; they are civil rights and consumer protection laws. Conflating the two, he argues, is a strategic error that plays into China's hands by distracting from the real issue: the need to harden critical infrastructure against AI attacks while fostering a vibrant ecosystem for deployment.
Bottom Line
Scott Alexander's piece is a vital corrective to the panic that safety and security are mutually exclusive. His strongest argument is the simple arithmetic showing that current safety proposals are too small to erode the US lead, a fact that should quiet the loudest voices in the "race at all costs" camp. The article's greatest vulnerability, however, is its reliance on the assumption that future regulations will remain as modest as current bills, ignoring the potential for regulatory creep that targets the application layer. The real lesson for the reader is to stop worrying about safety audits and start watching the export control loopholes; the race will not be lost in a boardroom of safety officers, but in a lobbying office of chip manufacturers.