
Contra Scott on AI safety and the race with China

Rohit Krishnan tackles the most polarizing question in artificial intelligence: whether safety regulations will hand a strategic victory to China. While many assume that slowing down domestic innovation is a losing move, Krishnan argues that the cost of safety is often exaggerated, yet he ultimately rejects the conclusion that regulations are harmless. He forces us to confront a complex reality where the "race" metaphor itself may be flawed, and where the devil is not just in the details, but in the specific dimensions of the future we are trying to predict.

The Hidden Cost of Compliance

Krishnan begins by dismantling the common economic argument that safety regulations are negligible. He notes that Scott argues safety rules add only a "1-2% overhead" to training costs, citing internal testing figures that seem trivial against billion-dollar budgets. However, Krishnan pushes back, pointing out that this calculation assumes safety work is strictly separable from core engineering, which is rarely true in practice.

He illustrates this with a powerful analogy from the tech sector: "When you add 'coordination friction', you reduce the velocity of iteration inside the organisation. Velocity here really really matters, especially if you believe in recursive self improvement." The author suggests that the strain of compliance on a company's culture and speed far exceeds the direct dollar cost, much like how a massive legal department at a company like Facebook creates operational drag that isn't captured in a simple percentage of the budget.

"The strain they put on the business far exceeds the 2.5% cost it puts on the output."

Critics might argue that this friction is a necessary price for stability, but Krishnan's point is that in a high-stakes technological race, velocity is a primary asset. If safety regulations slow down the iteration cycle, they could inadvertently cede ground to competitors who face fewer such hurdles, regardless of the raw cost of the rules themselves.

The Illusion of the "Fast Follower"

A central pillar of the pro-regulation argument is the belief that China is merely a "fast follower" focused on applications rather than foundational breakthroughs. Krishnan finds this claim "awfully load bearing" and potentially convenient for those who want to avoid strict oversight. He points out that Chinese leaders like Liang Wenfeng have explicitly stated their belief in superintelligence, contradicting the narrative that they are only interested in short-term deployment.

He writes, "China is also known for strategic communication in more than one area, where what they say isn't necessarily what they focus on." The author highlights that China has already demonstrated the ability to lead in sectors like electric vehicles, solar panels, and 5G, suggesting that their capacity for rapid innovation is not limited to copying others. If the Chinese government perceives a strategic opportunity, the "CCP has an extraordinary track record of redirecting capital in response to perceived strategic opportunity (and overdoing it)."

This reframing is crucial. It suggests that assuming China will remain a follower is a dangerous gamble. If they pivot to foundational research once they see the potential, the window for the US to establish a secure, safe lead could close much faster than current models predict.

The Regulatory Ratchet and Uncertainty

Krishnan is equally critical of the current regulatory landscape, noting that many proposed laws are vague and could create a "compliance morass." He observes that regulations often start with common sense guardrails but expand into an "invisible graveyard" of bureaucracy, citing the "regulatory ratchet" seen in finance and aviation. He asks a piercing question about the utility of current proposals: "Right now they ask for a combination of red-teaming (to what end), hallucination vs sycophancy (how do you measure)... These assume a very particular threat surface."

The author argues that unless regulations are highly specific with a visible return on investment, they risk becoming an albatross around the neck of innovation. He warns that "the regulatory ratchet is real... We always have common sense guardrails that creates an apparatus that then expands." This creates a scenario where the US could become the "Brussels of AI," prioritizing process over product, which might be a significant tradeoff in a global competition.

"If we relax the assumptions... we might end up in places where AI safety regulations are more harmful than useful."

The Decision Tree of the Future

Ultimately, Krishnan refuses to give a simple yes or no answer. Instead, he proposes a decision tree with multiple dimensions: takeoff speed, alignment difficulty, and the durability of the US compute advantage. He suggests that the outcome depends entirely on which "world" we end up in. In a world where "recursive self improvement" happens quickly, safety regulations could be catastrophic if they slow down the first mover. In a world where AI development is gradual, safety might be a net positive.

He concludes that the question isn't just about safety versus speed, but about the payoff function of winning or losing. "In 'mundane AI' world, we get awesome GPTs but not a god. Losing means we're Europe... In 'AI is god' world, losing is forever." This distinction forces the reader to consider that the stakes are not uniform; the cost of getting it wrong depends entirely on the nature of the technology we are building.
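Krishnan's multi-dimensional framing can be sketched as a toy decision table. This is my own illustration, not code from the essay: the payoff strings echo Krishnan's quoted phrases, while the function shape, parameter names, and labels are assumptions.

```python
# Toy sketch of the essay's decision-tree framing (illustrative only).
# The two dimensions shown are takeoff speed and who wins the race;
# the essay also lists alignment difficulty and compute-advantage durability.

def payoff(takeoff: str, winner: str) -> str:
    """Return an illustrative payoff for a (takeoff speed, race outcome) pair.

    takeoff: "fast" (rapid recursive self-improvement) or "gradual"
    winner:  "us" or "rival"
    """
    if takeoff == "fast":
        # In "AI is god" world, losing is forever.
        return "decisive, permanent advantage" if winner == "us" else "losing is forever"
    # In "mundane AI" world, we get awesome GPTs but not a god.
    return "awesome GPTs, modest lead" if winner == "us" else "losing means we're Europe"

for t in ("fast", "gradual"):
    for w in ("us", "rival"):
        print(f"takeoff={t:7s} winner={w:5s} -> {payoff(t, w)}")
```

The point of the sketch is that the cost of a safety slowdown is not a single number; it is a function of which branch of the tree the world turns out to be on.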

Bottom Line

Krishnan's most compelling contribution is his refusal to accept the binary narrative that safety regulations are either a silver bullet or a death sentence for US competitiveness. His argument is strongest when he highlights the "coordination friction" that regulations introduce, a cost often ignored in high-level economic models. However, his reliance on complex decision trees leaves the reader with a sense of uncertainty rather than a clear path forward. The key takeaway is that the specific design of regulations matters more than the mere existence of them; vague, broad rules risk slowing the US down without actually making the technology safer.

"The devil, as usual, is in the really annoying details."

Readers should watch for how specific state-level laws, like those in Colorado or California, are implemented, as these will serve as the first real-world test of whether safety measures act as guardrails or as speed bumps in the global race.

Sources

Contra Scott on AI safety and the race with China

by Rohit Krishnan · Strange Loop Canon

Scott Alexander has a really interesting essay on the importance of AI safety work, arguing it will not cause the US to fall behind China, as is often claimed. It’s very well written, characteristically so, and well argued. His argument, in a nutshell (I paraphrase), is:

US has ~10x compute advantage over China

Safety regulations add only 1-2% to training costs at most

China is pursuing “fast follow” strategy focused on applications anyway

Export controls matter far more (could swing advantage from 30x to 1.7x)

AI safety critics are inconsistent - they oppose safety regs but support chip exports to China

Sign of safety impact is uncertain - might actually help US competitiveness

I quite like this argument because I actually agree with all of the points, mostly anyway, and yet find myself disagreeing with the conclusion. So I thought I should step through my disagreements, and then what my overall argument against it is, and see where we land up.

First, the measurement problem

Scott argues that the safety regulations we’re discussing in the US add only 1-2% overhead. This is built off of METR and Apollo’s findings, around $25m for internal testing, contrasted with $25 billion for training runs. All the major labs also already spend enormous sums of money on intermediate evaluations, model behaviour monitoring and testing, and primary research to make the models work better with us, all classic safety considerations.
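A quick check of those quoted figures, using only the numbers above, shows the measured direct overhead sits an order of magnitude below even the 1-2% ceiling, which is why the argument turns on indirect costs rather than this ratio:

```python
# Figures as quoted: ~$25M for internal safety testing,
# against ~$25B for frontier training runs.
testing_cost = 25_000_000
training_run = 25_000_000_000

overhead = testing_cost / training_run
print(f"{overhead:.2%}")  # 0.10% -- well under even the "1-2%" ceiling
```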

This only holds if the safety regulation based work, hiring evaluators and letting them run, is strictly separable. Which is not true of any organisation anywhere. When you add “coordination friction”, you reduce the velocity of iteration inside the organisation. Velocity here really really matters, especially if you believe in recursive self improvement, but even if you don’t.

This is actually visible in ~every organisation known to man. Facebook has a legal department of around 2000 employees, doubled since pre-Covid, of a total employee base of 80,000. Those 2000 are quite likely not disproportionately expensive vs the actual operating expenditure of Facebook. But the strain they put on the business far exceeds the 2.5% cost it puts on the output. There’s a positive side to this argument: they will also prevent enough bad things from happening that the slowdown is worth it. Presumably Facebook themselves believe this, which is why they exist, but it is very much not as simple as comparing ...
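For reference, the headcount share behind that 2.5% figure, using only the numbers quoted in this paragraph:

```python
# Facebook figures as quoted above: ~2,000 legal staff
# out of ~80,000 total employees.
legal_staff = 2_000
total_employees = 80_000

share = legal_staff / total_employees
print(f"{share:.1%}")  # 2.5% -- the direct cost the essay says understates the drag
```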