This is not a story about a tech CEO's ambition; it is a story about a critical national security vulnerability hidden in plain sight. Judd Legum uncovers that the massive AI supercomputer powering the Pentagon's latest contracts relies on thousands of tons of Chinese-made electrical transformers, components explicitly flagged by intelligence agencies as potential backdoors for sabotage. For a reader tracking the intersection of artificial intelligence and national defense, this evidence of a compromised supply chain is the missing link in understanding how the US government's most sensitive digital assets are actually built.
The Hardware Bottleneck
Legum begins by dismantling the narrative of American technological self-sufficiency. He points out that while xAI raced to build the world's largest AI supercomputer, the Colossus facility in Tennessee, it solved a critical power shortage by importing 2,218 metric tons of transformers from China. "xAI's use of Chinese-made transformers has not been previously reported," Legum writes, highlighting the opacity of the company's procurement process. This is a crucial distinction: xAI is not merely running a commercial venture; it is a government contractor working on "prototype frontier AI capabilities to address critical national security challenges."
The argument gains weight when Legum connects this hardware to existing intelligence warnings. He cites a report from Gladstone AI, a firm that consults for the US government, which warned that "the overwhelming majority of transformer substations contain components that were made in China and can be used as back-doors for sabotage operations." The framing here is stark. Legum suggests that the administration's reliance on xAI for defense work has inadvertently created a single point of failure. As the report warns: "If a superintelligence project were kick-started under nominal conditions, unsecured supply chains for AI hardware, as well as electrical and cooling infrastructure could embed physical CCP trojan horses deep into the data centers that house some of the most national security-critical technology America will ever build."
Critics might argue that the risk of a physical "trojan horse" in a transformer is theoretical and that modern cybersecurity measures can isolate such threats. However, Legum counters this by noting that the Department of Energy has already seized Chinese transformers suspected of containing hardware capable of disrupting the national grid, and that rogue communication devices have been found in other Chinese-manufactured energy components. The evidence suggests this is not paranoia, but a documented pattern of vulnerability.
A Pattern of Negligence
The commentary shifts from the hardware itself to the culture of the company building it. Legum contrasts xAI's approach with its peers, noting that unlike Anthropic or OpenAI, xAI "does not include any supply chain security protocols in its risk management framework." This omission is presented not as an oversight, but as a feature of a company prioritizing speed over safety. Legum writes that xAI's safety advisor, Dan Hendrycks, "ignored questions about the company's cybersecurity practices and would not say whether xAI holds regular exercises to test for infrastructure vulnerabilities."
This lack of transparency is compounded by a refusal to share threat intelligence. Legum notes that xAI "does not report adverse events, security breaches, or cybersecurity threat intelligence to relevant governments." The implication is clear: the company is operating a critical piece of national infrastructure with a closed-door policy that defies standard defense contracting norms. As Legum puts it, "xAI's less rigorous approach to procurement aligns with its broader safety practices, which some AI experts have described as inadequate."
The stakes are raised further when Legum details a lawsuit xAI filed against a former subcontractor, Aleksandr Shulgin, described in the filing as a "foreign national" with Russian ties. The lawsuit alleges that Shulgin took unauthorized photos of the data center's interior. "It appears these actions may be part of a larger nefarious plan," the company's attorneys wrote. Legum uses the incident to show that the facility is already a target, and it serves as a microcosm of the broader risk: if a single technician can breach physical security, what happens when a state actor exploits a hardware backdoor?
The Institutional Dilemma
The piece concludes by examining the government's role in this ecosystem. Despite the risks, the Department of War awarded xAI a contract worth up to $200 million, and the General Services Administration approved federal agencies to use xAI's chatbot, Grok. Legum highlights the tension here: the executive branch is increasingly reliant on a private entity that has demonstrated a willingness to bypass supply chain security norms. "The US government, meanwhile, has become increasingly reliant on xAI's services," Legum observes, noting that the Department of Homeland Security has reportedly used customized versions of Grok since at least May.
This reliance creates a paradox: the government is outsourcing critical national security capabilities to a company that appears to treat security as an optional add-on rather than a foundational requirement. Legum's reporting forces the question of whether the administration was aware of the extent of the Chinese hardware dependency before signing these multimillion-dollar contracts. The evidence suggests that while the government warns about Chinese components in the grid, it is simultaneously funding a supercomputer built with those exact components.
Bottom Line
Judd Legum's reporting delivers a devastating critique of the current AI industrial complex: the race for supremacy has blinded key players to the most basic principles of national security. The strongest part of the argument is the concrete evidence of more than 2,200 metric tons of Chinese hardware in a Pentagon-linked facility, a fact that transforms abstract cybersecurity fears into a tangible physical risk. The biggest vulnerability in the current system is the assumption that speed of deployment justifies compromising supply chain integrity. The reader should watch for whether the Department of War will audit xAI's infrastructure, or whether the drive for AI dominance will continue to outpace the need for security.