Anthropic: Blacklisted at Home, Despised Abroad
On February 23, 2026, Anthropic was summoned to the Pentagon and simultaneously published a blog post accusing three Chinese artificial intelligence labs — DeepSeek, Moonshot/Kimi, and MiniMax — of industrial-scale model distillation. Within days, President Trump labeled the company a "radical left, woke company" and blacklisted it from all federal contracts. Secretary of War Hegseth threatened to designate Anthropic a national security supply chain risk, a classification previously reserved for foreign adversaries like Huawei. ChinaTalk's reporting surveys how Chinese and Taiwanese media digested this extraordinary week, and the picture that emerges is one of a company caught in a geopolitical vise with no good exits.
The Schadenfreude Circuit
The dominant Chinese reaction was mockery. The article notes that Anthropic had already earned the nickname "AI Thanos" after its February product releases cratered software stocks, with IBM dropping 13 percent and CrowdStrike falling 6.5 percent. When the company that had pushed hardest in Washington for compute restrictions on Chinese firms suddenly found itself on the receiving end of the same punitive apparatus, Chinese social media erupted with irony.
"Anthropic, which has done more than any other Western AI company to frame China as a threat, may now be given the same 'supply chain risk' designation historically reserved for Chinese companies like Huawei."
The schadenfreude is understandable. Anthropic had banned Chinese-controlled companies from its services in September 2025, reportedly labeled China an enemy state in internal documents, and lobbied aggressively for export controls. That same company then got blacklisted by its own government — not for being too hawkish on China, but for being insufficiently obedient to the Pentagon.
Critics might note that Chinese commentary conveniently elides the substantive question of whether distillation actually occurred. The article frames the accusations as "a bad-faith political attack dressed up as a security concern," but the technical evidence Anthropic presented — tracking API query patterns to specific researchers at Chinese labs — has not been refuted so much as recontextualized as proof of surveillance overreach.
The Open-Source Counterpunch
Several Chinese outlets seized on Anthropic's distillation report as an inadvertent advertisement for open-source AI. The logic is straightforward: if a closed-source provider can monitor your queries closely enough to identify individual researchers by their usage patterns, then relying on closed-source services means surrendering privacy and autonomy to the provider's discretion.
"Anthropic, intending to attack its competitors, inadvertently became the most powerful advertisement for open-source AI. Their actions demonstrated to everyone that under the architecture of closed-source AI services, your privacy, your autonomy, and your right to know are all unprotected."
The ChinaTalk authors note that this argument is "a bit presumptuous," since open-source models also run API businesses with comparable visibility into customer behavior. The real distinction is self-hosting — running a model locally with no calls back to the developer. That is a genuine architectural difference, and the Chinese commentary lands a point here: Anthropic's detailed forensic capabilities make the case for local deployment more compelling than any open-source advocate's white paper ever could.
Consistent Idealism or Convenient Positioning?
Not all Chinese analysis was dismissive. The piece highlights a more sophisticated reading from Xinzhi Observatory, a column on the nationalist outlet Guancha, which argues that Anthropic's behavior toward both China and the Pentagon is internally coherent. The company's worldview descends from effective altruism (EA) and longtermism — philosophical movements concerned with existential risk from advanced AI systems.
"[Amodei's] core argument is not 'a particular country is dangerous' but 'highly capable AI is inherently dangerous.' In his view, regardless of whose hands a model falls into, the absence of constraints is sufficient for it to be weaponized for mass surveillance or autonomous weapons systems."
This reading is generous but defensible. Anthropic's Responsible Scaling Policy does draw red lines in both directions — restricting access to Chinese labs and refusing to strip safety guardrails for the Pentagon. The problem, as the article implicitly acknowledges, is that principled consistency offers no political protection. Anthropic alienated both superpowers simultaneously, which is either admirable integrity or catastrophic strategy depending on whether the company survives the next twelve months.
The Structural Argument
The most consequential thread in the Chinese commentary concerns what the Anthropic crisis reveals about American AI governance. TMTPost, a leading Chinese business and technology outlet, declared the dispute a watershed moment.
"This marks the moment when the covert power struggle between Washington and Silicon Valley — over AI control, the limits of military applications, and tech ethics — finally dropped all pretense and broke into open, no-holds-barred confrontation."
The Chinese framing is that the United States is discovering — messily, publicly — what China resolved structurally years ago: frontier AI is a dual-use technology, and the state's claim on it supersedes any company's ethical preferences. In China, commercial AI companies have never operated under the illusion that they could refuse a government request for military cooperation or domestic surveillance data. The entire Anthropic drama, from this perspective, is the American system catching up to a reality Beijing internalized long ago.
"[…] idealists like Anthropic who try to walk a tightrope between commerce and ethics are destined to be under the wheels of power […] In the track of artificial general intelligence (AGI), there has never been a so-called 'neutral zone.'"
A counterargument is that the messiness is the feature, not the bug. The fact that an American AI company can publicly refuse the Pentagon, that 550 employees from Google and OpenAI signed an open letter in support, that the dispute plays out in courts and press conferences rather than behind closed doors — all of this reflects a system where power is contested rather than silently imposed. The Chinese commentary treats state capture of AI as an inevitability. It may instead be a choice, and the American system is still making that choice in public.
The Taiwan Angle
The sharpest analysis comes from Taiwan, where the stakes are not abstract. Pei-Shiue Hsieh at Taiwan's Institute for National Defense and Security Research (INDSR) frames the dilemma in starkly realist terms: non-democratic regimes enjoy an "asymmetric advantage" in military AI because they face no internal friction over ethics, oversight, or corporate refusal.
"Let us posit a scenario here: Anthropic's resistance succeeds and triggers a chain reaction, Silicon Valley's tech mainstream reverts to its stance of withdrawing from defense contracts, and the U.S. military's AI development is severely impeded as a result. Meanwhile, China is able to integrate AI into all manner of military R&D without restraint, ultimately achieving an overwhelming advantage in military AI — particularly in 'lethal autonomous weapons.' Would such a world be safer?"
This is the hardest question in the article, and ChinaTalk lets it sit without easy resolution. Another Taiwanese commentator, writing under the name "Future Lin," zeroes in on the talent implications: if the U.S. government can designate any tech company a national security threat for refusing military demands, the country's ability to attract top global AI researchers — many of whom chose America precisely because they could build safety-focused companies without state coercion — is fundamentally compromised.
"For decades, the U.S. tech industry's advantage has partly stemmed from its relatively independent operating logic — the government can procure, but it cannot make unlimited demands. This boundary is one of America's invisible assets for attracting top global AI talent."
The Taiwanese social media commentary is blunter. One Threads user called Anthropic's stand simply foolish, arguing that a company without Google's infrastructure or hardware capabilities had no leverage to pursue a "tech-lefty agenda." A PTT commenter reduced the entire debate to one sentence: if the People's Liberation Army asked DeepSeek founder Liang Wenfeng to restrict military use of his model, would he dare refuse?
Bottom Line
ChinaTalk's piece is strongest in its sourcing breadth — pulling from state media, independent analysis firms, Taiwanese defense researchers, and social media across multiple platforms and languages. The juxtaposition of Chinese schadenfreude, Taiwanese realism, and the Xinzhi Observatory's more charitable reading of Anthropic's motives creates genuine analytical depth. The article is weakest where it lets Chinese framing go unchallenged: the argument that state control of AI is an inevitable structural reality, rather than a political choice with its own costs and failure modes, deserves more pushback than it receives. The piece also understates how much the distillation accusations — the technical substance of whether Chinese labs actually stole Anthropic's model outputs at scale — matter independently of their political timing. Still, as a survey of how the most consequential AI governance crisis of 2026 landed across the Chinese-speaking world, it is thorough, well-translated, and genuinely illuminating.