Mantic Monday: Groundhog day

Prediction Markets Shrug Off the Pentagon

Scott Alexander opens this installment of Mantic Monday with the Pentagon's unprecedented designation of Anthropic as a "supply chain risk" -- a label never before applied to an American company. The move, orchestrated by Secretary of War Hegseth, was widely interpreted as political retaliation. But the prediction markets barely flinched.

Upon the "supply chain risk" designation, predicted value at IPO fell from about $550 billion to $475 billion - then, after a day or two, went back up to $550 billion. No effect!

Alexander walks through the logic with evident satisfaction. Hegseth's own actions undercut his case: he admitted the government would keep using Anthropic for six months, and signed a nearly identical contract with OpenAI. The markets gave the designation about a 28% chance of surviving legal challenge. Even in that scenario, the damage would be limited.

The key distinction, as Alexander explains, is between Hegseth's sweeping rhetoric and the actual legal scope of the designation. Hegseth's tweet implied that any company doing business with the military would have to sever ties with Anthropic entirely -- a potentially fatal blow given that Amazon, Google, and Microsoft host Anthropic's compute infrastructure. But the legal reality is far narrower.

In other words, the "supply chain risk" designation only means that companies can't use Anthropic products in their specific Department of War contracts. So if Amazon is doing 95% normal civilian cloud compute stuff, and 5% special government contracts, only 5% of their contracts are affected. This is trivial!

Alexander notes that all three tech giants own roughly 10% stakes in Anthropic and have billions riding on its success. The notion that Hegseth would pick a fight with Amazon, Google, and Microsoft simultaneously strains credulity. The most likely outcome: lobbyists have a quiet conversation, the letter of the law gets enforced, and Anthropic loses perhaps 10% of revenue while gaining something money cannot easily buy -- brand recognition that finally rivals ChatGPT.

Claude went from #120 on the App Store in January, to #1 this weekend, apparently driven by people who heard about the Pentagon standoff and were impressed by their principled stance.

Alexander is impressed by the markets' boldness here, and says so directly. One might push back slightly: prediction markets are good at pricing known risks, but less reliable when the risk is an unpredictable political actor escalating beyond rational self-interest. Hegseth may not be rational, and the markets might be underpricing tail risk from a vindictive administration.

The 2026 Midterms and the Specter of Chaos

The article pivots to the November 2026 midterms, where prediction markets give Democrats an 80% chance of winning the House and a 20-40% chance of taking the Senate. Standard midterm dynamics plus low presidential approval ratings make this unremarkable. What makes it remarkable is the Republican effort to change the rules before the game is played.

Alexander identifies two separate initiatives. The SAVE Act would require passports, birth certificates, or Real IDs for first-time voter registration. Then there are rumors of something far more dramatic: an executive order declaring a national emergency and seizing federal control of elections.

The order would say that foreign countries have been rigging US elections (some commenters speculate that maybe Maduro could be granted clemency for "admitting" to this), and respond with a series of extreme measures. These would include banning voting machines, restricting vote-by-mail, and requiring all voters to re-register before the election.

Alexander lays out three scenarios if any of this survives judicial review. Pure chaos, where the logistics of re-registering every voter in six months simply overwhelm the system. A blue wave, since strict requirements disproportionately filter for high-motivation, high-education voters who skew Democratic. Or a cynical middle path, where enforcement varies by district to favor Republicans.

The irony of restrictive voting measures backfiring on their proponents is well-established political science, and Alexander handles it deftly. Democrats are more likely to own passports. Liberal women are less likely to have changed their surnames. The structural advantages compound.

Still, the markets are relatively sanguine. Metaculus gives a 92% chance that international observers will consider the election fair. Alexander summarizes the forecaster consensus neatly:

I think the best summary of forecasters' views on the midterms is that there's a decent chance (~50%) Trump tries to change the rules around mail-in ballots, and a modest chance (~25%) he tries something more extreme - but that it probably won't make much difference, the election will still be considered fair by international observers, and Democrats will still win.

That roughly 25% chance of Trump trying "something more extreme" deserves more alarm than Alexander gives it. He acknowledges the worst-case chain -- Trump refuses to accept the midterm results, Democratic protests turn unruly, martial law follows -- but treats it almost as an afterthought. A one-in-four chance of an escalation that could end in martial law is not a reassuring number for a Western democracy, even if the markets assign higher probability to normalcy.

Groundhogs, Tortoises, and Statistical Mischief

In the lightest section of the piece, Alexander turns his attention to the forecasting accuracy of groundhogs. Punxsutawney Phil, the famous prognosticator, performs below chance. But Staten Island Chuck boasts an 85% accuracy rate.

That's p = 0.0002 - plenty significant even after a Bonferroni correction for multiple magic groundhogs.

The explanation is delightfully mundane. Chuck predicts spring on 25 of 31 occasions. If early springs are the default on Staten Island, his record is little better than what a fixed "always spring" forecast would achieve -- a stopped clock that happens to sit at the right time. Alexander pairs this with the case of Mojave Max, a Las Vegas tortoise who stubbornly predicts long winters and manages only a 20% success rate in the desert. The section is a compact lesson in base rates dressed up as animal comedy.
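The base-rate point is easy to check with a binomial tail probability. A minimal sketch, using stdlib Python only (the 26-of-31 record is an assumed reading of the quoted 85% figure, chosen for illustration; the exact counts in the source may differ):

```python
from math import comb

def binom_tail(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Illustrative record: 26 correct calls out of 31 (~85% accuracy).
# Against a coin-flip null, the record looks extraordinary:
p_fair = binom_tail(26, 31, 0.5)   # on the order of 1e-4

# But if early springs happen ~80% of the time on Staten Island,
# a groundhog that always says "spring" matches this record easily:
p_base = binom_tail(26, 31, 0.8)   # an unremarkable tail probability
```

The same observed record is a four-sigma miracle under one null hypothesis and entirely ordinary under the other, which is the whole of Alexander's point: the significance test is only as good as the base rate you test against.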

Iran and the Limits of Regime Change

Alexander surveys the prediction markets on the Iran airstrikes with characteristic efficiency. The markets give less than 50-50 odds that the current campaign topples the regime. The killing of the Ayatollah did not trigger the mass uprising America had hoped for, and the remaining clerical establishment appears ready to appoint a successor and carry on.

A hardline cleric named Alireza Arafi is the weak favorite to become the next Supreme Leader. There is a 15% chance the position is abolished before a successor is named. The Strait of Hormuz is under threat, with forecasters thinking Iran can probably reduce traffic below 20% of normal levels. Markets expect 6 to 100 U.S. casualties. And Polymarket gives a 40% chance that the fall of the current regime would lead to reinstating the monarchy under Reza Pahlavi, heir to the Shahs.

Alexander offers a clean summary of the strategic picture: America kills leaders and bombs military sites, hoping to provoke collapse. Iran holds on and inflicts enough pain to make America lose interest. Most likely resolution: within a month. But the tail stretches to next year.

The Hedging Revolution

The most forward-looking section covers MNX, a new cryptocurrency-based futures exchange from Manifold Markets founders Stephen Grugett and Ian Philips. Alexander uses this as a launching pad to discuss Vitalik Buterin's recent essay on the structural problems with prediction markets.

Prediction markets have two types of actors: (i) "smart traders" who provide information to the market, and earn money, and necessarily (ii) some kind of actor who loses money.

Buterin identifies three kinds of money-losers: naive traders with bad opinions, information buyers who subsidize markets as a public good, and hedgers who accept negative expected value in exchange for reduced risk. The first category, Buterin argues, creates perverse incentives for platforms to cultivate communities of people with bad judgment. The second has a public goods problem. The third -- hedging -- is the sustainable path forward.

Alexander explains MNX's thesis with clarity. Polymarket owns gambling. Metaculus owns information aggregation. Hedging is the unclaimed territory, and it sits adjacent to a two-trillion-dollar derivatives market. The bet is that AI agents will soon make it trivially cheap to construct sophisticated hedge portfolios, turning what was once the province of elite quantitative funds into something any investor in a seaside resort can access.

If you invest in a seaside resort, your AI can figure out the chance of a hurricane, and of a tsunami, and of an oil spill, and of a thousand other things, and buy a tiny share of each on the prediction markets, and feel confident that you're expressing your exact thesis (seaside resorts are good) separate from any acts of God that might disturb it.
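The arithmetic behind that vision is simple enough to sketch. Everything below is invented for illustration (MNX's actual contract mechanics are not described in the piece); the sketch assumes binary prediction markets that pay $1 per YES share and price shares at the market probability:

```python
# Hypothetical hedge sizing for the seaside-resort example.
# All names and numbers are assumptions, not data from the article.

RESORT_VALUE = 10_000_000  # assumed exposure, in dollars

# (event, market probability, fraction of resort value lost if it happens)
risks = [
    ("hurricane", 0.08, 0.40),
    ("tsunami",   0.01, 0.90),
    ("oil spill", 0.03, 0.25),
]

def hedge_shares(value: float, risks: list) -> list:
    """Buy enough $1-payout YES shares on each event to cover its loss.

    With shares priced at the market probability, the premium paid for
    each hedge equals the expected loss from that event.
    """
    plan = []
    for name, prob, loss_frac in risks:
        loss = value * loss_frac
        shares = loss           # $1 payout per share -> need `loss` shares
        cost = shares * prob    # up-front premium at the market price
        plan.append((name, shares, cost))
    return plan

plan = hedge_shares(RESORT_VALUE, risks)
total_premium = sum(cost for _, _, cost in plan)
```

At fair market prices the total premium ($485,000 here, under 5% of the position) exactly equals the expected loss, so the hedger pays roughly fair value plus fees -- which is precisely Buterin's point about hedgers being the sustainable money-losers: they accept a small negative expected value in exchange for a thesis purified of tail risk.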

The vision is compelling, though the track record of crypto prediction market ventures should temper enthusiasm. Alexander himself notes that Augur raised five million dollars and never shipped usable software, and FTX failed to get prediction contracts off the ground despite billions in resources. MNX's advantage is timing: better crypto infrastructure, vibecoding, and a founder who already built a successful prediction market platform.

Bottom Line

This Mantic Monday covers an unusually wide range -- from Pentagon politics to groundhog statistics to the future of decentralized finance -- and Alexander ties it together with the thread that runs through all his prediction market writing: the markets are often smarter than the pundits. The Anthropic analysis is the strongest section, offering a genuinely counterintuitive take backed by market data. The midterms section is thorough but perhaps too sanguine about the martial law scenario. The Iran coverage is workmanlike, the groundhog section is charming, and the MNX discussion points toward a future where prediction markets graduate from gambling curiosities to genuine financial infrastructure. Whether that future arrives depends on whether the hedging thesis can attract enough capital to matter -- a question that, fittingly, deserves its own prediction market.

Sources

Mantic Monday: Groundhog day

by Scott Alexander · Astral Codex Ten
