The Flourishing Argument
A provocative challenge to longtermism's dominant strain has emerged from within the movement itself. Bentham's Bulldog, writing for Forethought, argues that survival-focused efforts may be misallocating attention: the greater threat is not extinction but failing to achieve a near-best future.
Value Is Fragile
Bentham's Bulldog writes, "most future value is lost from failure to get near-optimal futures, and yet almost no one is working to get a near-best future." The core claim is that value is fragile: only a small fraction of possible futures approach optimality, and no inevitable force guarantees even a very good one.
The argument draws on historical induction: no society in history has been near-optimal. For most of history people owned slaves, leaving society worse than it might otherwise have been; today humanity runs what the essay calls giant torture farms, mistreating hundreds of billions of animals. We are nowhere near even trying to maximize value.
As Bentham's Bulldog puts it, "In order to have a near-best future, it isn't enough to get some things right. We need to get right the answer to every important moral question. And that's genuinely difficult."
The stakes are staggering. Losing half of future value, say, by using only half of accessible space resources optimally, incurs the same expected value loss as a 1/2 chance of extinction. That would be a catastrophe vastly worse, by many orders of magnitude, than all bad things that have ever happened.
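The equivalence is simple expected-value arithmetic. Writing $V$ for the total value of a near-best future (a symbol introduced here for illustration, not one the essay uses):

$$\underbrace{1 \times \tfrac{V}{2}}_{\text{certain loss of half of future value}} \;=\; \tfrac{V}{2} \;=\; \underbrace{\tfrac{1}{2} \times V \,+\, \tfrac{1}{2} \times 0}_{\text{coin-flip chance of extinction}}$$

Both scenarios destroy half of all expected future value, which is why the essay treats persistent suboptimality as seriously as extinction risk.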
"If there was an alien monster that was planning to swallow half the world—and continue swallowing at regular intervals so that in total half of future value got swallowed—seems like someone should look into stopping it."
The Lock-In Thesis
Critics might note that present-day actors cannot reliably affect the far future, just as cave men could not have steered today's world (the "cave men" objection). Bentham's Bulldog counters that we may be entering a period of persistent path dependence: through an intelligence explosion, a single entity or a small number of entities could capture the lion's share of global power and shape how the future goes for billions of years.
Bentham's Bulldog writes, "At some point, the world will decide the future plan for the universe." Such a pivotal action would continue without much change for billions of years, or alternatively, lock in a mechanism to block space development irreversibly.
Digital beings could live indefinitely, and a digital being could be made to carry out a leader's will even after the leader dies; AGI could enforce a given plan for a very long period. All of this raises the odds that the ideology of the present will persist into the future.
Practical Levers
The essay identifies concrete actions, beginning with "working to prevent a post-AGI autocracy." Specific measures include stopping AI-enabled coups, preserving the democratic structure of current democracies, and slowing autocracies' AGI development by blocking chip sales to China.
Improving space governance also ranks high: space holds almost all of the accessible resources, and early control could let one actor seize power. Law could require sunset clauses on certain kinds of commitments between countries, preventing AI from enforcing current priorities indefinitely.
Regulating AI to ensure it is safe and to bar any one actor from seizing control matters, as does giving AI robustly good moral values and preventing scheming. Securing rights for AIs is likewise important, since in expectation almost all future beings are digital.
Critics might note that only around ten people currently work on boosting the odds of a near-best future over the billions and trillions of years to come. Bentham's Bulldog argues that this neglectedness makes progress easy, likening the opportunity to being one of a few thinkers planning early-stage nuclear strategy, to Locke's outsized influence as one of a small number of thinkers writing about how democracies could develop, and to the way Bostrom's Superintelligence shaped the AI conversation.
Bottom Line
The flourishing argument reframes longtermism's priority hierarchy but inherits the same epistemic vulnerabilities it critiques. Steering toward a near-best future requires moral knowledge we do not possess, and the lock-in thesis assumes present actors can reliably shape outcomes across cosmic timescales. Yet the neglectedness claim carries weight: if value is indeed fragile and the margin for optimal futures is narrow, concentrating effort on flourishing rather than mere survival may be the higher-leverage intervention.