Minimum standards for taking AI seriously

Hamilton Nolan does not care whether artificial intelligence turns out to be the greatest technological revolution since the printing press or the greatest speculative bubble since the dot-com era. He considers the debate beside the point, and his argument is worth reading precisely because it refuses to play along. The real question, he writes, is not what AI will become but what happens to ordinary people while the tech industry figures it out.

The Noise Versus the Signal

Nolan observes that the public discussion around artificial intelligence has hardened into predictable tribal positions. On one side sit the investors and chief executives promising abundance. On the other sit skeptics who distrust the messengers so thoroughly that they dismiss the message entirely. Both postures are, in his view, self-defeating.

"A thick cloud of hustlers, grifters, and the greediest monsters on earth surround the AI industry like flies surround a butchered corpse," Nolan writes. "Sure. This has been true with all new technologies. At the same time, notwithstanding the great volume of bullshit issuing forth from this cloud of self-interested actors, the underlying technology itself—railroads, electricity, the internet, whatever—does often have profoundly transformative effects on the world."

He draws an unflattering parallel to the housing debate. Proponents of liberalized zoning were long dismissed by progressives who found the messengers irritating or ideologically compromised. The policy was sound even when the advocates were insufferable. Eventually, the ideas broke through the tribal noise. Nolan argues that the same discipline should apply to artificial intelligence: separate the argument from the arguer, and evaluate both on merit.

As Anthropic chief executive Dario Amodei has acknowledged in his widely discussed essay on the adolescence of technology, the economic disruption ahead could be unprecedented. That an industry insider is sounding the alarm does not invalidate the alarm.

Disaster Planning, Not Crystal Ball Reading

Nolan's central framework is disarmingly simple. Treat artificial intelligence as a disaster-planning problem. Hope for the best. Prepare for the worst. If the worst never materializes, no harm done. If it does and no preparation was made, the cost is catastrophic.

"In recent weeks, two pieces of writing about AI's future impacts have drawn the most attention," Nolan notes, citing essays by Amodei and another executive, Matt Shumer, both warning of mass white-collar job displacement. "Surely this will do the trick." He is being wry. The fact that industry insiders are publicly fretting about their own industry should count for something, regardless of how cynical one's view of their motives might be.

Amodei's warning is especially stark: artificial intelligence could displace half of all entry-level white-collar jobs within one to five years. He predicts GDP growth of ten to twenty percent annually alongside wealth concentration so extreme that a handful of individuals would command appreciable fractions of the global economy. "I am sympathetic to concerns about impeding innovation by killing the golden goose that generates it," Amodei writes, "but in a scenario where GDP growth is 10–20% a year and AI is rapidly taking over the economy, yet single individuals hold appreciable fractions of the GDP, innovation is not the thing to worry about. The thing to worry about is a level of wealth concentration that will break society."

Nolan distills his standard to this: "You don't need to like annoying tech people, you don't need to believe AI CEOs are motivated only by the public good, you don't need to use or 'like' AI, you don't need to have a crystal ball, you don't need to be a technological expert, you don't need to have an entire philosophical debate over the nature of consciousness. You just need common sense to see the direction this is going."

The Policy Prescription

Here is where Nolan's argument becomes most consequential. The policies he proposes to guard against the worst artificial intelligence scenarios look exactly like the policies that progressives have been advocating for decades. Unionize the workforce. Strengthen the social safety net. Tax concentrated wealth. Regulate dangerous industries.

"Unless you believe that the US government is capable of independently solving these problems (while being in the industry's pocket), it is clear that there must be some entities that exist for the sole purpose of protecting the workers who are exposed to having their livelihoods destroyed by AI," Nolan writes. "Those entities are unions."

He notes that fewer than ten percent of American workers currently have union representation, leaving the vast majority without any organized defense against unilateral technological disruption. Union contracts, he argues, are the very front lines of artificial intelligence regulation.

On wealth concentration, Nolan is blunt. "America's wealth inequality is already a crisis. AI could make it significantly worse. At some point democracy will fully crumble. Not allowing people to get that rich is necessary if we want to avoid having unaccountable godlike dictators."

And on regulation, his language intensifies: "There are more safety regulations involved in making a car than there are in releasing an AI model to the public that will, maybe, help people produce biological weapons or mass-produce child porn or who knows what else. This is an insane situation."

Counterpoints

Critics might note that Nolan's policy list, while sound, is unlikely to materialize at the scale or speed he envisions. The same political gridlock that has blocked labor law reform and wealth taxation for decades does not suddenly dissolve because a new technology makes those reforms more urgent.

Critics might also observe that treating artificial intelligence as a disaster-preparedness problem sounds reasonable until one considers that preparedness measures — heavy regulation, preemptive taxation — could slow development in ways that disadvantage American firms relative to less-regulated international competitors. The coordination problem here is genuinely difficult.

A third criticism: Nolan acknowledges but ultimately waves away the question of whether the current wave of artificial intelligence hype is, at its core, a financial bubble. If it is, the resulting crash could cause enormous economic damage before any of his proposed safeguards ever come online.

Bottom Line

Nolan's argument succeeds because it refuses the trap of either breathless techno-utopianism or reflexive dismissal. The policies he recommends are not radical inventions for a radical new era. They are the unfinished work of ordinary governance — labor protection, social insurance, progressive taxation, regulatory oversight — applied to a technology that makes their absence more dangerous than ever.

Deep Dives

Explore these related deep dives:

  • YIMBY

    The article mentions YIMBYism as a parallel debate about housing policy that faced similar polarization.

  • Dot-com bubble

    The article discusses hype cycles and overhyping by Silicon Valley and Wall Street, requiring historical context on tech bubbles.

Sources

Minimum standards for taking AI seriously

by Hamilton Nolan

Is AI going to revolutionize the world, upend the economy, and propel us into an unprecedented age of abundance, or, alternately, dystopia? Or is it all just a big bubble, a historic financial folly, a mania built on an overhyped fancy pattern-matching machine?

I do not know. But I can say confidently that the outcome of the AI era will be somewhere on the spectrum between the above options. The precise answer depends not only on technical matters of computing progress that I am not qualified to assess, but also on the chaotic swirl of global events that make the future difficult for anyone to predict. Partly because of this uncertainty—and also because of existing political tribalism, and also because of the history of Silicon Valley and Wall Street overhyping things for their own benefit, and also because of the nature of capitalism—the public discussion around how we should collectively prepare for our AI future has become polarized in an unhelpful way. On one side we have people who have a ton of money invested in AI saying “it will change everything” and on the other side we have people who hate those types of people saying “I doubt that, because you guys are greedy, untrustworthy liars with an enormous personal stake in getting everyone to believe the hype.”

Let me point out that both things can be true. A thick cloud of hustlers, grifters, and the greediest monsters on earth surround the AI industry like flies surround a butchered corpse. Sure. This has been true with all new technologies. At the same time, notwithstanding the great volume of bullshit issuing forth from this cloud of self-interested actors, the underlying technology itself—railroads, electricity, the internet, whatever—does often have profoundly transformative effects on the world. (Even if it takes longer and unfolds in a different way than the grifters said.) This, in fact, is the most likely path for AI. The good news is that, unless you are a tech investor or tech journalist or AI company engineer, the precise specifics of how and when every advance occurs and who wins the race to each specific benchmark and how much money they make off of it… do not really matter. What matters to the vast majority of people in America and around the world is: How will AI change the economy and the distribution of power in our society? And, if ...