
George Hotz vs Eliezer Yudkowsky

"George Hotz vs Eliezer Yudkowsky" is a debate that feels like watching two people argue about whether to worry about a tsunami while standing on different sides of the ocean. The piece, drawn from a live Twitter and YouTube exchange moderated by Dwarkesh Patel, captures something rarely seen in AI safety discussions: two sophisticated thinkers disagreeing not just about conclusions, but about the very framework for thinking about intelligence itself.

The Framework Debate

The most distinctive claim in this piece isn't about AI at all; it's about what kind of argument we're even allowed to make. George Hotz, who opens the debate by comparing Yudkowsky to the existentialists he read in high school and who credits Harry Potter and the Methods of Rationality as formative, argues that we can't predict how AI will develop: "I don't think either of these stories is right." He's pushing back against both the utopian and dystopian narratives, the idea that AI will save us or kill us. This lands because it reveals how little we actually know about the pathway to advanced intelligence.


As Dwarkesh Patel puts it, "this is an extraordinary claim and it requires extraordinary evidence," a phrase that captures Hotz's skepticism of what he sees as speculative catastrophe scenarios. Hotz isn't saying recursive self-improvement is impossible; humanity has done it every time we've used tools to make better tools. What he's questioning is whether AI will suddenly crack the secret overnight and flood the world with diamond nanobots.

The Timeline Question

The debate's core tension centers on timing: how fast AI development proceeds, and whether slow progress still kills us. Yudkowsky makes a crucial point: "if you've got a trillion beings that are, you know, sufficiently intelligent and smarter than us and not super moral, I think that's kind of game over for us." Even if the timeline is slow, a 10-year process rather than 10 hours, he argues we're still dead, because the endpoint matters more than the speed. Hotz doesn't so much dispute the stakes as insist they be stated explicitly: "I think that should be said out loud for the viewers."

If you are at the end point where there's this, like, large mass of intelligence that doesn't care about you, I think that we are dead, and I worry that our successors will go on to do nothing very much worthwhile with the galaxies.

This is the piece's most striking claim: even slow AI development could be catastrophic if it leads to intelligence that simply doesn't care about human flourishing. On this view, timing matters because it determines when we should "shut it down," and what exactly we'd be shutting down.

Evidence and Extrapolation

The piece effectively uses AlphaFold as a concrete example of AI progress. Hotz notes: "I don't doubt that these systems are going to get better. I do doubt that they are going to have magical or god-like properties, like solving the protein structure prediction problem from quantum field theory." Yudkowsky's counter is that AI doesn't need divine powers to be dangerous; it just needs to surpass human capability in specific domains. Hotz answers with Magnus Carlsen as an analogy: "Magnus Carlsen can't make diamond nanobots," arguing that even superhuman chess intelligence doesn't imply a general capability to create advanced technology.

The discussion of economic doubling times reveals a practical disagreement about risk. Currently the world economy doubles roughly every 30 years, possibly every 15 now. Yudkowsky asks: "at what point would you say, this is why timing matters, at what point would you say, okay, I agree: if the world economy is doubling every second, oh my God, okay, this is terrifying." This reveals how seriously both parties take the timeline question: it's not just abstract speculation but a practical matter of when humanity should respond.
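The doubling-time arithmetic behind those figures is worth making explicit. Here is a minimal sketch assuming constant compound growth; the growth rates below are back-solved illustrations, not numbers from the debate:

```latex
% An economy growing at a constant annual rate g doubles when (1+g)^T = 2,
% so the doubling time is:
\[
  T \;=\; \frac{\ln 2}{\ln(1+g)}
\]
% Illustrative values (assumed for this sketch, not quoted in the debate):
%   g = 2.4%/yr  =>  T = 0.693 / 0.0237 ~ 29 years  (the "every 30 years" figure)
%   g = 4.7%/yr  =>  T = 0.693 / 0.0459 ~ 15 years  (the "possibly 15" figure)
% Yudkowsky's hypothetical of doubling "every second" would mean roughly
% 3.15 x 10^7 doublings per year; the real disagreement is where between
% 15 years and 1 second the alarm should go off.
```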

The Alien Argument

In an unexpected twist, they discuss aliens: "if aliens were to show up here, we're dead, right?" "For them? That depends on the aliens." Yudkowsky agrees that anything crossing interstellar distances could "run you over without noticing," but Hotz pushes back on whether intelligence actually scales to that level as easily as the scenario assumes. It's a counterargument worth considering, and it emerges naturally from their different backgrounds: one sees massive intelligence gaps as existentially dangerous, the other questions whether raw computational power tells us anything about how such minds would be aligned, with us or with each other.

Bottom Line

The strongest part of this debate is its honest engagement with uncertainty. Both parties agree that "the end point is much more predictable than the pathway," a rare moment of consensus that underlines how little we actually understand about AI development trajectories. The argument's biggest vulnerability is timing: Yudkowsky admits that predictions he made around 2004 about protein structure prediction came true around 2020, but that he couldn't have predicted which form AI would take or exactly when. This uncertainty is precisely what makes the debate valuable: not for answers, but for forcing both parties to clarify their reasoning.


Sources

George Hotz vs Eliezer Yudkowsky

by Dwarkesh Patel · video

Dwarkesh Patel: Okay, we are gathered here to witness George Hotz and Eliezer Yudkowsky debate and discuss, live on Twitter and YouTube, AI safety and related topics. You guys already know who George and Eliezer are, so I don't feel an introduction is necessary. I'm Dwarkesh; I'll be moderating. I'll mostly stay out of the way, except to kick things off by letting George explain his basic position, and we'll take things from there. George, I'll kick it off to you.

George Hotz: Sure. So I took an existentialism class in high school, and you'd read about these people, Sartre, Kierkegaard, Nietzsche, and you wonder who the people like that alive today are, and I think I'm sitting across from one of them now. Rationality and the Sequences, this whole field, the whole LessWrong Cinematic Universe, have impacted so many people's lives in, I think, a very positive way, including mine. Not only are you a philosopher, you're also a great storyteller. There's two books that I've picked up that were like crack, I couldn't put them down: one was Atlas Shrugged, and the other one was Harry Potter and the Methods of Rationality. It's a great book. Now, those are fictional stories, but you've also told some stories pertaining to the real world. One was a story you told when you were younger; I remember the day I found "Staring into the Singularity" when I was 15. It starts talking about Moore's Law, and how Moore's Law is fundamentally a human law that says humans double the power of processors every two years. So once computers are doing it, it's going to be two years, but the next time it'll be one year, and then six months, and then three months, and then 1.5, and so on. This is a hyperbolic sequence; this is a singularity. That's why it's called "Staring into the Singularity." Then this document said that AI was going to do wonderful things for us: we were going to go colonize the universe, we were going to go forth and do all things till the end of all ages. Then you changed your views. Superintelligence does not imply supermorality; the orthogonality thesis, I'm not going to challenge it, is obviously a true statement. But you kept the basic premise of the story, the recursively self-improving, foom, criticality AI, but instead of saving us ...
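The "hyperbolic sequence" Hotz recaps has simple arithmetic behind it, and it is worth spelling out, since the essay's singularity claim rests on it. A minimal sketch of that reasoning, assuming the doubling interval halves at every step:

```latex
% If the first compute doubling takes 2 years and each subsequent doubling
% takes half as long (2, 1, 1/2, 1/4, ... years), the time to complete
% infinitely many doublings is a convergent geometric series:
\[
  2 + 1 + \tfrac{1}{2} + \tfrac{1}{4} + \cdots
  \;=\; \frac{2}{1 - \tfrac{1}{2}}
  \;=\; 4 \text{ years}
\]
% Unbounded capability arriving in finite time is the "singularity" of the
% essay's title. The argument stands or falls on the assumption that the
% doubling interval really does halve once "computers are doing it," which
% is exactly the premise Hotz goes on to question.
```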