← Back to Library

Greg Brockman

Rick Rubin's interview with Greg Brockman strips away the corporate gloss of the artificial intelligence sector to reveal a raw, human story about mission, betrayal, and the terrifying speed of technological evolution. Unlike standard industry profiles that focus on quarterly earnings or product roadmaps, this conversation centers on the visceral experience of watching a decade of life's work unravel in a single video call, offering a rare glimpse into the emotional architecture of a tech revolution.

The Human Cost of Scaling

Brockman, a co-founder of OpenAI, describes a profound shift in how artificial intelligence is integrated into daily life, moving from abstract tools to intimate health companions. He notes that his wife, who suffers from hypermobile Ehlers–Danlos syndrome—a condition often misdiagnosed for years due to its complex, multi-system nature—now relies on chatbots to synthesize her symptoms in ways traditional specialists cannot. "My wife has complex medical conditions... and as we put those symptoms into chat, it would be able to figure out pretty immediately," Brockman explains. This anecdote is not merely a product testimonial; it highlights a critical gap in modern healthcare where generalist AI models can sometimes outperform fragmented human specialization.


Yet the most striking revelation is Brockman's personal abandonment of his lifelong technical rituals. A self-described "late adopter" of his own company's technology, he prided himself on command-line tools like Emacs and the terminal, but admits to a sudden, total conversion. "I've been a curmudgeon... I use my terminal. I use my Emacs, like all these tools that I grew up with, and I've just abandoned all of that now," he tells Rubin. His switch to Codex, OpenAI's agentic coding tool, signals a paradigm shift: the interface to powerful AI has become capable enough that even its architects no longer reach for the tools they spent decades mastering.

"If we have some signs of life, some application that kind of works right now, one year from now you should expect it to be excellent."

Brockman argues that the industry's perception of AI development as merely "scaling up" is a dangerous oversimplification. He insists that the internal reality involves relentless, granular improvement across every layer: data quality, hardware reliability, tokenization, and the evaluations themselves. "Every single part of the process, we are always upleveling," he asserts. Part of that discipline is guarding against Goodhart's law, the tendency of any measure to stop being a good measure once it becomes a target: the team must constantly verify that the metrics being optimized still track genuine capability rather than an exploitable proxy. This focus on invisible infrastructure is crucial for understanding why the technology is advancing so rapidly. Critics might counter, however, that this internal attention to detail clashes with the external reality of rapid deployment, where safety and ethical guardrails can lag behind the sheer velocity of capability gains.

The November Crucible

The narrative pivots sharply when Brockman recounts the events of November 2023, the moment the board of directors ousted Sam Altman, the company's co-founder and CEO. Brockman describes the atmosphere not as a sudden explosion, but as the result of long-suppressed tensions. "One thing we did wrong was that we let conflict brew... if you let it fester, that is always going to be more painful down the road," he reflects. This admission serves as a stark lesson in organizational dynamics: avoiding hard conversations in high-stakes environments often leads to catastrophic outcomes.

The scene of his own removal is rendered with cinematic clarity. Brockman was coding when he received a summons to a video call. "I noticed it was the board except for Sam. I'm very surprised. So I click join and then they tell me the news," he recalls. The board informed him that Altman was fired and that he, too, was being removed from the board, though they expressed a desire for him to remain as an employee. Brockman's reaction was immediate and decisive. "I knew in that moment it wasn't right," he says. "After hanging up the call, I told my wife and said, 'We have to leave.'" He prepared her for the financial devastation, telling her to assume their equity would go to zero. "We had put off children so I could really focus on this... and that I think we both just believe in the mission so much," he adds, underscoring the depth of the personal sacrifice involved.

"The mission is bigger than the corporate entity. Yes. That the mission is something I'm pursuing that I care about and I want to pursue in the form that I believe can most accomplish it."

What followed was a spontaneous mutiny. Brockman did not wait for a new job offer; he quit immediately. "I said, 'No, Sam, we are going to start a new company,'" he recounts. The response from the broader team was overwhelming. "It was truly humbling... the number of people reached out saying, 'I don't know if you're planning on doing something next, but I want to go with you,'" he says. The office became a fortress of solidarity, with employees canceling Thanksgiving travel to stay and support the cause. "People packed the office... it was just like a thing. It was almost like... we just need to get out of here," he describes, capturing the chaotic energy of a team realizing their shared values were under threat.

The Unresolved Tension

Brockman's account of the board's eventual decision to replace the interim CEO with a different figure, only to have the company reject it, illustrates the sheer power of a unified workforce. "The company went wild and just rejected it and said this is absolutely wrong," he notes. This moment of collective action forced the board to reconsider, leading to a resolution that saw Altman and the original leadership team return. Yet, the scars remain. Brockman emphasizes that the lesson learned was the necessity of addressing conflict head-on. "Anytime anything grows so quickly, there's a lot of confusion and complication comes with the territory," he observes.

While the narrative is one of triumph, a counterargument worth considering is whether the board's initial actions were a necessary check on the company's rapid expansion or a failure of governance that nearly destroyed the institution. Brockman frames the board's move as a misunderstanding of the mission, but the fact that the conflict was allowed to "fester" for a year and a half suggests a systemic failure in communication that goes beyond simple personality clashes. The resolution, while happy, leaves open the question of whether the underlying structural tensions have truly been resolved or merely paused.

"It was the people who were just like all right like we see what happened. This is not right. We need to go do something different."

Bottom Line

Brockman's story is a powerful testament to the idea that the future of artificial intelligence is not just about code, but about the people who build it and the values they hold. The strongest part of this argument is the unflinching honesty about the human cost of rapid innovation and the critical importance of organizational culture. Its biggest vulnerability lies in the assumption that the "mission" can always be separated from the "corporate entity," a distinction that may blur as the stakes of AI development continue to rise. Readers should watch for how the lessons from this November crisis shape the governance of AI companies in the years to come, particularly as the industry grapples with the balance between speed and safety.

Deep Dives

Explore these related deep dives:

  • Goodhart's law

    Brockman describes the critical challenge of ensuring that scaling metrics and evaluations actually correlate with genuine capability improvements rather than just optimizing for the wrong signal.
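The Goodhart dynamic can be made concrete with a toy experiment. This sketch is my own illustration of the general principle, not anything from the interview: a greedy optimizer pointed at an imperfect proxy metric inflates the exploitable part of the score, while the capability we actually care about falls well short of what direct optimization would achieve.

```python
import random

def true_capability(x):
    # The quantity we actually care about: only the first component is real.
    return x[0]

def proxy_metric(x):
    # An imperfect evaluation: it rewards real capability, but also an
    # exploitable second component that contributes nothing genuine.
    return x[0] + 2 * x[1]

def hill_climb(score, steps=2000, seed=0):
    # Greedy random search: accept any small perturbation that raises
    # whatever score function it is handed.
    rng = random.Random(seed)
    x = [0.0, 0.0]
    for _ in range(steps):
        candidate = [xi + rng.uniform(-0.1, 0.1) for xi in x]
        if score(candidate) > score(x):
            x = candidate
    return x

x_direct = hill_climb(true_capability)  # optimize the real objective
x_proxy = hill_climb(proxy_metric)      # optimize the flawed measure

# Chasing the proxy pumps up the exploitable component, and real
# capability lags far behind the directly optimized run.
print(true_capability(x_direct), true_capability(x_proxy), x_proxy[1])
```

Once the optimizer is strong enough to find the exploitable term, the proxy score keeps climbing while the true objective stagnates, which is exactly the failure mode careful evaluation design tries to rule out.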

Sources

Greg Brockman

by Rick Rubin · Tetragrammaton

Tetragrammaton. One thing that's really changed over the course of 2025 was that people started to use ChatGPT for much more personal, very intimate applications. For example, my wife has complex medical conditions, including hypermobile syndrome, which took many years for us to get diagnosed, and as we put those symptoms into ChatGPT, it would be able to figure it out pretty immediately.

But the thing is that every doctor has their own specialty, rather than one doctor who can see across everything, and she uses ChatGPT to manage her health all the time. >> Great. >> For me, the funny thing is I'm actually a late adopter of our own technologies usually, and I usually test them and stress test them in all sorts of ways. >> And it's funny, for early versions of our models, I would usually try to break some of their filters, and so I would swear at them and yell at them, and my wife is always like, "He's just kidding,"

telling me to be nice to the AI. And I think that for me, I actually have been someone who's almost very set in my ways. And the first time this has changed is very recently with Codex, really starting in December. Like, I've been a curmudgeon. I've got my way of doing things.

I use my terminal. I use my Emacs, like all these tools that I grew up with, and I've just abandoned all of that now. >> Wow. >> I'm just using Codex.

That's revolutionary. >> It really is. >> You sound like you were set in your ways. >> I really was.

Yes. >> What changes with each new model? >> Everything. And the way to think about it is that from the outside, the perception is, oh, you're just scaling up the models.

You're just doing this kind of dumb thing. On the inside, every single part of the process, we are always upleveling. Like the thing that works in machine learning and that machine learning rewards is attention to detail. And so you really want to make sure that all of the scaling is right.

You want to make sure that the systems handle, for example, GPUs failing. That happens. And so if you have a run of 100,000 GPUs, how do you detect which GPU is ...