
An interview with OpenAI CEO Sam Altman and AWS CEO Matt Garman about Bedrock Managed Agents

This interview captures a pivotal moment in which the AI industry's most powerful players recalibrate their alliances, revealing that the real bottleneck for enterprise adoption isn't model capability but cloud infrastructure lock-in. Ben Thompson frames the conversation not just as a business deal, but as a necessary correction to a strategy that was actively stifling the very growth Microsoft sought to protect. The timing is critical: just days before this discussion, Microsoft and OpenAI amended their exclusivity pact, signaling a shift from walled gardens to open ecosystems.

The End of Exclusivity

Thompson opens by highlighting the strategic irony of the situation. He notes that while Azure held a "real competitive advantage" through exclusivity, that same advantage "was actively damaging Microsoft's investment in OpenAI." This is a sharp observation. By forcing enterprises to choose between their preferred cloud provider and the best AI models, the old arrangement created friction that competitors like Anthropic were eager to exploit. As Thompson writes, "given Anthropic's rapid growth this year, Microsoft needed to tend to their investment, even if it diminished Azure's differentiation."


The new deal is a pragmatic surrender to market reality. Microsoft retains its status as the primary partner, but the license is now non-exclusive, and crucially, Microsoft stops paying a revenue share to OpenAI. This financial restructuring is the hidden win for Microsoft; as Thompson points out, "their P&L is going to look a lot better without paying a revenue share to OpenAI." It's a clever accounting maneuver that softens the blow of losing exclusivity.

"Azure had a real competitive advantage thanks to being the only hyperscaler able to offer OpenAI models, but this also hindered OpenAI... Azure's exclusivity was actively damaging Microsoft's investment in OpenAI."

Critics might argue that this move simply accelerates a race to the bottom on pricing, but Thompson suggests the real value lies in scale. Distributing these models across multiple clouds lets OpenAI get past the constraint that enterprises care "first and foremost about accessing models on their current cloud of choice." This aligns with the historical pattern of cloud computing, much like the early days of the Homebrew Computer Club, where accessibility and the removal of barriers to entry were more transformative than proprietary advantages.

The Startup Parallel

The conversation shifts to the broader implications for innovation, drawing a direct line between the cloud revolution of the 2000s and the current AI boom. Matt Garman, AWS CEO, reflects on the early days of cloud computing, noting that developers previously needed "millions of dollars to go build data centers." Now, with a credit card, they can access that same power. Thompson uses this to frame the current moment as the fourth great platform shift for startups, following the Internet, cloud, and mobile.

Sam Altman reinforces this, recalling how Y Combinator (YC) seemed crazy to investors who couldn't fathom funding a startup with "a few tens of thousands of dollars" when server costs were so high. The cloud changed that calculus entirely. As Altman puts it, "It was this complete change to what startups could do with small amounts of capital."

Thompson's commentary here is vital because it connects the technical shift to the economic one. The speed of adoption in AI is unprecedented. Altman notes that revenue expectations for YC companies are "changing every month," a phenomenon that "never used to happen before." This suggests that the barrier to entry isn't just lower; it's vanishing entirely, allowing small teams to build scaled businesses at a pace that defies traditional industry cycles.

"Startups generally win when there is a big platform shift and you can do things with a faster cycle time and much less capital than before... at the beginning of my career, I really witnessed that happen with the cloud, it actually feels quite directionally similar now watching what companies are doing building on AI."

However, a counterargument worth considering is whether this speed comes at the cost of stability. The rapid iteration Altman praises can lead to fragile business models that collapse when the initial hype cycle fades. While the cloud era produced enduring giants, the AI era might see a higher churn rate as the market saturates.

The Agent Economy and Hardware Reality

The core of the interview focuses on Bedrock Managed Agents, a new offering designed to make AI workflows accessible for organizations already deep in the AWS ecosystem. Thompson describes this as the "Codex in AWS" moment, but with a crucial distinction: while Codex worked because it was local, agents must operate across an organization's distributed data.

Garman explains that the goal is to remove the complexity of security and integration that usually plagues enterprise AI adoption. The discussion also touches on the hardware question, with both executives downplaying the importance of specific chips like Trainium for the average user. Thompson notes that "chips won't matter to most AI users," a claim that challenges the prevailing narrative of hardware supremacy.

This reframing is significant. It suggests that the competitive battleground is shifting from who owns the most advanced silicon to who can best orchestrate the software stack. As Thompson writes, "partnering makes sense relative to Google's focus on full integration." The strategy here is interoperability, not vertical integration. This mirrors the evolution of the colocation center industry, where the value moved from owning the physical space to providing the services that made that space useful.

"The easiest way to think about this offering is Codex in AWS... It's another thing entirely to figure out how to make agents work across an organization, and the goal of this offering is to make these workflows much more accessible for organizations who already have most of their data in AWS."

The implication is clear: the future of enterprise AI isn't about building your own model or buying your own chips; it's about integrating the best available tools into your existing workflow. This is a massive opportunity for AWS, which already hosts the majority of scaling startups.
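Thompson's "Codex in AWS" framing is about distribution rather than a new API surface: a team already on AWS would reach these models through the same Bedrock interfaces it uses today. A minimal sketch of what that integration might look like, assuming the models are exposed through Bedrock's existing Converse API; the model ID is a placeholder, since the article does not specify the identifiers AWS will publish:

```python
def build_messages(prompt: str) -> list:
    """Build a single-turn user message in the Converse API's message shape."""
    return [{"role": "user", "content": [{"text": prompt}]}]


def ask_bedrock(prompt: str, model_id: str = "openai.example-model-v1:0") -> str:
    """Send a prompt to a Bedrock-hosted model and return the reply text.

    Requires AWS credentials configured in the environment; the model ID
    above is a hypothetical placeholder, not a real identifier.
    """
    import boto3  # imported here so the payload helper stays dependency-free

    client = boto3.client("bedrock-runtime")
    response = client.converse(
        modelId=model_id,
        messages=build_messages(prompt),
    )
    return response["output"]["message"]["content"][0]["text"]
```

The point of the sketch is the absence of ceremony: no new SDK, no data migration, just another model ID behind an interface the organization already trusts, which is precisely the accessibility argument the interview makes.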

Bottom Line

Ben Thompson's analysis succeeds in stripping away the hype to reveal the underlying economic logic: the era of cloud exclusivity is over, and the winners will be those who prioritize accessibility and integration over proprietary lock-in. The strongest part of this argument is the financial insight that Microsoft's decision to drop revenue shares is a strategic masterstroke that protects its bottom line while expanding its market reach. The biggest vulnerability, however, lies in the assumption that hardware commoditization will happen as quickly as predicted; if chip shortages or performance bottlenecks persist, the "chips don't matter" narrative could crumble. Readers should watch for how quickly enterprises adopt these agent-based workflows, as that will determine whether this is a genuine paradigm shift or just another layer of complexity.

"The speed of adoption and how fast people have grabbed onto the capabilities there, I think has surprised everyone."

Deep Dives

Explore these related deep dives:

  • Colocation centre

    Explains the physical infrastructure constraints that force enterprises to prioritize cloud providers where their data already resides, driving the demand for multi-cloud model access.

  • Homebrew Computer Club

    Provides historical context for how the open sharing of technology among hobbyists and startups eventually disrupted the proprietary mainframe model, mirroring the current shift from exclusive AI partnerships to open distribution.

  • Annapurna Labs

This AWS chip-design subsidiary produces the custom silicon (Graviton, Trainium, Inferentia) that lets Amazon compete with Azure's GPU capacity, making it technically feasible to run OpenAI models on Bedrock without depending entirely on NVIDIA's supply chain.

Sources

An interview with OpenAI CEO Sam Altman and AWS CEO Matt Garman about Bedrock Managed Agents

by Ben Thompson · Stratechery · Read full article

Good morning,

As I noted yesterday, today’s Stratechery Interview is early in terms of my timing — Tuesday instead of Thursday — and late in terms of delivery — 1pm Eastern instead of 6am — because the topic was embargoed. That embargo created a bit of a weird situation for me over the last several days:

Last Friday I conducted the following interview with OpenAI CEO Sam Altman and AWS CEO Matt Garman about Bedrock Managed Agents, powered by OpenAI; naturally, one of my questions was about how this fit in with OpenAI’s deal with Microsoft giving Azure exclusive access to OpenAI models. Late Sunday I heard through the grapevine that Microsoft would announce something Monday morning; I wondered if it might be a preemptive lawsuit! On Monday Microsoft and OpenAI announced they had amended their agreement, allowing OpenAI to serve its products on other cloud providers, including AWS.

So here we are.

I think the Microsoft-OpenAI deal makes a lot of sense for both sides. Here are the bullet points of the new arrangement from Microsoft’s post:

  • Microsoft remains OpenAI’s primary cloud partner, and OpenAI products will ship first on Azure, unless Microsoft cannot and chooses not to support the necessary capabilities.

  • OpenAI can now serve all its products to customers across any cloud provider.

  • Microsoft will continue to have a license to OpenAI IP for models and products through 2032. Microsoft’s license will now be non-exclusive.

  • Microsoft will no longer pay a revenue share to OpenAI.

  • Revenue share payments from OpenAI to Microsoft continue through 2030, independent of OpenAI’s technology progress, at the same percentage but subject to a total cap.

  • Microsoft continues to participate directly in OpenAI’s growth as a major shareholder.

I think the most important point is the last one. Azure had a real competitive advantage thanks to being the only hyperscaler able to offer OpenAI models, but this also hindered OpenAI, particularly once it became clear that many enterprises cared first and foremost about accessing models on their current cloud of choice; I’ve been noting for a while that this was a real competitive advantage for Anthropic. In other words, Azure’s exclusivity was actively damaging Microsoft’s investment in OpenAI, and given Anthropic’s rapid growth this year, Microsoft needed to tend to their investment, even if it diminished Azure’s differentiation.

OpenAI, meanwhile, clearly sees AWS as a massive opportunity — so much so that they ...