Ben Thompson returns to Google Cloud CEO Thomas Kurian for the fourth consecutive year, and the framing has shifted dramatically. Last year's keynote centered on unified architecture as a theoretical construct. This year, Kurian emphasizes that use cases are "no longer theoretical or pilots but running at scale for real users." The distinction matters: AI has moved from demonstration to deployment.
The Agentic Shift
Thompson opens by noting that Kurian's keynote returned to the unified architecture theme but with a crucial difference. "Kurian emphasized that the use cases were no longer theoretical or pilots but running at scale for real users," Thompson writes. He also notes that Google itself runs on the same infrastructure as Google Cloud — a point Sundar Pichai reinforced when discussing capex investment.
The interview's central question: what has changed from last year to make agents viable? Kurian identifies three shifts. "The first is capabilities of models — Gemini is able to reason much more effectively as new versions of Gemini have come out," Kurian says. Second, models can now "maintain long-running memory," which is essential for agents automating tasks over many steps. Third, "their interaction with tools and the rest of the world" has improved through abstractions like skills, tools, and MCPs.
The Model Context Protocol reference is notable. MCP emerged as an open standard for connecting AI models to external systems — essentially a universal adapter for AI tool use. Kurian's mention signals Google's adoption of this interoperability layer, though he frames it as part of Google's broader abstraction strategy rather than external standard adoption.
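To make the "universal adapter" framing concrete, here is a minimal sketch of what an MCP-style tool invocation looks like on the wire. MCP messages are JSON-RPC 2.0, and `tools/call` is the method the published protocol uses for tool invocation; the tool name and arguments below (`lookup_inventory`, `part`) are hypothetical examples, not part of any real server.

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request in the shape MCP uses for tool calls.

    The envelope (jsonrpc/id/method/params) follows the protocol; the
    specific tool ("lookup_inventory") is a made-up illustration.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# A model that speaks this shape can invoke any server's tools the same way,
# which is the point of an interoperability layer.
request = make_tool_call(1, "lookup_inventory", {"part": "modem-x2"})
print(request)
```

The value of the standard is that the envelope, not the tool, is what the model learns: any system that exposes its capabilities behind this shape becomes reachable without per-integration glue code.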
"All of them have advanced and so the core capabilities that the models themselves have gotten a lot better, the capability and the ability to use tools and interact with the rest of the world has become a lot better," Kurian says. Thompson's editorial note: "the word 'agent' may appear in every single paragraph" of Kurian's blog post.
Do Gemini Agents Actually Work?
Thompson presses on capability, not just infrastructure. He notes Gemini was "the belle of the ball four months ago" but recent conversation has centered on Anthropic and Claude. "What's your feeling about your actual capabilities, not just agents in general?" Thompson asks.
Kurian's response sidesteps direct capability claims. "I've always said when people ask us about it, I always say, 'Let our customers talk about it, rather than we talk about it,'" Kurian says. He lists 500 customers presenting at Next: Citigroup, Bosch, eBay, Virgin Voyages, Walmart, FDA, Comcast, Unilever.
The examples are specific. Citi uses agents for wealth advisory — researching investment priorities based on customer goals. Comcast uses agents for consumer services: "repair, scheduling appointments, dispatching field technicians, there's very complex flows that have many, many steps and interact with you with a lot of complex systems." Thompson paraphrases the complexity: booking appointments requires calendar lookup, spare parts inventory, technician scheduling, inventory updates.
Having constraints requires the model to be even more intelligent.
Kurian's most striking claim: constraints improve model performance. "Just being perfectly frank, Ben, having constraints requires the model to be even more intelligent," Kurian says. The reasoning: complex process flows have countless idiosyncratic situations. "You cannot a priori program every one of them. You need to teach the model to use, for example, to be able to spin up a virtual machine and use a tool in the virtual machine to generate code to deal with some of these situations." The most sophisticated capability: giving models high-level instructions and letting them "goal seek an outcome." Thompson captures the example: "I need to schedule this appointment" with 19 different possible conditions.
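The "goal seek an outcome" idea can be sketched in a few lines: instead of a hard-coded branch for each of the 19 possible conditions, the agent is handed a goal plus a set of constraint predicates and searches for any outcome that satisfies all of them. Everything here (the slot names, the constraints, the `goal_seek` helper) is a hypothetical toy, not Google's implementation.

```python
from typing import Callable, Iterable, Optional

def goal_seek(candidates: Iterable[str],
              constraints: list[Callable[[str], bool]]) -> Optional[str]:
    """Return the first candidate outcome that satisfies every constraint."""
    for option in candidates:
        if all(check(option) for check in constraints):
            return option
    return None  # no feasible outcome; a real agent would re-plan here

# Appointment slots the scheduling agent could book (illustrative data).
slots = ["Mon 9:00", "Mon 14:00", "Tue 10:00"]
constraints = [
    lambda s: not s.startswith("Mon 9"),   # customer calendar conflict
    lambda s: s != "Mon 14:00",            # no technician available then
]
print(goal_seek(slots, constraints))  # → Tue 10:00
```

The point of the sketch is Kurian's: the conditions are declared, not enumerated as code paths, so adding a 20th constraint means adding one predicate rather than rewriting the flow.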
The Integration Advantage
Thompson asks about the working relationship with DeepMind. "We have a harness in which all these flows, journeys, for example, as we see them with customers, we put them into the harness and they get into the reinforcement loop for Gemini," Kurian says. The loop is "very tight" — Kurian just came from a meeting with Demis Hassabis's team.
Kurian claims Google is unique in three ways. First, "we have the whole stack of AI technology." Second, infrastructure — classical compute and TPUs. Third, business context: "Our strength in data processing gives us some technology that we're going to be talking about next week around something we call Knowledge Catalog, think of it as your global dictionary for all information within the company." The Knowledge Catalog connects to Google's knowledge graph heritage — the same structured data infrastructure that powered Google Search's understanding of entity relationships, now repurposed for enterprise context.
Thompson raises the classic big-company concern: sprawl, competing priorities, internal customers. "How do you balance having a point of view versus getting stuck in the muck?" Kurian's answer: "Every product that Google has is on the same Gemini version, on the same day, on the same hour, every one of us is using the same harness." Thompson asks if the harness gets pulled in "50 million directions." Kurian: "Absolutely not, we are very focused on working with Demis and Koray Kavukcuoglu who lead our team to make sure they see the sophistication of these scenarios and we work literally side-by-side, hour-to-hour with them."
A counterargument worth considering: Google's integrated stack could be a liability, not an advantage. The "same harness, same version" discipline might prevent fragmentation, but it also means slower iteration when one component needs to move faster than the others. Frontier labs like Anthropic can optimize their entire stack for agent performance without coordinating across dozens of product teams. Kurian's confidence in the feedback loop assumes the loop closes quickly — but in a company of Google's size, hour-to-hour collaboration may still translate to week-to-week deployment.
Security and Cyber
Thompson notes Kurian was "careful to raise" security before discussing agents. Kurian frames AI and cyber as "very contextual now" with concerns that "AI will accelerate the speed of cyber attacks on people's systems." Google is "bringing AI and our cyber technology together to protect, including the integration of Wiz" — the security startup Google acquired.
The security framing is strategic. Enterprises won't deploy agents handling complex process flows if they fear hijacking or data exposure. Kurian's point: "you don't want information that's critical to your company exposed on the Internet, you don't want your model to get attacked because now it's handling very complex process flows, you don't want it hijacked." The Wiz integration signals Google treating security as foundational, not additive.
Bottom Line
Thompson's interview reveals Google Cloud's agent strategy: full-stack integration with tight DeepMind feedback loops, enterprise-scale deployments, and security as a prerequisite. Kurian's strongest argument is the constraint-intelligence claim — that complex enterprise flows force models to develop genuine reasoning, not just pattern matching. The customer examples (Citi wealth advisory, Comcast field service) are concrete enough to test.
The biggest vulnerability: Thompson's observation that conversation has shifted to Anthropic and Claude. Kurian's "let customers talk" response is prudent but doesn't address the perception gap. Google's integrated stack is a genuine advantage if the feedback loop closes fast enough. If it doesn't, the "same harness" discipline becomes coordination overhead. Watch whether the 500 customer stories at Next demonstrate actual business outcomes — not just pilot deployments — and whether Gemini's agent capabilities match the infrastructure claims.