Exponential view #558: Davos & reinventing the world; OpenAI's funk; markets love safety; books are…

Azeem Azhar arrives from Davos with a provocative thesis: the global economy isn't just shifting gears; it is undergoing a fundamental operating system upgrade. While most observers fixate on the noise of trade disputes or the latest model release, Azhar identifies a silent, structural transformation where energy, intelligence, and biology have moved from a regime of scarcity to one of learning. This is not a standard industry report; it is a civilizational audit that challenges the very logic of how we compete.

The End of Scarcity

Azhar argues that the old rules of economics, built on the premise of finite resources, are collapsing. He writes, "Between 2010 and 2017, three fundamental inputs to human progress – energy, intelligence, and biology – crossed a threshold. Each moved from extraction to learning, from 'find it and control it' to 'build it and improve it.'" This distinction is crucial. It suggests that the game is no longer about hoarding what exists, but about generating more value through iteration. As Azhar puts it, "In each of the three crossings, a fundamental input to human flourishing moved from a regime of extraction, where the resource is fixed, contested, and depleting, to a regime of learning curves, where the resource improves with investment and scales with production."

The author categorizes the global response to this shift into three archetypes: the Hoarder, the Manager, and the Builder. He notes that the loudest voices currently belong to the Hoarders, who cling to a zero-sum worldview, while the Managers try to patch the old system. The Builders, however, are the ones actually creating value. "The invitation of this moment? Not to mourn the fictions, but to ask: what was I actually doing that mattered, and how much more of it can I do now?" This reframing is powerful because it moves the conversation from anxiety about loss to strategy for growth. Critics might argue that this optimism glosses over the immediate, brutal transition pains for workers in legacy industries, but Azhar's focus is on the long-term trajectory of the system itself.

The loudest voices in public right now are hoarders, the most respectable are managers, and the builders are too busy building to fight the political battle.

The OpenAI Paradox

Shifting to the corporate battlefield, Azhar delivers a stark assessment of the artificial intelligence sector. He observes that while OpenAI dominated the chatbot era, the market has pivoted to an "agent economy," a space where Anthropic is currently outperforming. The financial stakes are massive, yet the strategy is questionable. Azhar writes, "OpenAI is still looking for other revenue pathways. In February, ChatGPT will start showing ads to its 900 million users – betting more on network effects than pure token volume."

He highlights a critical tension in this strategy. Demis Hassabis of Google expressed surprise at the move, noting that "when your agent has third-party interests, it's not your agent anymore." This suggests that monetizing through ads could fundamentally degrade the utility of the AI assistant, turning a trusted tool into a salesperson. Azhar points out that OpenAI's path to profitability relies on massive capital expenditure, projecting "$110 billion in free cash outflow through 2028," a figure that seems precarious given their current market position. In contrast, competitors like Google and Anthropic are already securing lucrative partnerships in high-value sectors like drug discovery, where Google's Isomorphic Labs has secured "~$3 billion in pharma partnerships."

The cultural cost of this financial pressure is also evident. Azhar notes that the need for rapid productization is driving away top research talent, citing the departure of key architects who felt they "couldn't do" the research they wanted at the company. This creates a vulnerability: "Researchers generate the alpha, and research requires time, patience and not a lot of pressure from your product team." If the race is won by those who can sustain deep research, OpenAI's aggressive timeline may be its undoing.

Ethics as a Competitive Advantage

Perhaps the most counter-intuitive argument in the piece is the redefinition of safety. Conventional wisdom suggests that safety measures slow down innovation, creating a prisoner's dilemma where labs race to the bottom. Azhar challenges this, arguing that "alignment generates trust, trust enables autonomy, and autonomy unlocks market value." He points to the fact that Anthropic, often cited as the most safety-focused lab, is the one allowing its agents to take over entire computer systems. "Why would the safety-focused lab allow models to do the most dangerous thing they're currently capable of? Because their investment in alignment produced a model that can be trusted with autonomy."

This creates a flywheel where safety investments lead to higher market value, while companies that cut corners face regulatory scrutiny and enterprise rejection. Azhar writes, "The most aligned model becomes the most productive model because of the safety investment." This flips the script on the "safety as a tax" narrative. However, a counterargument worth considering is that this dynamic assumes a rational market that rewards safety; in a chaotic, short-termist market, the lab that moves fastest and ignores risk could still capture the most initial market share before the consequences of misalignment become apparent.

The Robotics Flywheel

Finally, Azhar turns to robotics, identifying a convergence of AI scaling laws and Wright's law that could accelerate adoption faster than predicted. He describes a "flywheel" effect where every deployed robot generates data, making the next generation smarter. "More deployed robots generate more varied action data. The next generation of models absorbs this variety and becomes more capable of unlocking larger markets worth serving." Data scarcity, once the bottleneck in robotics, is being solved by the very act of deployment.

The cost dynamics are equally compelling. Azhar notes that the cheapest humanoid robots are already down to "$5,000 per unit," and he argues they will eventually cost "closer to an iPhone than a car." This price point, combined with the labor shortage in sectors like construction, suggests that the economic case for robotics is about to become undeniable. The application is immediate: "AI datacenter construction will kick off the flywheels," as robots take over repetitive post-shell work where human labor is scarce.
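Wright's law, the cost dynamic Azhar invokes, says unit cost falls by a fixed fraction each time cumulative production doubles. A minimal sketch of how that projection works, taking the article's $5,000 starting price as given; the 20% learning rate and the production volumes are illustrative assumptions, not figures from Azhar:

```python
import math

def wrights_law_cost(first_cost: float, cumulative_units: float,
                     learning_rate: float = 0.20) -> float:
    """Projected unit cost after `cumulative_units` have been produced.

    Under Wright's law, each doubling of cumulative production
    cuts unit cost by `learning_rate` (here an assumed 20%).
    """
    doublings = math.log2(cumulative_units)
    return first_cost * (1 - learning_rate) ** doublings

# Starting from a $5,000 humanoid robot, project unit cost as volume scales.
for units in (1, 1_000, 1_000_000):
    print(f"{units:>9,} units -> ${wrights_law_cost(5000, units):,.0f}")
```

Under these assumptions, cost roughly halves every three doublings, which is why mass deployment, not lab breakthroughs alone, drives the price toward consumer-electronics territory.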

Bottom Line

Azhar's strongest contribution is the conceptual shift from a scarcity-based worldview to a learning-based one, providing a coherent lens for understanding disparate events from Davos to the AI lab. His analysis of OpenAI's strategic drift and the emerging economic value of safety is particularly sharp, offering a clear verdict on who is winning the next phase of the AI race. The piece's main vulnerability lies in its assumption that market forces will naturally align with long-term safety and ethical deployment, a premise that history suggests requires active institutional guardrails to hold true.

Alignment generates trust, trust enables autonomy, and autonomy unlocks market value. The most aligned model becomes the most productive model because of the safety investment.

Sources

Exponential view #558: Davos & reinventing the world; OpenAI's funk; markets love safety; books are…

by Azeem Azhar

Hi all,

I just got back from Davos, and this year was different. The AI discussion was practical – CEOs asking each other what’s actually happening with their workforces, which skills matter now. At the same time, I saw leaders struggling to name the deeper shifts reshaping our societies. Mark Carney came closest, and in this week’s essay I pick up his argument and extend it through the Exponential View lens.

Enjoy!

Davos and civilizational OS.

Mark Carney delivered a speech that will echo for a long time, about "the end of a pleasant fiction and the beginning of a harsh reality." Carney was talking about treaties and trade, but the fictions unravelling go much deeper.

Between 2010 and 2017, three fundamental inputs to human progress – energy, intelligence, and biology – crossed a threshold. Each moved from extraction to learning, from “find it and control it” to “build it and improve it.” This is not a small shift. It is an upgrade to the operating system of civilization. For most of history, humanity ran on what I call the Scarcity OS – resources are limited, so the game is about finding them, controlling them, defending your share. This changed with the three crossings. As I write in my essay this weekend:

In each of the three crossings, a fundamental input to human flourishing moved from a regime of extraction, where the resource is fixed, contested, and depleting, to a regime of learning curves, where the resource improves with investment and scales with production.

At Davos, I saw three responses: the Hoarder who concludes the game is zero-sum (guess who), the Manager who tries to patch the system (Carney), and the Builder who sees that the pie is growing and the game is not about dividing but creating more. The loudest voices in public right now are hoarders, the most respectable are managers, and the builders are too busy building to fight the political battle. The invitation of this moment? Not to mourn the fictions, but to ask: what was I actually doing that mattered, and how much more of it can I do now?

Full reflections in this week’s essay:

Finding new alpha.

OpenAI was the dominant player in the chatbot economy, but we're in the agent economy now. This economy will be huge, arguably thousands of times bigger, but it's an area OpenAI is currently not winning: Anthropic ...