Weekly dose of optimism #179

The Optimism Engine

Packy McCormick's 179th Weekly Dose arrives during market turbulence, but the thesis remains unchanged: innovation accelerates regardless of stock prices. This is optimism as a discipline, not a mood. McCormick frames the current moment as a divergence between financial pessimism and technological acceleration — a gap that defines the weekly's entire purpose.

Model Wars Heat Up

The centerpiece is the simultaneous release of Anthropic's Claude Opus 4.6 and OpenAI's GPT-5.3-Codex. McCormick treats this as more than product updates — it's evidence of recursive improvement. Both labs used their own AI agents to build the next generation.

Packy McCormick writes, "Taken together, we found that these new capabilities resulted in powerful acceleration of our research, engineering, and product teams." This is the mechanism fast takeoff believers point to: models so smart they make the next models smarter.

On Opus 4.6, McCormick notes it's "definitely smarter (although thankfully it's still a shitty writer)." The GPT-5.3-Codex iteration shows tangible progress: "I told 5.3 to throw out that trash and make me something that looked better, and it actually did a decent job in one shot."

Anthropic's Super Bowl commercials mocking OpenAI's planned ads add theatrical tension to the technical race. Jordi Hays and others view the ads as deceptive, but McCormick acknowledges they're "super entertaining."

"I don't know what to say other than have fun playing with your new geniuses this weekend."

Critics might note that "shitty writer" models still produce most online content, and design improvements in one-shot prompting remain fragile across different tasks.

Rocks That Think

Eric Jang's essay "As Rocks May Think" provides the intellectual backbone. Jang, VP of AI at 1X Technologies and former Google Brain robotics lead, traces machine reasoning from symbolic logic systems through Bayesian nets to AlphaGo and today's reasoning models.

Packy McCormick writes, "Jang walks through the intellectual lineage of machine reasoning, from symbolic logic systems that collapsed when a single premise was wrong, through Bayesian belief nets that got tripped up in compounding uncertainty, to AlphaGo's breakthrough combination of deductive search and learned intuition, and finally to today's reasoning models, like Opus 4.6 and GPT 5.3."

The practical takeaway: Jang now runs parallel AI research sessions overnight instead of training jobs. He suspects researcher-level compute will soon be widely available, and when it arrives, demand will explode.

The infrastructure numbers are staggering. Google anticipates $85 billion in 2026 CapEx. Amazon's even larger projection sent its stock tumbling. McCormick invokes Lee Kuan Yew's observation that air conditioning changed civilization by making tropics productive — air conditioning consumes 10% of global electricity, data centers less than 1%. If thinking machines deliver similar productivity gains, inference compute demand will be enormous.

Packy McCormick writes, "The sell-off is ugly, but if Jang is right, all of that buildout and much more is going to be put to use." When McCormick asked Claude Opus 4.6 about the selloff, it responded: "if the bottleneck is inference compute, build the data centers. Vertical integration, baby."

Critics might note that CapEx projections assume sustained AI revenue growth that hasn't yet materialized for most enterprises, and 60% of CEOs report AI projects haven't delivered positive ROI.

Biological Computing Enters the Arena

Anduril's AI Grand Prix drone competition received an unexpected entry: a team planning to use cultured mouse brain cells as the AI software. The hardware comes from Cortical Labs, whose CL1 device — $5,000, lab-grown neurons on electrode arrays, kept alive in life-support housing — builds on the company's 2022 demonstration in which roughly 800,000 neurons learned to play Pong in five minutes.

Packy McCormick writes, "At first look, this seems against the spirit of the software-only rules. On second thought, hell yeah." The neurons run on a few watts and learn from far less data than conventional AI.

Critics might note that biological computing remains experimental and scaling beyond Pong or simple drone navigation faces fundamental challenges in stability, reproducibility, and ethical regulation.

Waymo's Gradual, Then Sudden

Waymo's $6 billion raise values the company at $26 billion — the largest private investment ever in autonomous vehicles. The metrics show the "gradually, then suddenly" pattern: 127 million fully autonomous miles, 90% reduction in serious injury crashes versus human drivers, 400,000 rides per week across six US metro areas.

Packy McCormick writes, "Nearly 40,000 Americans died in traffic crashes last year. The leading causes, things like distraction, impairment, fatigue, are all fundamentally human problems. Waymo doesn't have those problems."

The rollout timeline: 20+ additional cities in 2026, including Tokyo and London. McCormick closes with a personal observation: "My kids are never going to get their drivers' licenses, are they?"

Critics might note that Waymo's six-city footprint remains limited compared to the complexity of nationwide deployment, and regulatory hurdles in international markets like Tokyo and London could delay the 2026 timeline.

The Electronaissance

Contrary Capital's Tech Trends Report provides the macro backdrop. AI adoption curves outpace internet growth: OpenEvidence hit 300,000 active prescribers in 11 months (Doximity took 11 years). ChatGPT reaches 800 million weekly active users. Coding AI tools approach $1 billion in ARR.

Packy McCormick writes, "AI companies are reaching revenue milestones 37% faster than traditional SaaS companies did."

Energy projections: 35-50% US electricity growth by 2040, $1.3 trillion AI-related CapEx by 2027, $1-5 trillion global data center spending by 2030. Wind and solar are the fastest-growing energy sources. US fab capacity projected to grow 203% from 2022 to 2032.

Frontier developments: lunar data storage, underwater data centers with one-eighth the hardware failure rate of land-based ones, and Space Force planning a 100kW nuclear reactor on the moon by decade's end.

Packy McCormick writes, "We are living in a sci-fi novel. What a time to be alive."

Critics might note that infrastructure bottlenecks — aging grids, water stress around data centers, supply chain constraints — could push these timelines out significantly.

Bottom Line

McCormick's optimism rests on measurable acceleration: model capabilities, autonomous vehicle safety, energy infrastructure, adoption curves. The counterweight is equally measurable: CapEx without proven ROI, biological computing still experimental, regulatory friction unaddressed. The verdict: optimism is justified when tied to specific metrics, but the gap between infrastructure investment and enterprise returns remains the critical uncertainty. This weekly dose works because it documents the buildout, not because it guarantees outcomes.

Deep Dives

Explore these related deep dives:

  • Anthropic

    The article discusses Anthropic's new Opus 4.6 model and competition with OpenAI

  • OpenAI

    The article discusses OpenAI's GPT-5.3-Codex model release and AI race with Anthropic

Sources

Weekly dose of optimism #179

by Packy McCormick · Not Boring

Hi friends,

Happy Friday and welcome back to the 179th Weekly Dose of Optimism!

We started writing the Weekly Dose during the 2022 bear market because there was a disconnect between the incredible things we saw being built and the (largely market-driven) pessimism. So this week is great. We were born in the darkness.

Even as the markets have vomited, the innovation has continued apace. Zoom out.

We have another jam-packed week of optimism, including four Extra Doses below the fold for not boring world members.

Let’s get to it.

Today’s Weekly Dose is brought to you by… Guru.

Your team is probably already using AI for everything: research, customer support, product decisions. Just one problem… AI is confidently wrong about your company knowledge 40% of the time.

While everyone races to deploy more AI tools, they’re building on a foundation of outdated wikis, scattered documents, and tribal knowledge that was never meant to power automated decisions.

Guru solved this for companies like Spotify and Brex. They built the only AI verification system that automatically validates company knowledge before your AI agents use it. Think of it as quality control for your AI’s brain.

The companies that figure this out first will have AI that actually works. The ones that don’t waste valuable human time cleaning up expensive mistakes.

(1) Introducing Claude Opus 4.6 and Introducing GPT-5.3-Codex

Anthropic and OpenAI, respectively

The race between Anthropic and OpenAI to build the smartest, most useful thinking machines is heating up, and it’s riveting. The day after Anthropic released its Super Bowl commercials, which make fun of OpenAI for planning to introduce ads into its product (which many people, including Jordi Hays, think are a bit deceptive, but which are super entertaining)…

… both companies dropped their newest, smartest models. Anthropic released Opus 4.6 and OpenAI released GPT-5.3-Codex (Codex is its coding model/app).

Anthropic’s Opus 4.6 is for everyone: better at coding, plans longer, runs financial analyses, does research, etc… I’ve been playing with it and it’s definitely smarter (although thankfully it’s still a shitty writer).

OpenAI’s very-OpenAI-named GPT-5.3-Codex is for coding. It slots right into the Codex app they released this week. I had 5.2 build a website for not boring, and it was very cool that it could build it, but no matter how hard I prompted, the design was trash. I told 5.3 to throw out that trash and make ...