A conversation with Claude

Noah Smith's conversation with Claude isn't about AI being dangerous or safe; it's about something far more interesting: what we'd actually discover if we could see beyond our own cognitive limits. His central claim is striking: "human science is all about compressibility," and there are patterns in nature too complex for humans to hold in their minds, but useful nonetheless.

Smith opens with a confession that's almost endearing. He's less interested in the dramatic AI confrontations that Vanity Fair or Bernie Sanders published; he wanted a substantive discussion. The piece reads like "a late-night discussion in the hall of a freshman dorm," he admits, and he's comfortable with that comparison. For many readers, those late-night conversations are some of the most transformative they'll ever have.
The Real Stakes

What makes this conversation compelling isn't the AI safety debate; it's the epistemological claim at its heart. Smith writes: "There could be more complex patterns in nature — too complex for a human to hold in their mind, or even notice in the first place, but stable and useful nonetheless."

This is the article's boldest argument, and it deserves attention because it's not science fiction. It's a serious claim about how science actually works.

Smith uses LLMs as his primary example—"we figured out how to create and apply human language without ever being able to write down simple 'laws' of how it worked." The implication is that if we missed something this fundamental about language, what else have we missed? What other complex-but-useful patterns exist in materials science and biology that we're simply not smart enough to notice?
The historical parallel is worth noting. In 1987, researchers discovered YBCO (yttrium barium copper oxide), the first superconductor to work above liquid-nitrogen temperatures, and the field has struggled to commercialize it ever since: the gap between "it works in a lab" and "you can make wire out of it" remains brutal. Smith's point isn't just about discovery; it's about comprehension.

The Honest Skepticism

But here's what makes the conversation interesting: Claude pushes back. Near the end, Smith quotes the AI's response on whether these discoveries could be communicated to humans: "Probably not." Smith then adds the observation that "just like a dog will never be able to understand quantum mechanics, humans may never be able to understand some of the scientific principles that AI discovers and harnesses."

This is where the piece becomes genuinely thought-provoking. Smith isn't just saying AI will find things—he's saying we won't understand what it finds. The question shifts from "what will AI discover?" to "can we even comprehend what we've discovered?"

The counterargument here is worth considering. Critics might note that this framing risks sounding like technological mysticism—assuming AI will find something we can't understand without specifying what that something would be or how we'd verify it's useful. Smith's essay three years ago attempted to answer this, but the gap between "complex pattern exists" and "we can't communicate it" remains wide.

The Materials Science Preview

What makes this conversation genuinely valuable is the concrete discussion of what's actually coming in materials science. Claude's list isn't speculative hand-waving—it includes specific technologies like room-temperature superconductors (which would be "civilization-altering"), solid-state electrolytes for batteries, direct air capture sorbents, and topological materials.

The timeline estimates are refreshingly honest. For solid-state electrolytes, Smith notes they're "essentially already here" in terms of proof of concept—Toyota, Samsung SDI, and QuantumScape are targeting late-2020s production. But for room-temperature superconductors? "15-30+ years after a genuine PoC," with the LK-99 fiasco serving as a cautionary example of AI pattern-matching "in the dark."

The most grounded prediction is probably solid-state batteries: three to eight years for commercial scale, because "this is probably the nearest-term item on the list." The furthest out? Designer proteins and biomimetic materials—"artificial spider silk has been '5 years away' for 20 years, because the biology-to-manufacturing gap is real."

Bottom Line

Smith's strongest contribution isn't his optimism about AI breakthroughs—it's his epistemological humility. The most interesting question isn't whether AI will discover something useful; it's whether we'll be able to understand what we've found.

His vulnerability lies in the leap from "LLMs work without simple laws" to "AI science will produce principles humans can't comprehend." The first claim is demonstrably true; the second requires more argument than he's provided. But for readers willing to engage with that gap, this conversation offers something rare: a serious intellectual treatment of what AI might actually do to our understanding of science.

Related reading: Smith's New Year's essay on compression, written three years ago and referenced throughout, is essential background for readers who want the deeper treatment. For the materials science angle, check out recent work on topological materials and their applications in quantum computing.

Sources

A conversation with Claude

by Noah Smith · Noahpinion

Seems like everyone is publishing their conversations with Claude these days. Vanity Fair reporter Joe Hagan published a fake Claude-generated “interview” with Anthropic CEO Dario Amodei. Bernie Sanders published a video of himself talking to Claude about AI and privacy. So I thought, why don’t I publish one of my own conversations with Claude? I’m afraid this one isn’t as spicy as those others, but you might still find it fun.

This particular conversation started out as me asking Claude about potential AI discoveries in materials science. The discussion then segues into the more general question of what types of scientific research AI is best at, and what areas of research might see the biggest acceleration from AI. It turns out that I’m actually more bullish than Claude on AI’s capacity for breakthrough ideas — Claude thinks humans will retain the edge in creativity and invention, but I bet AI will get good at this very quickly.

My bet is that the constraints on AI science will be a subset of the constraints on human science. Whenever data is sparse, both AI and humans will struggle to do more than come up with conjectures (and ideas for how to gather more data). And when humans have already discovered most of what there is to know about some natural phenomenon, AI won’t be able to get much farther because there just isn’t much farther to go.

I do suspect, however, that AI is going to discover some truly groundbreaking science that humans never could have discovered on their own. I explained why in my New Year’s essay three years ago:

Basically, human science is all about compressibility. We take some natural phenomenon — say, conservation of momentum — and we boil it down to a simple formula. That formula is very easy to communicate from person to person, and it’s also very easy to use. These are what we call the “laws of nature”.

But there’s no reason why every natural principle needs to obey simple laws that can be written down in a few lines. There could be more complex patterns in nature — too complex for a human to hold in their mind, or even notice in the first place, but stable and useful nonetheless. LLMs themselves are a good example of such a pattern — we figured out how to create and apply human language without ever being ...