Noah Smith's conversation with Claude isn't really about whether AI is dangerous or safe. It's about something far more interesting: what we'd actually discover if we could see beyond our own cognitive limits. His central claim is striking: "human science is all about compressibility," and there may be patterns in nature too complex for humans to hold in their minds, but stable and useful nonetheless.
Smith opens with a confession that's almost endearing. He's less interested in the dramatic AI confrontations that Vanity Fair or Bernie Sanders published; he wanted the substantive discussion. The piece reads like "a late-night discussion in the hall of a freshman dorm," he admits, and he's fine with the comparison. For many readers, those late-night conversations are among the most transformative they'll ever have.
The Real Stakes
What makes this conversation compelling isn't the AI safety debate—it's the epistemological claim at its heart. Smith writes: "There could be more complex patterns in nature — too complex for a human to hold in their minds, or even notice in the first place, but stable and useful nonetheless."
This is the article's boldest argument, and it deserves attention because it's not science fiction. It's a serious claim about how science actually works.
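To make "science as compression" concrete, here's a toy sketch (my illustration, not anything from the conversation): data generated by a one-line rule compresses to almost nothing, while patternless data doesn't compress at all. The rule, the sizes, and the choice of zlib are arbitrary assumptions for the demo.

```python
import random
import zlib

random.seed(0)
N = 10_000

# "Lawful" data: every byte follows a one-line rule, y = (3x + 2) mod 256.
lawful = bytes((3 * x + 2) % 256 for x in range(N))

# "Lawless" data: independent random bytes with no generating rule.
noise = random.randbytes(N)  # Python 3.9+

for name, data in [("lawful", lawful), ("noise", noise)]:
    packed = zlib.compress(data, level=9)
    print(f"{name}: {len(data)} bytes -> {len(packed)} bytes")

# The lawful stream collapses to a tiny fraction of its size; the random
# stream stays essentially full size. A scientific "law" is exactly this
# kind of win: a description far shorter than the data it regenerates.
```

In this framing, a pattern "too complex for a human to hold in their mind" is one whose shortest description is still enormous, even if a machine can exploit it.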
Smith uses LLMs as his primary example—"we figured out how to create and apply human language without ever being able to write down simple 'laws' of how it worked." The implication is that if we missed something this fundamental about language, what else have we missed? What other complex-but-useful patterns exist in materials science and biology that we're simply not smart enough to notice?
The historical parallel is worth noting. In 1987, researchers discovered YBCO (yttrium barium copper oxide), the first superconductor to work above the boiling point of liquid nitrogen, and the field has struggled with commercial applications ever since; the gap between "it works in a lab" and "you can make wire out of it" remains brutal. Nearly four decades later, there is still no consensus theory of why cuprate superconductors work at all: we exploit the material without fully comprehending it. Smith's point isn't just about discovery; it's about comprehension.
The Honest Skepticism
But here's the turn that gives the conversation its edge: Claude pushes back. Near the end, Smith quotes the AI's response on whether these discoveries could be communicated to humans: "Probably not." He then adds the observation that "just like a dog will never be able to understand quantum mechanics, humans may never be able to understand some of the scientific principles that AI discovers and harnesses."
This is where the piece becomes genuinely thought-provoking. Smith isn't just saying AI will find things—he's saying we won't understand what it finds. The question shifts from "what will AI discover?" to "can we even comprehend what we've discovered?"
The counterargument here is worth considering. Critics might note that this framing risks sounding like technological mysticism: it assumes AI will find something we can't understand, without specifying what that something would be or how we'd verify it's useful. Smith's essay from three years ago attempted to answer this, but the gap between "a complex pattern exists" and "we can't communicate it" remains wide.
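There is, though, a mundane partial answer to the verification worry, sketched below under my own assumptions (an illustration, not anything Smith or Claude proposed): even if a pattern is too tangled to state as a law, its usefulness can be checked the boring way, by holding out data and scoring predictions. The function f and the nearest-neighbor predictor are arbitrary stand-ins for "a stable pattern nobody can write down."

```python
import math
import random

random.seed(1)

def f(x):
    # Stand-in for a "complex but stable" regularity in nature.
    return math.sin(5 * x) * math.cos(3 * x * x) + 0.3 * math.sin(17 * x)

# Observations of the pattern, split into training and held-out sets.
xs = [random.uniform(0, 3) for _ in range(2_000)]
train, test = xs[:1_500], xs[1_500:]

def predict(x, k=5):
    # Black-box predictor: average the k nearest observed examples.
    # It encodes no human-readable law, just stored experience.
    nearest = sorted(train, key=lambda t: abs(t - x))[:k]
    return sum(f(t) for t in nearest) / k

# Verification without comprehension: score the black box on unseen points.
mse = sum((predict(x) - f(x)) ** 2 for x in test) / len(test)
baseline = sum(f(x) ** 2 for x in test) / len(test)  # always-predict-zero
print(f"held-out MSE: {mse:.4f} vs. zero-baseline: {baseline:.4f}")
```

The held-out error is the trust mechanism: we can confirm a model captures something real without ever extracting a principle from it. That's weaker than understanding, but it isn't mysticism.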
The Materials Science Preview
What makes this conversation genuinely valuable is the concrete discussion of what's actually coming in materials science. Claude's list isn't speculative hand-waving—it includes specific technologies like room-temperature superconductors (which would be "civilization-altering"), solid-state electrolytes for batteries, direct air capture sorbents, and topological materials.
The timeline estimates are refreshingly honest. For solid-state electrolytes, Smith notes they're "essentially already here" in terms of proof of concept—Toyota, Samsung SDI, and QuantumScape are targeting late-2020s production. But for room-temperature superconductors? "15-30+ years after a genuine PoC," with the LK-99 fiasco serving as a cautionary example of AI pattern-matching "in the dark."
The most grounded prediction is probably solid-state batteries: three to eight years for commercial scale, because "this is probably the nearest-term item on the list." The furthest out? Designer proteins and biomimetic materials—"artificial spider silk has been '5 years away' for 20 years, because the biology-to-manufacturing gap is real."
Bottom Line
Smith's strongest contribution isn't his optimism about AI breakthroughs—it's his epistemological humility. The most interesting question isn't whether AI will discover something useful; it's whether we'll be able to understand what we've found.
The argument's weak point is the leap from "LLMs work without simple laws" to "AI science will produce principles humans can't comprehend." The first claim is demonstrably true; the second requires more argument than Smith has provided. But for readers willing to engage with that gap, this conversation offers something rare: a serious intellectual treatment of what AI might actually do to our understanding of science.
Related reading: Smith's referenced essay from three years ago, his New Year's essay on compression, is essential background for anyone who wants the deeper treatment. For the materials science angle, check out recent work on topological materials and their applications in quantum computing.