Bits in, bits out

Erik Hoel cuts through the fever dream of artificial intelligence hype by asking a brutally simple question: if these systems were truly superintelligent, why is the quality of human writing getting worse? In an era where venture capital floods the sector and headlines scream of imminent automation, Hoel offers a rare, data-grounded reality check, arguing that Large Language Models are not a new source of surplus intellect, but merely efficient tools that amplify the mediocrity of their training data.

The Tool, Not the Oracle

Hoel begins by grounding the debate in our evolutionary history, reminding us that humans are defined not by biology alone but by our capacity to craft and use instruments. He invokes the philosopher Henri Bergson, who argued we should call ourselves "Homo faber," meaning "man the maker," and suggests that our cognition itself is a tool for navigating the world. This framing is crucial because it sets up his central thesis: if AI is a tool, its output should reflect the user's intent and the tool's efficiency, not a sudden leap in collective wisdom.

"We are not in a glut of good writing. We are in a dearth of it."

This observation strikes a chord because it contradicts the prevailing narrative of an "AI Tsunami" of brilliance. Hoel points out that while we have seen an explosion in the volume of text, the quality has stagnated or declined. He notes that if these models were a true "source of intelligence to rival humans," discovering them should be like discovering oil—a massive surplus of value. Instead, he argues, we are seeing a "blurry jpeg" of human culture, where the unique, resistant spark of the artist is smoothed away into generic, "slop" content.

Critics might argue that Hoel is measuring the wrong metric, suggesting that AI's true value lies in democratizing creation for non-experts rather than elevating the highest tier of art. However, the Amazon-ratings study Hoel cites suggests that even the "democratization" effect is largely an illusion, with the average book actually getting worse in the post-LLM era.

The Illusion of the Singularity

The piece takes a sharp turn when Hoel dismantles the idea that we are on the verge of an intelligence explosion. He contrasts the current state of AI with the legendary "Move 37" moment in AlphaGo, where the system played a move so creative it stunned human experts. In writing, Hoel asserts, there has been no equivalent breakthrough.

"There has been no 'move 37' moment for writing... Not even close."

He illustrates this by testing the latest models on tasks that require genuine creativity, such as writing a children's book from scratch. The results, he claims, are "exactly what a smiling alien would write if it had never interacted with a child before." This analogy is particularly effective because it highlights the lack of lived experience in the models. They can mimic the structure of a story but lack the emotional resonance that comes from understanding concepts like unrequited sacrifice or metamorphosis, themes Hoel identifies in classics like The Giving Tree or The Very Hungry Caterpillar.

"If you want to write an actually good children's book you've got to put your own perspective into it, and LLMs are 'views from nowhere.'"

This distinction between "approximation" and "automation" is the core of Hoel's argument. He suggests that the industry is confusing the ability to generate passable text with the ability to create meaningful art. The "stochastic parrot" criticism, often dismissed by enthusiasts, is re-emerging as a valid description of how these models operate: they are predicting the next likely word, not generating new ideas.

The Data Behind the Hype

Hoel bolsters his philosophical arguments with empirical evidence, citing a study on Amazon data that tracks book ratings before and after the rise of LLMs. The findings are stark: while the top 1,000 books saw slight improvements, the average book quality dropped significantly. This suggests that the technology is primarily being used to mass-produce low-quality content, flooding the market with "slop" that dilutes the overall ecosystem.

"The actual effects of LLMs on publishing were that: (a) the average book got worse, (b) the top 1,000 books in each category improved somewhat, and (c) the top 100 books in each category didn't change in quality."

This data challenges the narrative that AI will soon automate complex fields like science and mathematics. If the technology struggles to produce a coherent children's book without heavy human guidance, Hoel argues, it is unlikely to suddenly revolutionize academic research or mathematical proof. He points out that even in coding, where companies claim to have automated production, the reality is that engineers are still essential, just working at a higher level of abstraction.

"Engineering is changing and great engineers are more important than ever."

This quote, attributed to Boris Cherny, underscores the "tool" theory: AI is a force multiplier for skilled workers, not a replacement for them. The hype, Hoel suggests, is driven by a desire to believe in a technological savior rather than a realistic assessment of the technology's current limitations.

Bottom Line

Erik Hoel's argument is a necessary antidote to the intoxicating narrative of imminent superintelligence, grounding the AI debate in the tangible reality of declining content quality. His strongest point is the distinction between efficiency and intelligence: mass-producing text does not equate to creating wisdom. The argument's vulnerability lies in its reliance on current benchmarks, which may not capture rapid, non-linear progress in future models. For now, though, the evidence suggests we are drowning in a sea of average, not ascending to a peak of genius.

"We are not in a glut of good writing. We are in a dearth of it."

As the industry pivots from writing to video and science, the lesson from the publishing world should be a warning: without the human element of resistance and perspective, automation may simply mean the acceleration of mediocrity.

Sources

Bits in, bits out

by Erik Hoel

When I was ten years old I visited the ruins in Cornwall where King Arthur had been conceived and born, at least in legend. There at Tintagel Castle, surrounded by the ocean air and the jagged rocks, I separated from my mother and sister and made my way down to the beach. And on the beach of Tintagel, right near Merlin’s cave, I spotted it in the sand. A stone. But not just any stone—a stone ax head. It could have been nothing else. It was shaped just like an ax, being unnaturally thick at the head, which was a smoothed blunt blade, and with near right angles it tapered to a point at the back. There was a notched cleft down the middle, to tie it to a shaft. As a 10-year-old boy in a place already dreamy with legend, I pocketed it, for I felt the Neolithic ax had come to me specifically, as if a Lady of the Lake had tossed it ashore. I am looking at it on my desk now.

Later I learned that such Neolithic axes are not so rare—much like ancient Roman coins, they were mass-produced, and you can buy them on eBay cheaply due to finds like mine. This one from Tintagel beach is a perfect specimen, although the ocean likely washed away its provenance. But on that day, even amid all the other sandy stones, it stood out to me immediately. I knew it was a tool instinctively, the way a baby knows the nipple.

The philosopher Henri Bergson wrote that:

We should say not Homo sapiens, but Homo faber.

Homo faber means “man the maker.” For if anything defines humans, it is tool use. I know it is now standard, in our rush to dethrone humanity, to play up that other animals also sometimes use tools. But unlike other animals, tools are our evolutionary niche. We have been making stone tools for at least 3.3 million years. We co-evolved with tools. At first, we made them from wood and bone and stone; later we began to craft abstract tools too. Language is a tool. Math is a tool. All of our vaunted cognition is, in some sense, a tool for a more protean mental firmament, which probably is consciousness itself. Heidegger’s term for this aspect of our consciousness was Zuhandenheit: “readiness to hand.”

Now, we live in an age of ...