Richard Hanania makes a provocative claim that defies the current cultural panic: the rise of artificial intelligence will not dumb down society, but rather make it smarter across the political spectrum. While critics fear a flood of misinformation, Hanania argues that for both the conspiracy-prone and the diligent scholar, large language models offer a superior alternative to the flawed information ecosystems we currently inhabit. This is a crucial pivot for anyone tired of the doom-scrolling narrative, suggesting that the very tools blamed for polarization might be the only things capable of fixing it.
The Great Equalizer of Reason
Hanania's central thesis rests on a simple, almost radical premise: AI is a net positive for truth-seeking, regardless of the user's starting intelligence. He writes, "For people who tend to think illogically and get their facts wrong, AI is better than what they have traditionally used to develop their worldviews." This reframing is powerful because it stops treating AI as a magic wand for the elite and starts viewing it as a corrective mechanism for the masses. The argument suggests that the alternative to AI isn't human intuition; it's the chaotic, algorithm-driven echo chambers of social media.
As Hanania puts it, "This is unlike the internet, which I have argued has been good for some people and bad for others." He posits that while the early web created a divide between the information-rich and information-poor, AI has the potential to flatten that curve. The core of his reasoning is that the bottleneck for good writing and sound reasoning has shifted from access to information to the ability to process it. By outsourcing the mechanical drudgery of drafting and fact-checking, even those with limited writing skills can produce work that is accurate and clear. He notes, "Maybe in a decade, intellectuals are just basically polishing AI content, or editing it to fit their views." This prediction challenges the romanticized notion of the solitary genius, suggesting instead a future where the value lies in curation and verification rather than raw composition.
"If a piece of journalism, scholarship, or commentary is accurate, well-reasoned, and illuminating, the division of labor between human and machine is largely irrelevant."
Critics might argue that this view ignores the loss of human nuance or the risk of homogenized thought, but Hanania counters that the alternative—sloppy, unverified human writing—is far worse. He draws a parallel to the word processor, noting that "People's penmanship almost certainly got worse after the invention of word processors," yet no one treats the decline of cursive as a loss to the world of ideas. The skill that atrophies is often the one that was never the point of the exercise anyway.
The Ethics of Outsourcing
The piece tackles the moral panic surrounding AI use with a pragmatic, almost utilitarian approach. Hanania asks us to imagine a brilliant thinker whose work is entirely generated by AI, yet remains factually impeccable and logically sound. He argues, "The only argument I see you can make against him is that he was not fully honest with his audience, but this raises the question of why we should care in the first place." This is a bold challenge to the current obsession with disclosure. Hanania suggests that if the output is high quality, the process is secondary. He compares the stigma against AI to the prohibition of steroids in sports, but argues the analogy fails: "LLMs are available to everyone, and they don't cause long-term health damage, so this is not the same thing."
He extends this logic to the economic fears of the writing profession, dismissing the idea that AI will destroy jobs as a "classic lump of labor fallacy." Instead, he envisions a future where AI enables new forms of journalism that were previously economically unviable. "In local journalism, we are going to have to choose between AI reporters and not having much news coverage at all," he writes. This is a stark reality check for a media landscape where local news deserts are growing. The alternative to AI-assisted reporting isn't a team of human journalists; it's silence.
Hanania also addresses the fear that AI will erode critical thinking. He finds this concern "silly," comparing it to the early backlash against search engines: "Oh, you can just Google something? Doesn't that rot your brain, when before you would have had to learn the Dewey Decimal System?" He acknowledges that the internet made some people dumber, but insists it also "made the best quality work smarter." The lesson here is that technology amplifies intent; for those seeking truth, AI is a force multiplier, not a crutch. He admits that "skills often atrophy when they're no longer necessary, and that's fine," provided the new skills—like determining truth in AI output—are exercised.
The Real Risks Are Human, Not Mechanical
Perhaps the most insightful part of Hanania's commentary is his distinction between the tool and the user's integrity. He uses the analogy of plastic surgery to describe the current media frenzy: "Current AI scandals remind me of discussions over botched plastic surgery. Once in a while, you'll see someone whose face has been disfigured, and people will use it to advise against getting any work done." He argues that we ignore the millions of successful, natural-looking enhancements because the failures are more sensational.
He cites the case of a UK writer who included fake quotes in a book, attributing the error not to the AI, but to the author's carelessness. "The problem here though isn't AI, but rather that he is apparently a careless researcher who didn't know enough about the tool he was using to realize that you need to check its references," Hanania writes. This shifts responsibility from the technology to the human operator, an important distinction. The real danger isn't that AI will lie; it's that humans will stop verifying the truth. He concedes that there are narrow domains, like memoirs, where undisclosed AI use is dishonest, but for analytical writing on policy or history, the standard should be accuracy, not the method of production.
"We already judge work by its quality rather than whether someone used reading glasses, spell-check, or online databases; AI should be treated no differently."
Hanania's own practice reinforces his argument. He reveals that he writes op-eds in an hour and sees no need to use AI for drafting because his bottleneck isn't writing speed. However, he recognizes that for others, "AI for writing is no different from glasses for those who can't see well: a technological fix to a natural shortcoming." This analogy is particularly effective in dismantling the elitist view that writing must be a purely manual, unassisted struggle to be valid.
Bottom Line
Hanania's argument is strongest in its refusal to romanticize the past; he correctly identifies that the pre-AI era was already rife with misinformation and that AI offers a path to higher collective intelligence. His biggest vulnerability is the assumption that human verification will remain robust in an era of high-volume AI output; history suggests such diligence is difficult to sustain. The reader should watch for how institutions adapt their fact-checking protocols, as the tool is ready, but the human guardrails are still being built.
Ultimately, the piece succeeds in shifting the debate from "Is AI cheating?" to "Is the work true?" By framing AI as a tool for accessibility and accuracy rather than a threat to authenticity, Hanania offers a surprisingly optimistic roadmap for the future of ideas. As he concludes, "The sensible response is not to police its use, but to demand higher standards of accuracy and argument, regardless of how the words were produced."