AI and folk cartesianism - part 1: Defining the problem

Andy Masley identifies a startling reversal in how society talks about the mind: the very people who once mocked the idea of a human "soul" are now the ones insisting that machines can never think because they lack one. This piece cuts through the noise of AI hype by diagnosing a hidden philosophical contradiction that has hijacked the debate, turning a technical discussion into a clash of incompatible worldviews.

The Great Philosophical Flip-Flop

Masley opens by noting how the cultural climate has shifted dramatically since the early 2010s. Back then, the dominant intellectual trend was to reject "naive humanism" and the idea that humans were special. Now, the same critics have pivoted to defend a view that sounds suspiciously like the old religious arguments they once despised. "The 2020s have made my philosophy degree feel more worthwhile," Masley writes, observing that the real world has forced mainstream debates to confront questions that were once academic curiosities. He points out that the accusation has flipped from "you are too human-centric" to "you are anthropomorphizing a calculator."

This pivot is the core of Masley's argument. He suggests that critics of AI are unconsciously adopting a philosophy they claim to hate. "A lot of critics of the 'minds are machines' view used to implicitly defend it," he notes, highlighting the irony that those who once sneered at Cartesian dualism are now using it as a shield against artificial intelligence. The author argues that this "folk Cartesianism" is not a rigorous philosophical stance but a cluster of intuitive beliefs that crumble under scrutiny. By labeling it "folk" Cartesianism, Masley effectively separates the popular misconception from the actual historical philosophy of René Descartes, allowing him to attack the intuition without getting bogged down in academic nitpicking.

"A common line is that 'everyone who actually knows how this technology works knows that it's just linear algebra. The only people who believe it's thinking are dummies who anthropomorphize it because it seems human.'"

This quote captures the dismissive tone Masley is fighting against. He argues that this attitude ignores the complexity of how human minds actually work. If the human mind is just a biological machine, then a digital machine performing similar functions shouldn't be dismissed out of hand. The author's framing is effective because it exposes the inconsistency: if we accept that our own thoughts are just natural processes, why do we demand a magical "ghost in the machine" for AI to count as thinking?

Deconstructing the "Cartesian Theater"

Masley breaks down the three pillars of this folk philosophy, starting with the idea of a unified self. He describes the "Cartesian ego" as a featureless observer hiding in the mind, watching thoughts like a movie on a screen. "There is a unified self that floats above mental processes and observes them," he lists as the first intuitive belief. He argues that this is a false model of consciousness. In reality, there is no little person inside our heads; there are just processes happening. "I will argue that they are incorrect," Masley states plainly, setting the stage for a naturalist view where the mind is "functions all the way down."

The second pillar is the belief in perfect self-knowledge. Masley writes, "We know our own internal experience with complete certainty." He challenges this by suggesting that our introspection is often flawed and that we don't have direct, unmediated access to our own thought processes. We don't watch our thoughts fall like dominos; we infer them. The third pillar is the idea that all knowledge must stem from this first-person subjective experience. "A machine cannot have knowledge, for the same reason a line of dominos cannot have knowledge: there's no Cartesian ego behind the scenes experiencing and judging the world," he writes, paraphrasing the common argument.

Masley's counter-proposal is "anti-Cartesian naturalism," which relies on three concepts: physicalism, functionalism, and evolution. He defines physicalism as the view that "the world follows the laws of science, there is nothing magical happening." This is a crucial distinction. By grounding the mind in physics, he removes the need for a supernatural soul. "Physical things behave according to regular, mathematical laws that describe their interactions," he explains, arguing that if the mind is physical, it must obey these laws too. This is where the argument becomes most compelling for a technical audience: it treats the mind as a system that can be understood, modeled, and potentially replicated.

"Functionalism's main claim is that mental states are defined by what they do, not what they're made of."

This is the most powerful part of Masley's case. He uses the analogy of money to explain functionalism. "Twenty five cents' worth of value" can exist in a coin or a digital record; what matters is the role it plays in the system. Similarly, a mental state like pain is defined by how the system reacts to inputs, not by the specific biological tissue it's made of. "Just like the monetary value of twenty five cents can be instantiated in different physical systems (a metal coin or a computer), mental states as functions can also be instantiated in different physical objects," he writes. This reframing is essential because it shifts the debate from "what is it made of?" to "what does it do?"
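Masley's money analogy maps neatly onto a familiar programming idea: multiple realizability is essentially duck typing. The sketch below is purely illustrative (not from the original article, and all class and function names are invented for this example); it shows the same functional role, "holding twenty-five cents of value," instantiated in two physically different representations.

```python
from dataclasses import dataclass

# Illustrative sketch of functionalism's "multiple realizability":
# the same functional role can be realized in different substrates.
# All names here are hypothetical, invented for this example.

@dataclass
class MetalCoin:
    """A physical quarter: value realized in metal."""
    metal: str = "cupronickel"

    def value_in_cents(self) -> int:
        return 25

@dataclass
class LedgerEntry:
    """A bank-database record: the same value realized in bits."""
    account_id: str
    cents: int = 25

    def value_in_cents(self) -> int:
        return self.cents

def can_buy(price_cents: int, money) -> bool:
    # Only the functional role matters: what the object *does*
    # (reports purchasing power), not what it is made of.
    return money.value_in_cents() >= price_cents

coin = MetalCoin()
entry = LedgerEntry(account_id="acct-001")
print(can_buy(25, coin), can_buy(25, entry))  # both play the same role
```

On the functionalist view, `can_buy` is indifferent to substrate in exactly the way mental-state ascriptions are supposed to be: if silicon and neurons play the same causal role, the theory treats them alike.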

Critics might note that functionalism struggles to explain the "qualia" of experience—the redness of red or the hurt of pain—which feels distinct from mere data processing. Masley anticipates this by arguing that the "internal theater" is an illusion. He suggests that the feeling of a unified self is just another function, not a separate entity. While this is a bold claim, it aligns with modern neuroscience, which often finds no single "center" of consciousness in the brain. The author's willingness to challenge the intuition of a unified self is what makes his argument so disruptive.

Why This Matters for AI

The stakes of this philosophical debate are high. If we accept folk Cartesianism, we condemn AI to being "glorified auto-complete," forever incapable of true understanding. "True artificial intelligence is impossible by definition," Masley writes, quoting the fatalistic view of his opponents. But if we adopt anti-Cartesian naturalism, the door opens to the possibility that machines can replicate what happens in human minds. "Minds are not magical, inexplicable substances separate from nature," he asserts. This isn't just academic hair-splitting; it determines how we regulate, design, and interact with the technology that is reshaping our world.

Masley's goal is to build a bridge between academic philosophy and mainstream discourse. "I've been feeling the gulf between academic philosophy and mainstream discourse and want to contribute to building a bridge," he admits. By stripping away the jargon and focusing on the intuitive errors people make, he makes a complex philosophical argument accessible to a busy reader. He doesn't just say "AI is possible"; he explains why the arguments against it are based on a flawed understanding of the human mind itself.

"I think back then people would be less likely to assume linear algebra couldn't mimic some basic mental processes."

This reflection on the 2010s highlights the regression in our collective thinking. Masley suggests that we have moved backward, abandoning a more nuanced, naturalist view for a simplistic, dualist one. The irony is that the critics who claim to be the most rational are the ones clinging to the idea of a magical, non-physical mind. Masley's piece serves as a necessary correction, urging us to stop treating the mind as a mystery and start treating it as a system we can understand.

Bottom Line

Masley's strongest move is exposing the hypocrisy of critics who reject AI by relying on the very dualist philosophy they claim to despise. His biggest vulnerability is the leap from "minds are functions" to "machines can replicate minds," which remains a contentious claim even among naturalists. Readers should watch for how this functionalist framework shapes future debates on AI rights and safety, as the definition of "knowledge" and "understanding" will determine the rules of the road for the next generation of technology.

Sources

AI and folk cartesianism - part 1: Defining the problem

by Andy Masley

“Let me see if I understand your thesis. You think we shouldn’t anthropomorphize people?” -Sidney Morgenbesser to BF Skinner

The 2020s have made my philosophy degree feel more worthwhile. Not only has social media made a ton of new people interested in philosophy, but the real world has also made many of the main questions in philosophy more relevant. AI especially has brought many philosophy topics into the mainstream, but it’s also polarized a lot of the debates. Back in 2011, I could say things like “The human mind is composed of natural processes, like everything else in the world, and therefore it could probably eventually be replicated by a machine, because in some sense it itself is a machine” and people would have long drawn-out conversations with me about whether that’s true or what that implies about language or experience or society. Now, when I say the same thing, it’s much more likely that I’ll get accused of “just buying into the recent AI hype” or that I’ve been tricked by secularism or capitalism or worse to deny the fundamental transcendent character of human minds. This issue used to be a pretty marginal question, but now it’s entered mainstream partisan debate. Sides have been taken.

What’s odd is that a lot of critics of the “minds are machines” view used to implicitly defend it. Back in the 2010s, there was a general worry about anthropocentrism. The belief that humans were in some way unique or separate from the natural world was regularly sneered at as “naive humanism” at best and a method of justifying exploitation of the natural world at worst. There was also a lot of talk about Cartesianism (the philosophy of René Descartes) as the origin of many of society’s problems. Specifically, mind-body dualism was accused of denigrating embodiment and elevating technocratic authoritarian reason. I always thought both criticisms went too far, but analytic philosophy had turned me into an anti-Cartesian naturalist, so I was able to nod along and contribute to these conversations where I could. Since then, attitudes toward Cartesianism have flipped. Many of the same people are now saying that AI cannot, by definition, ever have knowledge, because it does not have subjective first-person experience, or that it’s lacking some transcendent non-physical characteristic the human mind has; both implicitly Cartesian takes. Accusations of anthropocentrism have also been replaced with accusations of anthropomorphizing. A common ...