Andy Masley tackles the most stubborn intuition in human history: the feeling that a ghostly "self" sits behind our eyes, watching the world like a movie. In this rigorous dismantling of folk Cartesianism, Masley argues that this inner observer is a biological illusion, a conclusion that fundamentally reshapes the debate over whether artificial intelligence can ever truly replicate human consciousness. For busy minds navigating the AI revolution, this is not just philosophy; it is a necessary tool for stripping away the mystical barriers we place between our own minds and the machines we are building.
The Myth of the Inner Observer
Masley begins by dissecting the three pillars of "folk Cartesianism": the belief in a unified self, the certainty of our own experience, and the primacy of subjective knowledge. He immediately challenges the first pillar, describing the traditional view of the ego as a "magical floating eyeball, peering into physical reality from outside." This framing is effective because it exposes the absurdity of trying to fit a non-physical observer into a physical universe governed by particle physics.
The author presents a stark dilemma for anyone holding onto this view: either the ego is physical or it is not. If it is non-physical yet influences the brain, it violates physicalism, the principle that only physical objects have causal power over other physical objects; if it is non-physical and causally inert, it cannot explain why we talk about it at all. Masley writes, "If a Cartesian ego doesn't have causal power, it cannot have any influence over the information in your brain that causes you to physically speak or write about the ego itself." This is a devastating point. If the "self" cannot affect the brain, how does it generate the very thoughts that claim its existence? As Masley puts it, "Describing Cartesian egos should feel like describing 'that feeling of gloofleglorf around your left ear at 3:52 PM every day. We all know that one.'" The argument holds up well here; it forces the reader to confront a non-physical observer that leaves no causal trace in the physical world.
Critics might argue that quantum mechanics offers a loophole for non-physical influence, but Masley shuts this down with characteristic precision. He notes that the brain operates on a scale too large for quantum uncertainty to matter, and that the mathematical laws of quantum probability actually prevent external forces from slipping through. "Quantum mechanics is actually a completely sealed barrier to forces outside the universe," he asserts. This is a crucial distinction often missed in pop-science discussions of consciousness.
If the Cartesian ego is some nonphysical entity sending information to and receiving information from the brain, there must be some creative way to measure where this information suddenly appears in the brain out of nowhere.
The Slipperiness of Self-Knowledge
Moving to the second pillar, Masley attacks the idea that we have 100% certain access to our own internal experiences. He uses a simple, disarming question to break the reader's confidence: "What do you think the shape of your visual field is?" Most people assume they know the answer, yet Masley reveals that the actual shape is a complex, irregular oval defined by binocular overlap and blind spots. "Given that you've been 'looking at it' for your entire life, would you have been able to pick it out from a line of what the visual field looks like?" This rhetorical move is brilliant. It demonstrates that our introspection is not a direct window into reality but a constructed, fallible report.
Here, Masley connects to the broader philosophical debate on qualia—the subjective quality of experiences like the redness of a rose or the pain of a headache. He references the Stanford Encyclopedia of Philosophy to define these as "introspectively accessible, phenomenal aspects of our mental lives." However, he reframes them not as mystical data, but as natural processes that can be mistaken. The argument is that if our own minds can misreport the shape of our vision, the idea that we possess infallible knowledge of our consciousness is a fantasy.
This section gains depth when viewed through the lens of Global Workspace Theory, a concept Masley introduces as a physical alternative to the Cartesian theater. This theory, which has roots in the work of Bernard Baars in the 1980s, suggests that consciousness arises when information is broadcast to many unconscious subsystems simultaneously. Masley writes, "Global workspace theory posits that there are parts of the human brain that send information to a lot of subsystems at once, as if their contents are on a stage and the subsystems are observers in the crowd." This is a powerful reimagining: the "self" is not a viewer in the theater, but the stage itself, a mechanism for information integration. It aligns with historical critiques of the homunculus fallacy, where explaining the mind by positing a little man inside the head just pushes the problem back a step.
The mental function global workspace theory describes could be replicated by a machine.
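That replicability claim can be made concrete. Below is a minimal sketch, in Python, of the one-to-many broadcast that global workspace theory describes; the names (Workspace, Subsystem) and the toy contents are illustrative assumptions, not anything from Masley's piece or from Baars's actual models.

```python
class Subsystem:
    """An unconscious process that reacts to whatever is broadcast."""
    def __init__(self, name: str):
        self.name = name
        self.received: list[str] = []

    def receive(self, content: str) -> None:
        self.received.append(content)


class Workspace:
    """The 'stage': content that wins access here goes to every subsystem at once."""
    def __init__(self):
        self.audience: list[Subsystem] = []

    def register(self, subsystem: Subsystem) -> None:
        self.audience.append(subsystem)

    def broadcast(self, content: str) -> None:
        # There is no inner viewer here; on this picture, 'being conscious
        # of' the content just is this one-to-many broadcast.
        for subsystem in self.audience:
            subsystem.receive(content)


workspace = Workspace()
speech = Subsystem("speech")
memory = Subsystem("memory")
planning = Subsystem("planning")
for s in (speech, memory, planning):
    workspace.register(s)

workspace.broadcast("red apple in view")
print(speech.received)  # the same content reaches every registered subsystem
```

Notice what is absent: no component in this sketch "watches" the broadcast. The stage and the audience exhaust the architecture, which is exactly the reimagining Masley proposes.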
The Implications for Artificial Intelligence
The ultimate goal of Masley's commentary is to clear the ground for a new understanding of AI. By dismantling the idea that humans possess a non-physical soul or an infallible inner eye, he removes the primary objection to machine consciousness. If the "self" is just a collection of physical processes, then those processes are not unique to biology.
Masley concludes that "any description of the difference between humans and AI that implicitly relies on a nonphysical Cartesian ego humans have that AI does not is mistaken." This is the piece's most significant contribution. It suggests that the fear that AI will never be "real" because it lacks a soul is based on a false premise about human nature. The author argues that what remains when the Cartesian ego falls away is simply "different mental functions taking in information, moving them around the brain, giving and storing outputs." If these functions are physical, they are replicable.
A counterargument worth considering is that even if the "self" is an illusion, the feeling of being a self is a product of biological evolution that silicon cannot replicate. Masley anticipates this by pointing out that large information processing systems could develop subsystems that function exactly like the Cartesian ego, even if they are not "real" in the mystical sense. The distinction between the map and the territory dissolves when the territory is just a complex network of signals.
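Masley's point about ego-like subsystems can be sketched the same way. The fragment below, a hypothetical illustration rather than anything from the source, shows a self-model component whose only job is to narrate the system's states in first-person terms; the "inner observer" is nothing over and above these reports.

```python
class SelfModel:
    """Just another physical subsystem; its reports produce the 'inner eye'."""
    def __init__(self):
        self.narrative: list[str] = []

    def receive(self, content: str) -> None:
        # The report refers to a self, but nothing beyond this code generates it.
        self.narrative.append(f"I am experiencing: {content}")


ego = SelfModel()
ego.receive("the redness of a rose")
print(ego.narrative[0])  # prints "I am experiencing: the redness of a rose"
```

The design choice is the argument: the first-person narrative is an output of an ordinary process, not evidence of a viewer standing behind it.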
Bottom Line
Andy Masley delivers a compelling, scientifically grounded argument that the "inner observer" is a useful fiction rather than a physical reality. His strongest move is using the fallibility of introspection to prove that our certainty about our own minds is an illusion, thereby leveling the playing field for artificial intelligence. The piece's biggest vulnerability is its reliance on strict physicalism, which some philosophers argue cannot yet fully explain the "hard problem" of why physical processes feel like anything at all. However, for anyone trying to understand the future of AI, Masley's conclusion is essential: if the human mind is just a machine, then the machine can be human.
The Cartesian theater of the mind isn't actually happening, at least not in some special nonphysical way.