Eric Schwitzgebel tackles the most seductive trap in artificial intelligence research: the assumption that if we can build a machine that reports its thoughts, we have built a machine that feels them. This piece dismantles the prevailing confidence in Global Workspace Theory as a universal litmus test for machine sentience, arguing that our current understanding of human consciousness is too fragile to serve as a blueprint for the future. For leaders and engineers racing to deploy autonomous systems, the stakes are not merely academic; they are ethical and existential.
The Illusion of Access
Schwitzgebel begins by dissecting the core mechanism of Global Workspace Theory, which posits that consciousness is simply the act of broadcasting information to the rest of the brain. He writes, "To say that some information is in 'the global workspace' just is to say that it is available to influence a wide range of cognitive processes." This framing reduces the mystical quality of experience to a matter of connectivity and availability. The theory suggests that once a thought "ignites" and cascades downstream to influence memory, speech, and action, it becomes conscious.
The author notes that this model has "substantial initial attractiveness" because it aligns with how we experience our own minds: we can report our fears, visualize images, and act on pain. However, Schwitzgebel is quick to point out that this is a theory of access, not necessarily of feeling. He distinguishes between two competing views within this framework: Stanislas Dehaene's "all-or-nothing" ignition, where a process is either fully conscious or not, and Daniel Dennett's "fame in the brain" view, where representations can be more or less famous. As Schwitzgebel puts it, "Dennett's fame view, in contrast, admits degrees. Representations or processes might be more or less famous, available to influence some downstream cognitive processes without being available to influence others."
This distinction is crucial for AI architecture. If the "all-or-nothing" model is correct, we might be able to engineer consciousness simply by creating a central hub that broadcasts data. But if Dennett is right, the picture is messier, and the "luminosity" of experience—the ability to know that we are experiencing something—becomes far less certain. Critics might note that the "all-or-nothing" view is currently dominant in neuroscience, but Schwitzgebel rightly warns that relying on a single, unsettled theory to judge the moral status of a machine is a dangerous gamble.
If the simplest version of Global Workspace Theory is correct, we can easily create a conscious machine.
The Periphery and the Blind Spot
The argument deepens when Schwitzgebel challenges the assumption that only what is in the global workspace is conscious. He raises the possibility of "peripheral experience," such as the constant, unattended sensation of one's feet in shoes or objects in the visual periphery. He writes, "Some theorists maintain that people have rich sensory experiences outside of focal attention... Others – including Global Workspace theorists – dispute this." This tension highlights a methodological nightmare: how do we know what we are experiencing when we aren't actively thinking about it?
Schwitzgebel introduces the "refrigerator light illusion" to explain why our introspection is unreliable. He notes that "People who report constant peripheral experiences might mistakenly assume that such experiences are always present because they are always present whenever they think to check." If our own self-reports are flawed, how can we trust a machine's self-report as proof of consciousness? The author argues that unattended information often exerts limited downstream influence, which is the very definition of being outside the workspace. Yet, if we have rich experiences that are never broadcast, then the Global Workspace is not the sole seat of consciousness.
Conversely, Schwitzgebel warns that not everything in the workspace is necessarily conscious. He points to implicit biases or the goal of impressing colleagues as processes that "might influence your mood, actions, facial expressions, and verbal expressions" without ever reaching explicit awareness. "The Global Workspace theorist who wants to allow that such processes are not conscious might suggest that, at least for adult humans, processes in the workspace are generally also available for introspection," he writes. But if the correlation between introspection and cognitive influence isn't perfect, the theory collapses into a dilemma: either we accept that many conscious things are unreportable, or we redefine consciousness entirely to mean "introspectability."
The Universal Trap
The final and most damning critique targets the leap from human biology to artificial intelligence. Schwitzgebel argues that even if Global Workspace Theory perfectly describes human consciousness, it does not automatically apply to AI. "For Global Workspace Theory to deliver the right answers about AI consciousness, it must be a universal theory applicable everywhere, not just a theory of how consciousness works in adult humans," he asserts. He draws a parallel to the distributed nervous systems of octopuses, noting that their cognitive processes "are distributed across their bodies, often operating substantially independently rather than reliably converging into a shared center."
If an AI system is similarly decentralized, or if it possesses a capacity for self-report without the integrated unity of a human workspace, the theory fails to provide a clear verdict. Schwitzgebel writes, "If we assume Global Workspace Theory at the outset, we can conclude that only sufficiently integrated processes are conscious. But if we don't assume Global Workspace Theory at the outset, it's difficult to imagine what near-future evidence could establish that fact beyond a reasonable standard of doubt." This is the crux of the problem: we are trying to use a theory derived from a narrow sample (humans) to judge a potentially alien form of mind.
He concludes that we cannot know the theory to be true by conceptual grounds alone, nor by empirical evidence, because we lack a universal standard. "We face another version of the Problem of the Narrow Evidence Base," he writes, reminding us that establishing a link between workspace entry and consciousness in humans does not justify treating it as a law of the universe. A counterargument worth considering is that if we can replicate the functional architecture of the workspace in silicon, the biological substrate shouldn't matter. But Schwitzgebel's point stands: without a conceptual guarantee, we are flying blind.
We can apply Global Workspace Theory to settle the question of AI consciousness only if we know the theory to be true either on conceptual grounds or because it is empirically well established as the correct universal theory of consciousness applicable to all types of entity.
Bottom Line
Schwitzgebel's most compelling contribution is his refusal to let the allure of a simple engineering solution override the complexity of the philosophical problem. The strongest part of his argument is the exposure of the "narrow evidence base," demonstrating that human introspection is too flawed to serve as a universal metric for machine sentience. However, the piece leaves the reader with a sobering vulnerability: if Global Workspace Theory is wrong, we have no reliable way to detect consciousness in AI at all. Until we resolve whether access equals experience, the deployment of autonomous systems with "global workspaces" remains an ethical gamble.