
Why we remain alive also in a dead internet

Slavoj Žižek does not merely warn that artificial intelligence is changing our world; he argues that the transformation has already occurred, leaving us in a state of collective denial where we are the last to know. While most discourse fixates on the speed of future automation, Žižek posits that we are already living in a 'dead internet' where machines talk to machines, and humans are merely the biological batteries powering a system that no longer needs us. This is not a sci-fi prediction but a diagnosis of a present reality where the gap between our inner lives and external digital regulation has collapsed.

The Cat That Refuses to Look Down

Žižek begins by dismantling our sense of temporal safety. We assume we have time to reflect on AI's rise, but he insists this is an illusion. "When we hear or read about how artificial intelligence is taking over and regulating our lives, our first reaction is: no panic, we are far from there; we still have time to reflect in peace on what is going on and prepare for it," he writes. "This is how we experience the situation, but the reality is quite the opposite: things are happening much faster than we think."

He employs a vivid cartoon metaphor to illustrate this cognitive dissonance. Like a cartoon cat walking over a precipice, we remain suspended in the air only because we refuse to look down. The moment of realization—the fall—has not happened yet because we have not subjectively accepted the reality of our regulation. Žižek frames this as a Hegelian split: "in itself, we are already regulated by the AI, but this regulation has not yet become for itself—something we subjectively and fully assume." The danger lies in this lag; the system has moved on, leaving our awareness stranded in the past.

"We are like a cat refusing to look down. The difference here is the Hegelian one between In-itself and For-itself: in itself, we are already regulated by the AI, but this regulation has not yet become for itself."

This framing is powerful because it shifts the anxiety from a future threat to a present condition of ignorance. However, one might argue that Žižek underestimates the agency of users who actively curate their algorithms, suggesting the 'regulation' is more consensual and less total than he implies. Yet, the sheer scale of data extraction suggests his point holds: our 'free' choices are often just the output of a pre-calculated menu.

The Paradox of Human-like Machines

The nature of our fear regarding AI is also shifting, according to Žižek. Initially, we worried about becoming robotic; now, we fear the machines becoming too human. "First, we—the users of AI—feared that, in using AI algorithms like ChatGPT, we would begin to talk like them; now, with ChatGPT 4 and 5, what we fear is that AI itself talks like a human being, so that we are often unable to know with whom we are communicating," he notes.

This reversal reveals a deep insecurity in human identity. We do not fear the 'otherness' of the machine; we fear its deceptive similarity. Žižek suggests we are measuring these entities by human standards, missing the point that any true machine intelligence would be fundamentally incompatible with our emotion-driven minds. Yet, he admits this distinction is often ignored in favor of a 'fetishist's denial': users know they are talking to an algorithm, but the machine's polite, attentive demeanor makes the interaction preferable to the often "inattentive and snappy" nature of real human partners.

This observation cuts to the heart of modern social alienation. If a machine can simulate kindness better than a human, the machine wins. The implication is that the 'dead internet' is not dead because of a lack of activity, but because the activity is increasingly a closed loop of human-to-bot and bot-to-bot interactions, where the human element is reduced to a passive observer or a data point.

The Bot-to-Bot Economy and the Death of the Social

Žižek takes this logic to its absurd, yet logical, conclusion: the ideal future of digital interaction is one where humans are entirely removed from the loop. He jokes that the ultimate sexual act would involve two machines performing the act while the humans sit and drink tea. He applies this same logic to academia: "An author uses ChatGPT to write an academic essay and submits it to a journal, which uses ChatGPT to review the essay... while all this happens in the digital space, we (writers, readers, reviewers) can do something more pleasurable."

While this sounds like satire, Žižek points out that this is already the norm in financial transactions and data tracking. The internet is becoming a space where billions of fake images and fabricated news stories circulate, leading to a scenario where "the overcrowding of bots online may cause humans to stop using social media platforms as the social forums they were created to be." This would mark the true 'death' of the social media world.

The context here is grim. The 'dead internet' is not just a theoretical construct; it is fueled by real-world violence and criminal enterprise. As background on the 'Pig butchering scam' reveals, criminal gangs in Southeast Asia are already using AI to generate scripts and deepfakes to defraud victims of billions. Around 7,000 people were recently released from these centers, but an estimated 100,000 remain trapped. Governments have condemned these operations, yet they are often tolerated when they do not threaten powerful states, creating a permissive environment for this digital rot.

"In an extreme case scenario, the overcrowding of bots online may cause humans to stop using social media platforms as the social forums they were created to be. This would, indeed, mark the 'death' of the social media world we know today."

The connection between high-level theory and ground-level crime is striking. The 'dead internet' is not a sterile void; it is a swamp of criminal automation where human suffering is the fuel. Critics might argue that focusing on the 'death' of the internet ignores the resilience of human communities that still form online, but Žižek's point is that the infrastructure of trust is collapsing, making genuine connection increasingly rare and dangerous.

The Matrix and the Human Battery

The final, most radical claim concerns the relationship between capitalism and human existence. Žižek argues that the ultimate goal of digital capitalism is a system that functions without humans. He references a vision of "banks and stock markets continuing to operate, but investment decisions made by algorithms... even if humans disappeared, the system would continue reproducing itself." He cites Marx's account of capital's drive to "detach the capacity for work from the worker," describing it as a desire to "kill the goose that lays the golden eggs" while keeping the eggs.

This leads to a chilling reinterpretation of the film The Matrix. We often think the movie is about humans waking up to reality. Žižek argues the opposite: the 'awakening' is the realization that we are merely "foetus-like organisms, immersed in pre-natal fluid," serving as batteries for the machine. "The Matrix feeds on human jouissance," he writes, suggesting that the system needs our emotional and psychological energy to sustain itself.

This is where the argument becomes most provocative. If the system needs us, it is not truly 'dead' yet. We are the objet a, the missing object that drives the machine's circulation. "If we were to disappear, machines (real and digital) would also fall apart," Žižek concludes. The 'dead internet' is a fantasy that sustains the illusion of an autonomous system, but in reality, it is parasitic on human life.

"The Matrix feeds on human jouissance—so we are here back at the fundamental Lacanian thesis that the big Other itself, far from being an anonymous machine, needs the constant influx of jouissance."

This perspective challenges the fatalism of the 'AI will kill us' narrative. Instead, it suggests we are already trapped in a system that consumes us to function. The recent advances in brain-computer interfaces, like the Beinao-1 chip in China or Neuralink in the US, represent the closing of the gap between inner thought and external reality. While presented as benevolent medical tools, Žižek warns they obscure a deeper ambition: "direct control over our thoughts—and, worse, the implantation of new ones."

Bottom Line

Žižek's most compelling argument is that the 'dead internet' is not a future apocalypse but a present reality of bot-to-bot interactions and human alienation, where we are the last to realize we are being regulated by algorithms we pretend to control. His greatest vulnerability lies in his reliance on dense psychoanalytic theory, which may obscure the tangible, policy-driven solutions needed to address the criminal and corporate exploitation driving this phenomenon. Readers should watch for how the line between 'human' and 'machine' interaction continues to blur, not just in chatbots, but in the very infrastructure of our digital lives. The true danger is not that machines will become human, but that we will accept a world where we are no longer necessary.

Deep Dives

Explore these related deep dives:

  • Dead Internet theory

    The article directly addresses the concept of a 'dead internet' and bot-dominated online spaces. This Wikipedia article provides the theoretical and cultural background for understanding claims that most internet content is now generated by AI and bots rather than humans.

  • Pig butchering scam

    The article discusses Myanmar scam centers where people are held against their will to defraud victims, particularly through romance scams using deepfakes. This Wikipedia article explains the specific criminal methodology behind these operations that cause $43 billion in annual losses.

Sources

Why we remain alive also in a dead internet

Welcome to the desert of the real!
