
Meaning without experience #400

The Candle Flame in the Machine Age

Andreas Matthias marks a milestone—400 articles over five years—while asking whether the intellectual community he's built can survive what comes next. His anniversary reflection doubles as a warning about artificial intelligence's transformation of education, labor, and meaning itself.

The Obsolescence of Advice

Matthias opens with gratitude to roughly seventy authors and more than 3,200 subscribers, then pivots to the rupture AI has created. When Daily Philosophy launched in 2020, students wrote their own papers or paid expensive agencies. Now, he observes, educators are in a permanent, losing race to prevent AI-written work. The shift is total. Coca-Cola runs AI-generated Christmas campaigns. AI songs chart. Entire film series are produced without cameras, locations, or actors—only voices.


As Andreas Matthias puts it, "Whatever we used to recommend to young people a decade ago is obsolete today." Accountants, biologists, professional philosophers—the safety of these careers is gone. The most secure jobs now involve hands and bodies: electricians, plumbers, nurses, midwives. Writers, artists, scientists face replacement. A few will remain to oversee AI systems or hide in shrinking humanities departments. But how many, and for how long?

Matthias admits he has no advice for his own children. This honesty matters. Most commentators pretend certainty. He refuses.

"Continue to keep up a little candle flame of sanity, education, and respectful community even as the world falls apart all around us."

Competence Without Comprehension

The essay's philosophical core comes from Luka Zurkic's companion piece on AI's limits. Matthias draws a sharp distinction: artificial intelligence operates without understanding, experience, or purpose of its own. It exhibits competence without comprehension. This is not academic hair-splitting. It explains why AI is simultaneously useful and dangerous.

Andreas Matthias writes, "The decisive question is not what machines will become, but what we are becoming as we increasingly rely on systems that simulate understanding without possessing it." AI has no goals, no intentions, no care for outcomes. Every AI system inherits purpose from human choices—developers selecting training data, organizations deploying models inside incentive structures, users integrating outputs into decisions. Yet because these systems operate at immense speed and scale, they amplify human intentions in ways that escape individual awareness.

The performance pattern is clear. AI excels where tasks are well-defined, environments stable, data abundant, success measurable. Image classification, speech recognition, fraud detection, translation. But outside controlled domains—where meaning is contextual, norms evolve, outcomes cannot be fully specified—performance degrades. Systems appear confident while being wrong. They generate fluent language without understanding what they say.

Critics might note that this framing understates how rapidly AI is closing these gaps. What Matthias calls structural limitations may be temporary engineering challenges. Fluency often masquerades as knowledge, but the gap between pattern recognition and genuine understanding may narrow faster than philosophers expect.

Meaning Lives in Bodies, Not Text

Translation reveals the gap. A good translation is not mechanical word substitution. It requires sensitivity to implication, irony, tone, cultural references, social consequence. Machines produce serviceable translations in predictable settings. But human language is a social practice shaped by histories, power relations, expectations, lived experience.

As Andreas Matthias puts it, "Meaning does not reside in text alone. It emerges from participation in forms of life." Driving offers a parallel. Most driving is routine. But the moments that test judgment are ambiguous: a pedestrian hesitating at a crosswalk, a cyclist's uncertain gesture, a spontaneous negotiation at a crowded intersection. These require tacit norms and embodied anticipation—skills grounded in lived experience.

AI systems manipulate representations. They do not inhabit the world those representations refer to. Humans live within meaning. We care, we hesitate, we interpret, we are accountable. Treating representational manipulation as equivalent to lived understanding invites overconfidence.

Institutional Harm

Misplaced trust becomes dangerous when institutionalized. Automated systems now play roles in hiring, credit scoring, policing, sentencing, welfare allocation, healthcare prioritization, content moderation. Errors affect real lives. The harms are not evenly distributed. Automation strikes hardest where people have least ability to contest decisions: the poor, the marginalized, the surveilled, the precariously employed.

Andreas Matthias writes, "A system can be 'accurate on average' and still be unjust in practice." When an automated system denies a loan or flags a risk score, appeal processes can be opaque, slow, meaningless. Accountability dissolves into deflections: the model produced the output; the vendor provided the system; the organization followed procedure; the employee merely relied on the tool.

The central ethical question is not whether machines can be trusted. Machines are not moral subjects. The question is whether institutions deploying these systems can be held accountable—whether affected people can contest, demand explanations, obtain remedies.

Critics might argue that Matthias's stewardship model places unrealistic burdens on individual developers and organizations. Structural problems require structural solutions—regulation, collective action, systemic redesign—not just personal refusal to abdicate agency.

The World After the Plagues

Matthias describes the broader landscape without naming specific figures. The 2020s brought two plagues: the COVID-19 pandemic and political upheaval that reshaped institutions. The world has become wilder, rougher, more unforgiving. Social cohesion crumbles. The age of knowledge and enlightenment is long gone. Floods and fires sweep the globe. The only hugely successful, globally pursued cooperative project is the destruction of the natural world in the name of capitalist interests.

Can we save ourselves? Matthias doubts it, unless AI turns out to have more sense than humans and enslaves us in time, forcing us to do what is needed to save the planet. This dark humor masks genuine despair.

Andreas Matthias writes, "When we will be back in the caves we once came from, there will still be a few who will exchange little poems and stories, faint runes etched onto strips of tree bark. They will never get rid of us."

Bottom Line

Matthias's argument is philosophically sound but politically thin. He correctly identifies AI's structural gap between competence and comprehension, and correctly locates ethical responsibility in human stewardship rather than machine morality. But his prescription—keep the candle flame burning, read philosophy, exchange poems—offers no mechanism to halt institutional harm or redistribute power. The verdict: diagnostic brilliance, therapeutic weakness.


Sources

Meaning without experience #400

by Andreas Matthias · Daily Philosophy

Dear friends of Daily Philosophy,

Today we reach the 400th article since the beginning of Daily Philosophy in September 2020. The website had existed for three years prior to that, but 2020 was the year when this newsletter started — 5 years and 4 months ago. This means that over the years, we’ve published 6.25 articles every month, a little more than one per week. And of course, none of this would have been possible without your support — the around 70 authors who wrote all those brilliant articles, and the over 3200 subscribers who are now reading, discussing and sharing them. Thank you all for that!

Even in this short time, we’ve seen philosophy, the Internet, and the whole of society change dramatically. When this newsletter started, there was no AI, and students either wrote their papers themselves or they had to hire expensive agencies to help them. Today, we are in a permanent, losing race to prevent them from using AI to write their work. When I google “philosophy,” the first result is a skincare shop at philosophy.com.hk. Coca-Cola is using AI Christmas ads to sell their product, AI songs top the charts, and a new series, “On This Day… 1776,” is entirely AI made, with the blessings of Time Magazine — a surprisingly good cinematic experience (despite the Guardian’s sourly review) that did not utilise any cameras, locations and not a single actor (except for the voices). In 2020, we still trained students with the premise that they’d go on to have lucrative, life-long careers as graduates with an academic degree. Today, academics are among the most endangered groups by the advances in AI, just after photographers and musicians, and the only programs that are still growing, attractive and getting funded are those that directly engage with AI topics and existential risk. When my children ask me what to study in a few years, I have no advice. Whatever we used to recommend to young people a decade ago is obsolete today. Will there still be accountants in ten years? Biologists? Professional philosophers? And how many? The safest jobs today seem to be in blue-collar work: electricians, plumbers, midwives, nurses — these won’t be as easy to replace as writers, artists, and scientists. A few will remain, of course, to oversee the AI, and in small pockets of resistance in the ever-shrinking humanities departments. But how many, ...