Meaning Without Minds
Kenny Easwaran's walkthrough of Brian Skyrms's "Signals" (2008) raises a deceptively simple question: can meaningful communication arise without any intelligence at all? Skyrms, a philosopher of science at UC Irvine and one of only two living philosophers elected to the National Academy of Sciences, argues that it can. The implications extend far beyond philosophy departments. If signaling systems can emerge from nothing more than differential reinforcement, then the gap between bacterial quorum sensing and human language may be one of degree, not kind.
The Evolutionary Game Theory Framework
Skyrms's paper departs from classical game theory in a crucial way. Standard game theory assumes perfectly rational agents reasoning about each other's reasoning, a useful fiction for modeling chess grandmasters or corporate strategy but plainly absurd when applied to bacteria, vervet monkeys, or even most human social behavior. Evolutionary game theory flips this assumption entirely. As Easwaran explains, instead of perfectly rational agents, Skyrms posits "incredibly simple agents interacting in ways that don't even need any understanding and don't require any thoughts about what each other is doing or even about the environment."
This is a philosophically radical move. The entire edifice of meaning, reference, and communication is traditionally built on intentionality, on minds that mean something by what they say. Skyrms is asking whether you can get the architecture of meaning without any of the mental furniture. The answer, at least in constrained mathematical models, is yes.
From Lewis to Learning
The paper builds on David Lewis's 1969 framework of signaling games, where a sender observes a state of the world, transmits a signal, and a receiver takes an action based on that signal. Skyrms's key contribution is showing that reinforcement learning, the simplest possible form of behavioral adaptation, reliably produces optimal signaling systems. Easwaran summarizes the core result:
"It has recently been proved in this case as well that learning converges to a signaling system with probability one."
This is a striking mathematical result. Not "sometimes converges," not "converges under favorable conditions," but converges with probability one. Agents that do nothing more than make successful actions more likely to recur, so that unrewarded options fade in relative frequency, will, given enough time, arrive at a perfect signaling system. No reasoning about the other agent's strategy is required. No concept of meaning is needed. The system bootstraps semantics from pure mechanics.
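The dynamics behind this result are simple enough to simulate directly. The sketch below is a minimal urn-style reinforcement model of a two-state Lewis signaling game; the function name, parameters, and urn bookkeeping are my own illustration, not Skyrms's notation or simulation code.

```python
import random

def lewis_game(states=2, signals=2, acts=2, rounds=20000, seed=0):
    """Urn-style reinforcement in a Lewis signaling game.

    The sender keeps one urn of signal weights per observed state; the
    receiver keeps one urn of act weights per received signal.  A round
    in which the act matches the state adds a "ball" to each urn used.
    """
    rng = random.Random(seed)
    sender = [[1.0] * signals for _ in range(states)]    # state  -> signal urns
    receiver = [[1.0] * acts for _ in range(signals)]    # signal -> act urns
    wins = 0
    for _ in range(rounds):
        state = rng.randrange(states)
        signal = rng.choices(range(signals), weights=sender[state])[0]
        act = rng.choices(range(acts), weights=receiver[signal])[0]
        if act == state:            # common interest: both sides reinforced
            sender[state][signal] += 1.0
            receiver[signal][act] += 1.0
            wins += 1
    return wins / rounds

success = lewis_game()
```

Runs of this kind reliably end with each state mapped to its own signal and each signal to the matching act, the behavior the probability-one convergence result guarantees in the limit.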
Nature's Signaling Systems
The biological examples give the mathematics teeth. Vervet monkeys produce distinct alarm calls for eagles, leopards, and snakes, each triggering a different evasive behavior. Bacteria engage in quorum sensing, waiting until sufficient numbers are present before launching attacks on a host. These are not metaphors for communication; they are communication, complete with senders, receivers, signals, and payoff-relevant actions.
Skyrms traces the philosophical lineage back to Democritus and forward through Adam Smith, who speculated that two people raised without language would "naturally begin to form that language by which they would endeavor to make their mutual wants intelligible to each other." These earlier thinkers intuited what Skyrms formalizes: signaling does not require a designer, divine or otherwise.
The Information Bottleneck Problem
The most technically interesting section of Easwaran's summary concerns mismatches between the number of states, signals, and actions. What happens when there are more signals than states? Do synonyms persist, or do redundant signals fall out of use? What about the reverse case, where an information bottleneck forces compression?
Skyrms's computer simulations yield a consistent and somewhat surprising answer: the sender always becomes deterministic, partitioning states cleanly, while the receiver may randomize. As Easwaran notes:
"Simulations always deliver efficient equilibrium and it turns out they're always of the first kind where the sender acts deterministically and the receiver randomizes, never the second kind where the receiver acts deterministically and the sender randomizes."
This asymmetry between sender and receiver strategies is not obvious from the mathematics alone and deserves more attention than it typically receives. It suggests that in natural signaling systems, the pressure for clarity falls more heavily on the encoder than the decoder, an observation that resonates with work in linguistics on audience design, where speakers shoulder much of the burden of heading off ambiguity for their listeners.
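The bottleneck case can be sketched with the same urn dynamics. The toy setup below (three states, two signals, three acts; parameters and the `sender_peak` diagnostic are my own choices, not Skyrms's reported simulation design) lets one watch the sender's urns sharpen while the pooled signal stays ambiguous:

```python
import random

def bottleneck(states=3, signals=2, acts=3, rounds=60000, seed=1):
    """Reinforcement with an information bottleneck: more states than signals.

    At most 2/3 of rounds can succeed, since two of the three states must
    share a signal.  Urn dynamics are as in the basic Lewis game.
    """
    rng = random.Random(seed)
    sender = [[1.0] * signals for _ in range(states)]
    receiver = [[1.0] * acts for _ in range(signals)]
    wins = 0
    for _ in range(rounds):
        state = rng.randrange(states)
        signal = rng.choices(range(signals), weights=sender[state])[0]
        act = rng.choices(range(acts), weights=receiver[signal])[0]
        if act == state:
            sender[state][signal] += 1.0
            receiver[signal][act] += 1.0
            wins += 1
    # Concentration diagnostic: a share near 1.0 means that urn has become
    # effectively deterministic.  We report the *least* concentrated sender urn.
    sender_peak = min(max(row) / sum(row) for row in sender)
    return wins / rounds, sender_peak

rate, sender_peak = bottleneck()
```

In runs like this the sender's urns move toward a clean two-cell partition of the three states, while the receiver's urn for the pooled signal keeps spreading weight over the two pooled acts, the asymmetry Easwaran's quote reports.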
Networks and Eavesdroppers
Skyrms extends the framework beyond simple two-player games into signaling networks: chains of intermediaries, multiple senders feeding a single receiver, eavesdroppers from other species exploiting signals they did not help create. The example of deer grazing near meerkats, benefiting from their predator alarms without contributing to the signaling system, is particularly elegant. It captures a dynamic familiar from human institutions as well: the free rider who benefits from shared information infrastructure without bearing its costs.
The network extensions also raise the question of translation. In a signaling chain, an intermediary need not forward the same signal she received. She might function as a translator between two signaling conventions. That this can emerge from reinforcement learning alone, without any explicit concept of translation, is remarkable.
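The chain case can be sketched the same way. In the toy model below (again my own construction, not the paper's), a sender talks to an intermediary in one two-signal vocabulary and the intermediary talks to the receiver in a different one; the intermediary never sees the state, yet reinforcement alone must align both conventions:

```python
import random

def chain(states=2, rounds=80000, seed=2):
    """A three-player signaling chain: sender -> intermediary -> receiver.

    The intermediary hears a signal from vocabulary A and emits one from
    vocabulary B.  All three urns are reinforced only when the receiver's
    final act matches the state.
    """
    rng = random.Random(seed)
    sender = [[1.0, 1.0] for _ in range(states)]   # state    -> A-signal urns
    middle = [[1.0, 1.0] for _ in range(2)]        # A-signal -> B-signal urns
    receiver = [[1.0, 1.0] for _ in range(2)]      # B-signal -> act urns
    wins = 0
    for _ in range(rounds):
        state = rng.randrange(states)
        a = rng.choices([0, 1], weights=sender[state])[0]
        b = rng.choices([0, 1], weights=middle[a])[0]
        act = rng.choices([0, 1], weights=receiver[b])[0]
        if act == state:
            sender[state][a] += 1.0
            middle[a][b] += 1.0
            receiver[b][act] += 1.0
            wins += 1
    return wins / rounds

chain_rate = chain()
```

When the chain coordinates, the intermediary may settle on the identity map or on a "translation" that swaps the two signals; either convention supports perfect communication, which is the point about translators emerging without any concept of translation.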
Counterpoints Worth Considering
For all its elegance, the Skyrms framework has notable limitations that Easwaran's summary only gestures at. The models assume common interest: sender and receiver both benefit from successful communication. But much of natural and human signaling involves conflict. Deceptive signaling, from firefly mimicry to political propaganda, requires a richer framework than pure coordination games. Skyrms acknowledges this in a footnote about eavesdroppers whose interests conflict with the signalers, but the core results depend heavily on aligned incentives.
There is also a significant gap between showing that simple systems converge to optimal signaling and explaining human language. The jump from "two signals, two states, two actions" to recursive syntax, metaphor, negation, and counterfactual reasoning is enormous. Skyrms is careful to frame his results as explaining "the origins of language" rather than language itself, but even that claim requires substantial bridging arguments that the paper leaves for future work. As Easwaran acknowledges in his closing remarks, "it remains to be seen how can this build to anything as complex as human language."
Ruth Millikan and others in the teleosemantics tradition have pursued similar questions about the biological origins of meaning, but with greater attention to the compositional structure that distinguishes language from simpler signaling. The gap between Skyrmsian signals and Chomskyan syntax remains largely uncharted.
Bottom Line
Skyrms's "Signals" makes a compelling case that meaningful communication can arise from mindless processes, and Easwaran's accessible walkthrough makes the argument available to a general audience. The mathematical results are genuinely surprising: reinforcement learning converges to optimal signaling with probability one, even in networks with information bottlenecks and multiple interacting agents. The philosophical payoff is a deflationary account of meaning that grounds semantics in dynamics rather than intentionality. Whether this foundation can support the full weight of human language remains the open question, but as a demonstration that meaning does not require minds, the argument is difficult to refute.