To feel secure can be more important than to be secure

Ross Haleliuk challenges the cybersecurity industry's obsession with technical perfection by arguing that the feeling of security is often more critical than the reality of it. In a field drowning in complex data and zero-day threats, this piece forces a necessary pivot toward human psychology, revealing why our most sophisticated defenses often fail to align with how people actually perceive risk.

The Illusion of Alignment

Haleliuk begins by dismantling the binary view of safety. He writes, "Security is at least two things: how we feel about our security and whether we are actually secure. The two do not always align." This distinction is the piece's foundational insight. We often assume that if a system is technically robust, the user will feel safe. Haleliuk argues the opposite is frequently true: users can feel perfectly secure while in real danger, or be secure while feeling terrified. The author illustrates this with the example of air travel, noting that people can be secure but not feel secure, "like someone flying on a modern airplane, despite air travel being statistically one of the safest modes of transportation in history." This framing is effective because it shifts the burden of proof from the engineer to the psychologist. It suggests that a firewall is useless if the user doesn't trust the interface.

Security is never just security; it is a negotiation between reality and perception.

The Economics of Safety

The commentary then moves to the inevitable cost of protection. Haleliuk posits that "security always comes at a cost. When we improve our safety, we almost always give something up: money, convenience, time, freedom, or capabilities." He uses the analogy of airport security to demonstrate that society often accepts massive inefficiencies—long lines, invasive screenings—simply to buy a "feeling of security." This is a crucial reframing for product leaders. Instead of asking if a feature is secure, the question becomes "are the benefits greater than the cost?" Haleliuk notes that "the most secure system is the one you've disconnected from any network and buried under the ground so far that nobody can find it," a hyperbolic truth that underscores the impossibility of absolute safety. Critics might argue that this utilitarian view risks normalizing dangerous compromises, but Haleliuk's point is that maximum security is a fantasy; feasible security is a trade-off.
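Haleliuk's question ("are the benefits greater than the cost?") maps directly onto the classic annualized-loss-expectancy (ALE) framing from risk management. The sketch below is a minimal illustration of that arithmetic; all dollar figures and probabilities are hypothetical, not taken from the article.

```python
# Hedged sketch of the annualized-loss-expectancy (ALE) calculation.
# A control is "worth it" only if the expected loss it removes exceeds
# what it costs to operate. All numbers here are hypothetical.

def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """Expected yearly loss = cost per incident x expected incidents per year."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Hypothetical breach: $200k per incident, expected once every four years.
ale_before = ale(200_000, 0.25)                    # $50,000/year
# A control halves the likelihood but costs $30k/year to run.
ale_after = ale(200_000, 0.125)                    # $25,000/year
net_benefit = (ale_before - ale_after) - 30_000    # -$5,000/year

print(f"ALE before control: ${ale_before:,.0f}")
print(f"ALE after control:  ${ale_after:,.0f}")
print(f"Net benefit:        ${net_benefit:,.0f}")  # negative: the control costs
                                                   # more than the risk it removes
```

The point of the exercise is Haleliuk's, not the numbers: a mitigation can be genuinely effective and still be a bad trade, which is why "is it secure?" is the wrong question.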

The Psychology of Fear

Perhaps the most compelling section addresses why humans are terrible at judging risk. Haleliuk writes, "Humans are very bad at judging risk... Our brains are wired for survival in small groups, not for navigating complex global networks." He highlights a specific bias where we "care more about spectacular risks than we do about common risks." The author points out that people worry about terrorism or rare kidnappings while ignoring the far higher statistical probability of car accidents or slipping in the shower. In the digital realm, this manifests as an obsession with "hackers in hoodies" and nation-state actors while neglecting mundane failures like weak passwords or unpatched servers. Haleliuk observes that "problems like unpatched servers, the real culprits in countless breaches... don't get as much attention as these personified, human-like actors." This is a sharp critique of the industry's marketing, which often dramatizes low-probability events to sell fear.
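The bias Haleliuk describes (overweighting spectacular risks, underweighting common ones) becomes vivid when you rank threats by expected harm rather than by how frightening they sound. The probabilities and impact scores below are made up purely to illustrate the shape of the argument.

```python
# Hedged illustration of spectacular vs. common risks. The yearly
# probabilities and "harm" impact units are hypothetical, chosen only
# to show how expected harm reorders an intuition-based threat list.

risks = {
    # name: (yearly probability, impact in arbitrary harm units)
    "nation-state zero-day": (0.001, 1000),      # vivid, rare
    "phishing via weak password": (0.30, 50),    # mundane, frequent
    "unpatched internet-facing server": (0.20, 80),
}

# Rank by expected harm per year: probability x impact.
for name in sorted(risks, key=lambda k: risks[k][0] * risks[k][1], reverse=True):
    p, impact = risks[name]
    print(f"{name:35s} expected harm/year: {p * impact:6.1f}")
```

Under these assumed numbers, the "hacker in a hoodie" scenario lands at the bottom of the list while the unpatched server tops it, which is exactly the inversion between perceived and actual risk that the piece criticizes.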

The Power of Models

Finally, Haleliuk introduces the concept of "models" as the frameworks we use to navigate complexity. He explains that "to understand the complexity of the world, humans rely on models - frameworks that help us make sense of risks that are too complex to understand." The danger, he argues, is that these models are not neutral. "The problem with models is that they are not neutral; they are shaped by culture, politics, and incentives." He uses the historical battle over smoking as a parallel, noting how the tobacco industry fought for decades to downplay risks until the model shifted. In cybersecurity, we see similar battles over whether to prioritize compliance, zero trust, or threat intelligence. Haleliuk concludes that "different models don't replace one another, instead, they co-exist," suggesting that the industry must learn to manage multiple, sometimes conflicting, mental maps simultaneously.

Bottom Line

Haleliuk's strongest argument is that security is fundamentally a communication problem, not just a technical one. The piece's biggest vulnerability is its reliance on the assumption that stakeholders are rational enough to accept trade-offs once the psychological biases are exposed. However, the core insight remains vital: you cannot build a secure system if you ignore the human mind that must use it.

Sources

To feel secure can be more important than to be secure

by Ross Haleliuk · Venture in Security

A few weeks ago, after I published “Using behavioral science to build stronger defenses,” one of the readers reached out to tell me that Bruce Schneier had given a talk on a very similar topic many years ago. I was intrigued, so I went digging and eventually found the talk they were referring to. What follows is both a summary of Schneier’s ideas and my own reflections on how they resonate today.

At the heart of Bruce’s talk lies a simple but powerful insight that security is both a feeling and a reality. The two don’t always align; people can feel secure without actually being secure, or be secure while still feeling insecure. In this piece, I am looking at this disconnect more closely to resurface the ideas from Bruce’s talk, to highlight just how fascinating (and sometimes counterintuitive) human psychology can be, and to discuss how this impacts the way we should be building products.

Why I am discussing ideas from the past

Before we dive in, I'd like to first explain my fascination with academia and decades-old insights. Since my blog focuses on startups and the future, once in a while I see people getting confused about why I think that some essay from two decades ago, or a video from a decade and a half ago, is relevant today.

The reason for that is simple: while technology moves quickly, the fundamentals of most industries, and we as humans, change far more slowly. If you take a few steps back, it becomes pretty clear that incentive systems, organizational dynamics, and human psychology are remarkably consistent even though the tech we rely on changes. That's why there are plenty of ideas from long ago that feel as relevant today as when they were first published. Nearly two decades back, Ian Grigg described security as a market for silver bullets, and while tech has come and gone since then, and new Gartner categories have replaced old ones, that framing still, in my opinion, describes how our industry works. Similarly, while a bunch of nuances have changed, for the ...