Ross Haleliuk identifies a silent killer in the cybersecurity industry that has nothing to do with zero-day exploits or sophisticated malware: a pervasive psychological state of defeatism. While most security analysis focuses on technical gaps, Haleliuk argues that the industry's own language is actively dismantling its ability to succeed, turning professionals into passive observers of their own failures. This is not a critique of tools, but of the mindset that convinces teams they are destined to lose before a single line of code is written.
The Myth of the Unwinnable Game
Haleliuk begins by dissecting the most common mantra in the field: "Attackers only need to be right once, defenders need to be right all the time." He notes that this phrase has become a "kind of gospel," yet he challenges its fundamental logic. "The most important part is that this phrase, while it sounds catchy, is wrong," Haleliuk writes. He argues that this saying is the clearest case of learned helplessness, a psychological phenomenon where individuals stop trying because they believe their actions cannot change the outcome. By internalizing the idea that defense is a losing game, security teams inadvertently discourage the very creativity and risk-taking required to build robust systems.
The author reframes the attacker-defender dynamic, pointing out that adversaries must actually succeed multiple times to execute a full attack chain. "Every patch, every detection rule, every segmentation improvement, and every training session raises the bar for attackers and makes it harder for them to be right enough times that they can just get in unnoticed," Haleliuk explains. This perspective shifts the narrative from inevitable failure to measurable improvement. Critics might argue that the asymmetry of offense and defense remains a hard reality, but Haleliuk's point is that accepting the asymmetry as a reason to give up is a choice, not a law of physics.
The perception that one cannot control the situation essentially elicits a passive response to the harm that is occurring.
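Haleliuk's attack-chain argument can be made concrete with a toy probability model. The stage count and detection rates below are illustrative assumptions, not figures from his article: if each stage of an intrusion has some independent chance of being caught, the attacker's odds of completing the whole chain unnoticed shrink multiplicatively, so every incremental defensive improvement compounds.

```python
# Toy model: probability an attacker completes an entire intrusion
# chain undetected, assuming each stage is detected independently.
# Stage counts and detection rates are hypothetical, chosen only
# to illustrate the compounding effect.

def evasion_probability(detection_rates):
    """Probability the attacker evades detection at every stage."""
    p = 1.0
    for rate in detection_rates:
        p *= (1.0 - rate)
    return p

# A hypothetical five-stage attack chain where each control
# catches only 30% of attempts at its stage.
baseline = [0.30] * 5
# The same chain after modest improvements raise each stage to 50%.
improved = [0.50] * 5

print(f"Baseline evasion chance: {evasion_probability(baseline):.1%}")
print(f"Improved evasion chance: {evasion_probability(improved):.1%}")
```

Under these simplified, independence-assuming numbers, raising per-stage detection from 30% to 50% cuts the attacker's end-to-end evasion odds from roughly 17% to about 3%: each individual control can fail most of the time, yet the chain as a whole still gets dramatically harder to traverse, which is the point Haleliuk makes qualitatively about patches, detection rules, and segmentation.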
The Trap of Fatalism
Moving beyond the mechanics of defense, Haleliuk tackles the fatalistic slogan, "It's not if, it's when." He acknowledges that the phrase is often used to encourage preparedness, but argues that it frequently crosses the line into normalizing defeat. "Over time, this mindset creates a sense of fatalism, and security becomes less about building measurable, improving programs and more about waiting for the inevitable breach," he observes. This framing is dangerous because it directly impacts business strategy. If a breach is guaranteed regardless of effort, the logical executive decision is to spend the bare minimum required for compliance rather than investing in genuine security posture.
Haleliuk warns that the industry is effectively "pushing the self-destructing narrative that cuts security budgets and makes us look optional." By telling business leaders that failure is preordained, security professionals are undermining their own value proposition. The argument here is that optimism is not just a morale booster; it is a strategic necessity for securing funding and organizational buy-in. "I think the right takeaway should be to be prepared, not that everyone is doomed," Haleliuk asserts, urging a shift from resignation to active resilience.
Redefining the Human Element
Perhaps the most contentious part of Haleliuk's analysis is his rejection of the phrase "People are the weakest link." He contends that this label is often a convenient excuse for poor system design. "Acknowledging that humans are fallible is important, but it should drive us to design systems that anticipate mistakes, not blame them after the fact," he writes. The author argues that security solutions have historically been built to protect systems while ignoring the humans who operate them, leading to friction and workarounds.
Instead of blaming users for clicking links or ignoring warnings, Haleliuk suggests that the industry must "embrace human nature and design security for it, not against it." This means recognizing that people will always seek the path of least resistance and building controls that make the secure choice the easy choice. The distinction is subtle but critical: the problem isn't human error; the problem is designing systems that rely on humans being perfect. "In practice, more often than not, [the phrase sounds] like 'People are the weakest link, so no matter what we do around security, some dumb employee is going to click a link and we're done - does it even matter what we do?'" Haleliuk notes. This defeatist interpretation absolves security architects of the responsibility to create better user experiences.
The Cost of Learned Helplessness
Haleliuk concludes by connecting these linguistic habits to a broader stagnation in the field. He suggests that the industry is clinging to narratives that hold it back, creating a cycle where professionals feel "doomed to fail" and businesses wonder why they should invest at all. "The more we repeat these defeatist phrases, the more we reinforce the belief that security is unwinnable and that breaches are pre-determined," he writes. This is not merely a philosophical debate; it has tangible consequences for the security of organizations and the morale of the workforce.
The author calls for a move toward "cyber optimism," arguing that the future of the industry depends on abandoning the mental shortcuts that make everyone "collectively sadder, poorer, and less motivated to make a difference." While some might argue that a certain level of caution is necessary to avoid complacency, Haleliuk's evidence suggests that the current level of pessimism is counterproductive. The path forward requires a fundamental shift in how security teams talk about their work, moving from a culture of inevitability to one of agency.
Bottom Line
Haleliuk's strongest argument is that the language of security is not just descriptive but performative, actively shaping the outcomes it claims to predict. The piece's greatest vulnerability is its reliance on a psychological framework that may be difficult to operationalize in high-stress, resource-constrained environments where failures are indeed frequent. However, the call to replace fatalism with agency is a necessary corrective for an industry that risks becoming a self-fulfilling prophecy of failure.