Learned helplessness is hurting the security industry

Ross Haleliuk identifies a silent killer in the cybersecurity industry that has nothing to do with zero-day exploits or sophisticated malware: a pervasive psychological state of defeatism. While most security analysis focuses on technical gaps, Haleliuk argues that the industry's own language is actively dismantling its ability to succeed, turning professionals into passive observers of their own failures. This is not a critique of tools, but of the mindset that convinces teams they are destined to lose before a single line of code is written.

The Myth of the Unwinnable Game

Haleliuk begins by dissecting the most common mantra in the field: "Attackers only need to be right once, defenders need to be right all the time." He notes that this phrase has become a "kind of gospel," yet he challenges its fundamental logic. "The most important part is that this phrase, while it sounds catchy, is wrong," Haleliuk writes. He argues that this saying is the clearest case of learned helplessness, a psychological phenomenon where individuals stop trying because they believe their actions cannot change the outcome. By internalizing the idea that defense is a losing game, security teams inadvertently discourage the very creativity and risk-taking required to build robust systems.

The author reframes the attacker-defender dynamic, pointing out that adversaries must actually succeed multiple times to execute a full attack chain. "Every patch, every detection rule, every segmentation improvement, and every training session raises the bar for attackers and makes it harder for them to be right enough times that they can just get in unnoticed," Haleliuk explains. This perspective shifts the narrative from inevitable failure to measurable improvement. Critics might argue that the asymmetry of offense and defense remains a hard reality, but Haleliuk's point is that accepting the asymmetry as a reason to give up is a choice, not a law of physics.

The Trap of Fatalism

Moving beyond the mechanics of defense, Haleliuk tackles the fatalistic slogan, "It's not if, it's when." He acknowledges that the phrase is often used to encourage preparedness, but argues that it frequently crosses the line into normalizing defeat. "Over time, this mindset creates a sense of fatalism, and security becomes less about building measurable, improving programs and more about waiting for the inevitable breach," he observes. This framing is dangerous because it directly impacts business strategy. If a breach is guaranteed regardless of effort, the logical executive decision is to spend the bare minimum required for compliance rather than investing in genuine security posture.

Haleliuk warns that the industry is effectively "pushing the self-destructing narrative that cuts security budgets and makes us look optional." By telling business leaders that failure is preordained, security professionals are undermining their own value proposition. The argument here is that optimism is not just a morale booster; it is a strategic necessity for securing funding and organizational buy-in. "I think the right takeaway should be to be prepared, not that everyone is doomed," Haleliuk asserts, urging a shift from resignation to active resilience.

Redefining the Human Element

Perhaps the most contentious part of Haleliuk's analysis is his rejection of the phrase "People are the weakest link." He contends that this label is often a convenient excuse for poor system design. "Acknowledging that humans are fallible is important, but it should drive us to design systems that anticipate mistakes, not blame them after the fact," he writes. The author argues that security solutions have historically been built to protect systems while ignoring the humans who operate them, leading to friction and workarounds.

Instead of blaming users for clicking links or ignoring warnings, Haleliuk suggests that the industry must "embrace human nature and design security for it, not against it." This means recognizing that people will always seek the path of least resistance and building controls that make the secure choice the easy choice. The distinction is subtle but critical: the problem isn't human error; the problem is designing systems that rely on humans being perfect. "In practice, more often than not, [the phrase sounds] like 'People are the weakest link, so no matter what we do around security, some dumb employee is going to click a link and we're done - does it even matter what we do?'" Haleliuk notes. This defeatist interpretation absolves security architects of the responsibility to create better user experiences.

The Cost of Learned Helplessness

Haleliuk concludes by connecting these linguistic habits to a broader stagnation in the field. He suggests that the industry is clinging to narratives that hold it back, creating a cycle where professionals feel "doomed to fail" and businesses wonder why they should invest at all. "The more we repeat these defeatist phrases, the more we reinforce the belief that security is unwinnable and that breaches are pre-determined," he writes. This is not merely a philosophical debate; it has tangible consequences for the security of organizations and the morale of the workforce.

The author calls for a move toward "cyber optimism," arguing that the future of the industry depends on abandoning the mental shortcuts that make everyone "collectively sadder, poorer, and less motivated to make a difference." While some might argue that a certain level of caution is necessary to avoid complacency, Haleliuk's evidence suggests that the current level of pessimism is counterproductive. The path forward requires a fundamental shift in how security teams talk about their work, moving from a culture of inevitability to one of agency.

Bottom Line

Haleliuk's strongest argument is that the language of security is not just descriptive but performative, actively shaping the outcomes it claims to predict. The piece's greatest vulnerability is its reliance on a psychological framework that may be difficult to operationalize in high-stress, resource-constrained environments where failures are indeed frequent. However, the call to replace fatalism with agency is a necessary corrective for an industry that risks becoming a self-fulfilling prophecy of failure.

Sources

Learned helplessness is hurting the security industry

by Ross Haleliuk · Venture in Security

Every several months, I come across an article, a LinkedIn post, or a talk that gets me annoyed, seemingly for no reason. I found this interesting, so several years ago, I started collecting these statements to which my brain responds with rejection into a single Google Doc. Whenever I’d hear or see someone repeat one of these statements, I’d drop them into this doc along with my thoughts at the moment, and go about my day.

Last weekend, when I opened this document and went over a few pages of links and my chaotic notes, it finally hit me that all the things I disagree with have a single root cause, and they are all the result of the same phenomenon - learned helplessness. In this piece, I’ll discuss this phenomenon in depth, explaining what it is, why I find it frustrating, and why I think that to mature as an industry, we have to move past it.

The concept of learned helplessness.

Here’s how Psychology Today, the world’s largest mental health and behavioral science destination online, explains the concept of learned helplessness: “Learned helplessness occurs when an individual continuously faces a negative, uncontrollable situation and stops trying to change their circumstances, even when they have the ability to do so. For example, a smoker may repeatedly try and fail to quit. He may grow frustrated and come to believe that nothing he does will help, and therefore, he stops trying altogether. The perception that one cannot control the situation essentially elicits a passive response to the harm that is occurring.”

This is surely many smart words in a single paragraph, so the ...