
Not getting incentives right can kill a security initiative or a security startup

Ross Haleliuk cuts through the noise of cybersecurity hype to deliver a blunt truth: security rarely fails because the code is bad; it fails because the people writing the code are paid to prioritize other things. While the industry obsesses over artificial intelligence and zero-day exploits, Haleliuk argues that the vast majority of breaches stem from "basic and boring problems" driven by misaligned corporate rewards. This is not a technical manual; it is a structural diagnosis of why security initiatives stall, offering a rare lens on the human economics of risk that most CISOs ignore at their peril.

The Economics of Negligence

Haleliuk begins by dismantling the myth that security is a standalone discipline. He observes that "most breaches aren't caused by some novel technology like AI or blockchain, nor are they the result of mysterious, never seen before zero-days." Instead, the root causes are mundane: forgotten passwords, unrevoked contractor access, and a lack of centralized asset tracking. He attributes these failures to what his colleague Yaron Levi calls a "lack of operational discipline," but Haleliuk pushes deeper, asking the question that usually gets lost: "Why should they?" Why should an engineer prioritize secure coding if their promotion depends on shipping velocity?


The author's framing is sharp because it shifts the blame from individual incompetence to systemic design. He notes that "software engineers and product teams are incentivized to ship fast," making security reviews an obstacle to be avoided rather than a standard to be met. Similarly, IT teams are trapped in ticketing systems where the metric for success is closing tickets quickly, not ensuring those access grants are secure. This dynamic creates a classic principal–agent problem, where the agents (employees) optimize for their own performance metrics (speed, ticket closure) at the expense of the principal's (the company's) long-term security. Haleliuk writes, "Until secure behavior becomes a part of everyone's performance reviews, alongside execution, teamwork, and communication skills, this is not going to change."

"What gets measured, gets done. To put it more bluntly, people will do what they're incentivized to do."

Critics might argue that culture can override incentives, pointing to companies that successfully embed security without rigid KPI changes. However, Haleliuk's evidence suggests that without structural alignment, culture is just a slogan. The only executive truly incentivized to care about risk is the CISO, leaving the rest of the organization to optimize for speed.
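The principal–agent dynamic described above can be sketched as a toy model (all payoff numbers here are illustrative assumptions, not data from the article): an agent splits one unit of effort between shipping speed and security, but the performance metric counts only speed, so a rational agent drives security effort to zero even though that hurts the company.

```python
# Toy model of the principal-agent problem: the agent's metric rewards
# only speed, while the company's actual value depends on both speed
# and breach risk. All coefficients are made-up illustrative numbers.

def agent_score(speed_effort: float, security_effort: float) -> float:
    """Performance-review metric: only shipping velocity is rewarded."""
    return 10 * speed_effort  # security_effort is invisible to the metric

def company_value(speed_effort: float, security_effort: float) -> float:
    """What the principal actually wants: speed AND low breach risk."""
    breach_risk = (1.0 - security_effort) ** 2  # less security -> more risk
    return 10 * speed_effort - 20 * breach_risk

def best_split(score_fn, steps: int = 100):
    """Pick the effort split (speed, security) that maximizes score_fn."""
    candidates = [(i / steps, 1 - i / steps) for i in range(steps + 1)]
    return max(candidates, key=lambda s: score_fn(*s))

speed, security = best_split(agent_score)
print(f"Agent optimizes: speed={speed:.2f}, security={security:.2f}")
print(f"Company value at that split: {company_value(speed, security):.2f}")
print(f"Company value if security were also measured: "
      f"{company_value(*best_split(company_value)):.2f}")
```

Under these assumed payoffs, the agent's optimum is all-in on speed, which leaves the company strictly worse off than a split that also weighs security; changing what the score function measures, not exhorting the agent, is what moves the outcome.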

The Industry-Wide Trap

The analysis expands beyond individual firms to the broader startup ecosystem, where the pressure to survive often necessitates ignoring security. Haleliuk examines the recent excitement around initiatives like the Secure by Design Pledge, noting that while the intent is noble, "thinking that an initiative like this is going to lead to any real consequences means not understanding how incentives work." He explains that for a startup, the primary risk is not a breach, but failure to find product-market fit. "When the company has a product and zero customers, the number one priority isn't to make the product secure; it is to get that first customer," he argues.

This perspective reframes the "shift left" movement's struggles. Haleliuk suggests that the movement failed not because developers lack skill, but because "no security champions program can make developers prioritize security over velocity when they get promoted for the latter, not the former." The pattern mirrors Goodhart's law, which states that when a measure becomes a target, it ceases to be a good measure. When speed becomes the sole target, security inevitably degrades, regardless of how many pledges companies sign.

Haleliuk points out that this misalignment kills security startups as well. Founders often build products that assume engineering teams care about security as a primary value, a fatal assumption. "The larger the company, the less likely it is that IT, infrastructure, or engineering will ever pay for, or be excited to implement a product, the primary value proposition of which is security," he writes. The only successful path, he suggests, is offering security as a byproduct of a value proposition that teams already care about.

The Future of Risk

Looking forward, Haleliuk admits a sobering reality: incentives rarely evolve on their own. He describes a landscape where "visionary ideas were killed by the realities of how legal liability, insurance, and other concerns make companies behave." Whether it is sharing threat intelligence or collaborating on detection data, legal fears and liability concerns create a wall that technology alone cannot breach. He confesses that even as an optimist, he struggles to see how the industry can change without incentives "radically shifting."

The author's conclusion is a call for realism over optimism. He notes that "most security initiatives that fail fail because of misaligned incentives," not because the technology is insufficient. This is a crucial distinction for investors and leaders alike. If the goal is to reduce risk, buying a new tool is a distraction; the real work lies in redefining what gets people promoted and what gets them fired.

"Not getting incentives right can kill a security initiative or a security startup."

Bottom Line

Ross Haleliuk's most powerful contribution is the insistence that security is an economic problem, not a technical one. His argument holds up because it exposes the disconnect between the CISO's mandate for risk reduction and the rest of the organization's mandate for speed and growth. The biggest vulnerability in the piece is the lack of concrete policy solutions for how to actually rewire these incentives without crippling business velocity, leaving the reader with a clear diagnosis but a foggy prescription. Watch for how the next generation of security tools attempts to embed themselves as productivity enhancers rather than security controls, as that may be the only viable path forward.

Deep Dives

Explore these related deep dives:

  • Principal–agent problem

    The article's core argument about misaligned incentives between security teams and other departments is a direct application of this foundational economics concept, which explains how conflicts arise when one party (the agent) makes decisions on behalf of another (the principal).

  • Goodhart's law

    The article discusses how KPIs and metrics drive behavior ("what gets measured gets done"), which directly relates to Goodhart's observation that when a measure becomes a target, it ceases to be a good measure; this explains why IT closes tickets quickly rather than securely.

  • Moral hazard

    The article describes situations where parties are insulated from security consequences (engineers ship fast, IT closes tickets quickly) because they don't bear the risk of breaches; this is a classic moral hazard scenario from economics and insurance theory.

Sources

Not getting incentives right can kill a security initiative or a security startup

by Ross Haleliuk · Venture in Security

I have been thinking about this topic for a while, and I am glad I have finally found the time to gather my thoughts into an article. I feel like it’s pretty rare to see people discuss incentives in cybersecurity (except for my friend Chris Hughes, who emphasizes this topic frequently in his blog and on LinkedIn). This is quite surprising given that everything in our industry centers around incentives. In this piece, I share some thoughts about this problem, discuss what I think are its most important aspects, and why more people should care.


Incentives define how different departments prioritize security.

If you read the Verizon DBIR, a CrowdStrike report, or any other credible, regularly produced report on the root causes of breaches, or even if you simply follow the news, you'll notice a consistent pattern:

Most breaches aren’t caused by some novel technology like AI or blockchain, nor are they the result of mysterious, never seen before zero-days.

The vast majority of security problems are not really security problems; they are problems that originate in other parts of the organization and introduce security risks.

To put it differently, the vast majority of all the breaches happen because of some basic and boring problems. Someone forgot to change the password. Someone wasn’t able to track all the assets in a centralized system. Someone decided to grant a contractor more permissions than they needed, but forgot to revoke access when the contractor left. This list can go on and on, but the fact that matters here is that most of the time, what gets companies breached is something the security team can’t fix on their own. It is what my friend Yaron Levi calls “lack of operational discipline”.

None of this is rocket science, and anyone who has worked in security for over a year gets this. ...