Ross Haleliuk cuts through the noise of cybersecurity hype to deliver a blunt truth: technology rarely fails because the code is bad; it fails because the people writing it are paid to ignore security. While the industry obsesses over artificial intelligence and zero-day exploits, Haleliuk argues that the vast majority of breaches stem from "basic and boring problems" driven by misaligned corporate rewards. This is not a technical manual; it is a structural diagnosis of why security initiatives stall, offering a rare lens on the human economics of risk that most CISOs ignore at their peril.
The Economics of Negligence
Haleliuk begins by dismantling the myth that security is a standalone discipline. He observes that "most breaches aren't caused by some novel technology like AI or blockchain, nor are they the result of mysterious, never seen before zero-days." Instead, the root causes are mundane: forgotten passwords, unrevoked contractor access, and a lack of centralized asset tracking. He attributes these failures to what his colleague Yaron Levi calls a "lack of operational discipline," but Haleliuk pushes deeper, asking the question that usually gets lost: "Why should they?" Why should an engineer prioritize secure coding if their promotion depends on shipping velocity?
The author's framing is sharp because it shifts the blame from individual incompetence to systemic design. He notes that "software engineers and product teams are incentivized to ship fast," making security reviews an obstacle to be avoided rather than a standard to be met. Similarly, IT teams are trapped in ticketing systems where the metric for success is closing tickets quickly, not ensuring those access grants are secure. This dynamic creates a classic principal–agent problem, where the agents (employees) optimize for their own performance metrics (speed, ticket closure) at the expense of the principal's (the company's) long-term security. Haleliuk writes, "Until secure behavior becomes a part of everyone's performance reviews, alongside execution, teamwork, and communication skills, this is not going to change."
"What gets measured, gets done. To put it more bluntly, people will do what they're incentivized to do."
Critics might argue that culture can override incentives, pointing to companies that successfully embed security without rigid KPI changes. However, Haleliuk's evidence suggests that without structural alignment, culture is just a slogan. The only executive truly incentivized to care about risk is the CISO, leaving the rest of the organization to optimize for speed.
The Industry-Wide Trap
The analysis expands beyond individual firms to the broader startup ecosystem, where the pressure to survive often necessitates ignoring security. Haleliuk examines the recent excitement around initiatives like the Secure by Design Pledge, noting that while the intent is noble, "thinking that an initiative like this is going to lead to any real consequences means not understanding how incentives work." He explains that for a startup, the primary risk is not a breach, but failure to find product-market fit. "When the company has a product and zero customers, the number one priority isn't to make the product secure; it is to get that first customer," he argues.
This perspective reframes the "shift left" movement's struggles. Haleliuk suggests that the movement failed not because developers lack skill, but because "no security champions program can make developers prioritize security over velocity when they get promoted for the latter, not the former." The dynamic is a textbook case of Goodhart's law, which states that when a measure becomes a target, it ceases to be a good measure. When speed becomes the sole target, security inevitably degrades, regardless of how many pledges companies sign.
Haleliuk points out that this misalignment kills security startups as well. Founders often build products that assume engineering teams care about security as a primary value, a fatal assumption. "The larger the company, the less likely it is that IT, infrastructure, or engineering will ever pay for, or be excited to implement a product, the primary value proposition of which is security," he writes. The only successful path, he suggests, is offering security as a byproduct of a value proposition that teams already care about.
The Future of Risk
Looking forward, Haleliuk admits a sobering reality: incentives rarely evolve on their own. He describes a landscape where "visionary ideas were killed by the realities of how legal liability, insurance, and other concerns make companies behave." Whether it is sharing threat intelligence or collaborating on detection data, legal fears and liability concerns create a wall that technology alone cannot breach. He confesses that even as an optimist, he struggles to see how the industry can change without incentives "radically shifting."
The author's conclusion is a call for realism over optimism. He notes that "most security initiatives that fail fail because of misaligned incentives," not because the technology is insufficient. This is a crucial distinction for investors and leaders alike. If the goal is to reduce risk, buying a new tool is a distraction; the real work lies in redefining what gets people promoted and what gets them fired.
"Not getting incentives right can kill a security initiative or a security startup."
Bottom Line
Ross Haleliuk's most powerful contribution is the insistence that security is an economic problem, not a technical one. His argument holds up because it exposes the disconnect between the CISO's mandate for risk reduction and the rest of the organization's mandate for speed and growth. The biggest vulnerability in the piece is the lack of concrete policy solutions for how to actually rewire these incentives without crippling business velocity, leaving the reader with a clear diagnosis but a foggy prescription. Watch for how the next generation of security tools attempts to embed themselves as productivity enhancers rather than security controls, as that may be the only viable path forward.