
AI doesn’t make it much easier to build security startups

In an era where every pitch deck claims artificial intelligence will shatter the old rules of venture capital, Ross Haleliuk offers a sobering counter-narrative that cuts through the hype: for cybersecurity startups, AI is a tool for amplification, not a compensator for missing fundamentals. While the industry obsesses over "vibe coding" and rapid prototyping, Haleliuk argues that the fundamental friction of selling to enterprises—trust, complexity, and long sales cycles—remains stubbornly intact. This is a necessary reality check for founders who believe they can bypass the hard work of building deep expertise with a few well-timed prompts.

The Illusion of Speed

Haleliuk begins by separating the value AI brings to the customer from the value it brings to the builder. He acknowledges that for security teams, AI is already a game-changer for mundane tasks. "Well over 90% (and some people would even say 95-97%) of security teams' day-to-day is not some advanced incident response or dealing with nation-states," he writes. Instead, the bulk of the work involves "updating reports and dashboards for leadership" and "reconciling data across tools and systems." Here, the technology delivers genuine relief, automating the boring stuff so humans can focus on high-value threats.


However, the argument shifts sharply when Haleliuk turns to the startup itself. He challenges the prevailing wisdom that AI allows founders to validate demand faster or skip hiring senior talent. "AI prototyping tools can help gather feedback, learn about the problem, and even plan what capabilities should be prioritized first based on user feedback," he concedes. But he immediately pivots to the hard truth: "The one thing prototypes do a poor job at is validating that someone is going to pay real money."

This distinction is critical. In the technology adoption life cycle, the chasm between early adopters and the early majority is bridged not by a cool demo, but by proven reliability. Haleliuk suggests that while a prototype might look good, it cannot simulate the rigorous due diligence of a CISO. A counterargument worth considering is that in other sectors, a working prototype is the product, and speed to market creates a moat. Yet, in security, the cost of failure is catastrophic, making the "good enough" standard of AI-generated code a liability rather than an asset.

AI is a great amplifier, but it's not a compensator.

The Sales Cycle Reality

Perhaps the most compelling part of Haleliuk's analysis is his focus on the economics of the sale. He cites Andrew Peterson of Aviso Ventures to drive home a point that many technical founders ignore: "AI is changing how fast you can build products and features but it isn't changing how slow sales cycles are in security." The math is brutal. Even if a feature that once took nine months to build now takes six, the overall time to revenue remains dominated by the sales process.

Haleliuk notes that if a feature takes a year to sell, cutting development time by a third only shifts the timeline from 1 year and 9 months to 1 year and 6 months. "In reality, it can be even longer because security teams afraid of risks introduced by AI-generated code are likely to extend their evaluation periods," he observes. This is a nuanced take that many overlook; the very technology meant to accelerate adoption may actually slow it down by introducing new vectors of skepticism.
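The arithmetic behind this point is worth making explicit. A minimal sketch (the function name and figures are illustrative, taken from the nine-month build and one-year sales cycle cited above) shows why a large cut in build time yields only a modest cut in time to revenue:

```python
# Time to revenue = build time + sales cycle. The sales cycle
# dominates, so speeding up development barely moves the total.

def time_to_revenue_months(build_months: float, sales_months: float) -> float:
    """Months from the start of development to the first closed deal."""
    return build_months + sales_months

before = time_to_revenue_months(build_months=9, sales_months=12)  # 21 months
after = time_to_revenue_months(build_months=6, sales_months=12)   # 18 months

# Build time fell by a third, but total time to revenue fell only ~14%.
overall_speedup = 1 - after / before
print(before, after, round(overall_speedup * 100))  # 21 18 14
```

Under these assumptions, a 33% faster build translates into roughly a 14% faster path to revenue, and that is before any AI-skepticism-driven extension of the evaluation period.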

The argument holds up well against the backdrop of enterprise software history. Just as the shift to cloud computing didn't instantly shorten sales cycles for complex infrastructure, the introduction of AI agents won't magically dissolve the need for trust. Haleliuk correctly identifies that the price of security products is driven by sales and marketing costs, not development costs. "Until we see sales cycles improve for security, we're gonna be stuck with slow adoption curves for the time being," he concludes.

The Talent Trap

The piece also dismantles the myth that AI allows startups to replace senior engineers with hungry juniors. Haleliuk argues that while tools like Claude Code are powerful, they do not substitute for judgment. "People who are hungry to learn and grow and do new things, but who don't have solid experience, will, over time, outperform those who have experience but are much less motivated to learn and grow," he writes, noting that this dynamic existed before AI and will persist.

He warns that companies trying to cut costs by hiring inexperienced staff to generate code without oversight will face a different kind of crisis. "Companies that hire a bunch of engineers and ask them to use Claude to generate code without establishing real guardrails to make sure the quality of that code is solid will drown in technical debt before they close their first paying customer." This is a stark reminder that in a field defined by precision, the "average of human knowledge" that AI produces is often insufficient for the edge cases that cause breaches.

Furthermore, Haleliuk points out that AI is actually making the engineering challenge harder, not easier. Citing Mrinal Wadhwa, he notes that while generating code is simpler, building reliable, stochastic agent-based systems requires "much more complex architectures." The shift from deterministic web apps to probabilistic agents demands a higher level of architectural rigor, not less. Critics might argue that AI lowers the barrier to entry for coding, but Haleliuk's point is that the barrier to entry for building secure, scalable systems has arguably risen.

The Depth of Defense

Finally, Haleliuk addresses the fear that AI will render commercial security products obsolete through "vibe coding." He argues that enterprises do not buy software because they cannot build it; they buy it because they need ownership, maintenance, and accountability. "Every large enterprise already has a few internal tools whose creators have left (or even passed away), leaving behind fragile systems no one is brave enough to touch," he writes. This historical context of legacy technical debt explains why the market for managed security services remains robust.

He emphasizes that security requires depth, not just breadth. "To identify risks in complex systems, security products have to be five inches wide and 10 feet deep, and that depth is something that comes from human expertise, research, and clear focus, not from telling Claude to write some 'cloud detection logic'." This metaphor effectively captures the essence of the industry: surface-level automation is easy, but deep, contextual understanding is the real moat.

Bottom Line

Ross Haleliuk's argument is a vital corrective to the current frenzy, grounding the conversation in the unglamorous realities of enterprise sales and the irreplaceable value of deep expertise. The piece's greatest strength is its refusal to conflate the speed of code generation with the speed of business growth, a distinction that will save many founders from costly mistakes. However, the argument may understate the potential for AI to eventually disrupt the sales process itself by creating self-service models that bypass traditional CISO gatekeepers. For now, though, the fundamentals of trust and complexity remain the true gatekeepers of the market.


The strongest part of this argument is the clear-eyed assessment that AI amplifies existing capabilities but cannot compensate for a lack of market fit or deep domain knowledge. Its biggest vulnerability is the assumption that the long sales cycle is immutable; while likely true for the next few years, a shift in how enterprises consume security could eventually upend this dynamic. Readers should watch for how the industry adapts to the "stochastic" nature of AI agents, as this will likely be the next major battleground for security startups.

Deep Dives

Explore these related deep dives:

  • Technology adoption life cycle

    The article discusses slow adoption curves in security and how AI doesn't accelerate enterprise sales cycles. Understanding the technology adoption life cycle (innovators, early adopters, early majority, etc.) provides crucial context for why B2B security products face inherently slow market penetration regardless of development speed improvements.

  • Enterprise software

    The article contrasts consumer software development with enterprise security sales, arguing that faster feature shipping doesn't translate to faster growth due to procurement complexity. Understanding the unique characteristics of enterprise software sales cycles, evaluation processes, and buying committees illuminates why security startups face these specific challenges.

  • Proof of concept

    The article argues that AI prototyping tools help create demos but don't validate real demand because 'real validation in B2B only really comes when someone is writing a check.' Understanding the formal concept of proof of concept versus prototype versus minimum viable product clarifies the distinction the author is making about demand validation.

Sources

AI doesn’t make it much easier to build security startups

by Ross Haleliuk · Venture in Security

There are many discussions about how AI is changing the way the cybersecurity industry operates, and I am certainly the last person to argue with this thought. At the same time, I have developed the perspective that for startups, it doesn’t change the game as much as many assume it does. Before I lose you completely, let me explain.

For this conversation to make sense, I think we need to separate two lines of thought: what AI enables for customers, and what AI solves for startups. These are two very different conversations, and while I want to focus the article on the latter, it won’t fully make sense if I don’t briefly address the former.

For customers, AI is transforming how security is done.

Over the past year, it has become clear to me that AI is already transforming how security is done. Now, this is not because LLMs are perfect at detection, or that AI has no gaps (they aren’t, and it does). A much more important reason why I am bullish on the opportunities this wave of AI unlocks is simple. Well over 90% (and some people would even say 95-97%) of security teams’ day-to-day is not some advanced incident response or dealing with nation-states. Most of the security teams’ work has nothing to do with chasing advanced adversaries. Much more than that, it’s boring, mundane stuff like:

Updating reports and dashboards for leadership

Collecting screenshots and evidence for audits

Responding to repetitive access and compliance requests

Reconciling data across tools and systems

Investigating low-priority alerts that never amount to much

Documenting findings and closing out endless tickets

I previously wrote a dedicated deep dive about this if you are interested in reading more: Most of the security teams’ work has nothing to do with chasing advanced adversaries.

The main point here is that all this manual stuff is exactly the ...