There’s only one kind of tool security teams should be building with AI

Ross Haleliuk delivers a necessary reality check to the current hype cycle surrounding artificial intelligence in cybersecurity, challenging the widespread belief that generative models will soon enable every organization to build its own security tools. While the industry buzzes with the idea that AI democratizes product development, Haleliuk argues that the fundamental barriers of expertise, liability, and maintenance remain insurmountable for the vast majority of enterprises. This is a crucial intervention for busy leaders who might otherwise mistake code generation for the ability to engineer resilient, enterprise-grade defense systems.

The Illusion of Democratized Engineering

The core of Haleliuk's argument rests on the distinction between writing code and building functional, secure products. He rightly identifies that while AI accelerates syntax generation, it does not conjure the deep domain knowledge required to navigate complex enterprise environments. "Software that can withstand real-world enterprise environments isn't some random CRUD (create - read - update - delete) apps," Haleliuk writes, emphasizing that security tools demand architectural rigor and an understanding of edge cases that AI cannot simply hallucinate into existence. This observation is particularly sharp because it cuts through the noise of "vibecoding"—a term often used to describe the casual, experimental approach to building software that is dangerously ill-suited for critical infrastructure.
The author suggests that AI will not level the playing field but rather widen the gap between elite engineering organizations and everyone else. "If you're a Bay Area-style, product-driven company that already has strong security engineers building internal tooling, AI is going to amplify that big time," he notes. For companies like OpenAI or Google, where engineering is in the DNA, building in-house makes sense. However, for the rest of the world, the lack of senior talent remains a hard constraint. "Companies that didn't have the engineering talent aren't going to suddenly build their own tools because the three SOC analysts they can afford to hire now have Claude," Haleliuk argues. This reframing is vital; it shifts the conversation from a question of technological capability to a reality of human capital.

Critics might argue that AI agents will eventually automate the architectural decisions and edge-case handling that currently require human intuition. However, Haleliuk's point about the scarcity of senior security talent suggests that the bottleneck is not the ability to write code, but the ability to validate it against sophisticated threat models.

Security vendors industrialize very limited expertise, turning scarce security talent into software.

The Hidden Costs of Ownership and Liability

Beyond the initial build, Haleliuk pivots to the often-overlooked economics of software maintenance. He contends that while AI lowers the upfront cost of development, it does little to reduce the total cost of ownership, which is dominated by the long tail of maintenance, updates, and integration management. "Who is going to update integrations when APIs change? Who is going to refactor the system when there's too much tech debt?" he asks, highlighting a problem that mirrors the historical challenges of Shadow IT. Just as unauthorized software proliferates in the absence of governance, internally built security tools can become unmanageable black boxes if the organization lacks the dedicated resources to maintain them.

The argument extends to the realm of liability, a factor that CISOs cannot afford to ignore. When a company builds its own tool, it assumes 100% of the risk. "Few CISOs are going to sleep well if their already under-resourced teams start vibecoding their own security tooling," Haleliuk writes, pointing out that auditors and insurance underwriters may not accept custom solutions as valid controls. This is a sobering reminder that in the eyes of regulators, "we built it ourselves" is rarely a defensible strategy for compliance. The risk of invalidating cyber insurance coverage or failing an audit is a tangible deterrent that outweighs the allure of customization.

Furthermore, the author touches on the concept of Goodhart's law, noting that when a measure becomes a target, it ceases to be a good measure. In the context of building tools, the pressure to "ship fast" with AI assistance can lead to metrics that look good on a dashboard but fail to provide actual security coverage. Haleliuk warns that internal tools lack the network effects that commercial vendors enjoy. "Internal tools don't get the benefit of having shared intelligence or even lessons learned from incidents that happen at other companies," he explains. A vendor sees thousands of attack vectors across different environments; a single enterprise only sees its own, creating a blind spot that no amount of AI prompting can fix.

The Verdict on In-House Security

Haleliuk concludes that while there is a specific niche for internal tooling, it should not be mistaken for a replacement of the broader security ecosystem. He suggests that the number of companies capable of building their own security tools will grow, but only marginally. "If I were to guess, I'd think that instead of 1-2% of companies that number will grow to 4-7% but it's not going to be 70% or even 30%," he predicts. This modest projection serves as a grounding force against the hyperbolic claims often found in industry discourse.

The piece effectively dismantles the notion that AI is a magic wand for security operations. Instead, it positions AI as a force multiplier for those who already possess the necessary expertise, while leaving the rest of the market reliant on specialized vendors. "AI makes writing code super quick, but it makes skills like systems thinking and architecture more critical than ever," Haleliuk asserts. This distinction is the most valuable takeaway for any leader evaluating their security strategy.

AI makes writing code super quick, but it makes skills like systems thinking and architecture more critical than ever.

Bottom Line

Ross Haleliuk's analysis is a masterclass in separating technological possibility from operational reality, correctly identifying that the scarcity of senior security talent and the high cost of ownership are the true barriers to entry, not the ability to generate code. The argument's greatest strength is its focus on liability and the lack of network effects in internal tools, offering a pragmatic counter-narrative to the current build-everything frenzy. Leaders should watch for how this dynamic plays out as AI agents become more autonomous; while the tools may get smarter, the need for human oversight and institutional expertise will only grow more critical.

Deep Dives

Explore these related deep dives:

  • Goodhart's law

    The author's argument that AI can write code but cannot replicate deep domain expertise highlights Goodhart's law, showing how optimizing for the metric of 'speed of development' can distort the actual goal of building secure, resilient systems.

  • Technical debt

    The piece's emphasis on the difficulty of maintaining tools in messy, heterogeneous enterprise environments serves as a practical case study for technical debt, explaining why quick AI-generated prototypes often become unmanageable liabilities rather than long-term assets.

Sources

There’s only one kind of tool security teams should be building with AI

by Ross Haleliuk · Venture in Security