Ross Haleliuk delivers a sobering reality check for the cybersecurity sector, arguing that the industry's fundamental nature makes it structurally incapable of matching the hyper-accelerated growth curves now defining the AI startup ecosystem. While the broader market chases "Supernova" growth, with $100 million in annual revenue reached in under two years, Haleliuk contends that security will remain a "Cloud Centaur": durable and profitable, but bound by the slow, deliberate pace of enterprise trust. This distinction matters for investors and founders alike, as it suggests the current capital flight from security to AI is not a temporary market correction but a rational response to divergent business models.
The Speed Mismatch
Haleliuk anchors his argument in data from Bessemer's "State of AI 2025" report, which highlights a dramatic compression in the time required to reach $100 million in annual recurring revenue (ARR). He notes that while top software companies previously took seven years to hit this milestone, new AI "Supernovas" are achieving it in just 1.5 years. "Supernovas are the AI startups growing as fast as any in software history," Haleliuk writes, quoting the report to illustrate the sheer velocity of the new wave. These companies often sprint from seed funding to massive scale in their first year of commercialization, a feat that would have been unthinkable in the traditional SaaS era.
The author contrasts this with the "Shooting Stars" of the AI world, which grow faster than traditional software but still face scaling bottlenecks. Even these are outpacing the security sector. Haleliuk points out that the industry is facing a new benchmark where "Q2T3 (quadruple, quadruple, triple, triple, triple) better reflects the five-year trajectory we're seeing from today's AI Shooting Stars," a stark departure from the T2D3 (triple, triple, double, double, double) growth pattern that defined the previous decade. This reframing is essential: it suggests the old metrics for success are not just outdated but actively misleading for anyone trying to navigate the current capital landscape.
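The gap between the two benchmarks is easy to quantify. A minimal sketch, using the multipliers spelled out by the acronyms themselves; the $2 million starting ARR is an illustrative assumption, not a figure from the article:

```python
def trajectory(start_arr, multipliers):
    """Apply each annual growth multiplier and return year-end ARR values."""
    path = []
    arr = start_arr
    for m in multipliers:
        arr *= m
        path.append(arr)
    return path

# T2D3: triple, triple, double, double, double
# Q2T3: quadruple, quadruple, triple, triple, triple
t2d3 = trajectory(2.0, [3, 3, 2, 2, 2])  # [6, 18, 36, 72, 144] ($M)
q2t3 = trajectory(2.0, [4, 4, 3, 3, 3])  # [8, 32, 96, 288, 864] ($M)

print(f"Year-5 ARR under T2D3: ${t2d3[-1]:.0f}M")      # $144M
print(f"Year-5 ARR under Q2T3: ${q2t3[-1]:.0f}M")      # $864M
print(f"Q2T3 ends {q2t3[-1] / t2d3[-1]:.0f}x higher")  # 6x
```

From the same starting point, the Q2T3 path ends six times higher after five years, which is why companies graded against the new benchmark make the old one look like a ceiling rather than a target.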
Security moves with the speed of trust, not the speed of shipping new features.
The Trust Bottleneck
The core of Haleliuk's analysis lies in the distinction between product development and go-to-market (GTM) strategies. He argues that while AI has undeniably accelerated the ability to ship code, it has not—and cannot—accelerate the procurement processes of large enterprises. "In cyber, product is the game of inches, but GTM is the game of miles," Haleliuk writes, emphasizing that the most technologically superior products often lose to better distribution. This dynamic is particularly acute in security, where the stakes involve risk mitigation rather than efficiency gains.
He observes that the very presence of AI in a security product can actually slow down sales cycles. "Interestingly enough, as AI is accelerating the speed of shipping new features, it's actually slowing down the speed of trust," he notes, explaining that enterprises are now scrutinizing how AI models handle data, leading to longer proofs of concept (POCs) and more rigorous vetting. This creates a paradox where the technology meant to speed things up becomes a barrier to entry. Critics might argue that as AI becomes more ubiquitous, these trust barriers will eventually erode, but Haleliuk's point holds weight: the fear of hallucinations or data leakage in a security context is a higher-order risk that buyers will not rush to accept.
The Venture Capital Exodus
Perhaps the most provocative claim in the piece is the prediction that generalist venture capital firms will abandon the cybersecurity sector. Haleliuk suggests that the incentive structures of these funds are misaligned with the reality of security growth. "Generalist VCs operate under a different set of incentives," he writes, noting that they are not bound to a specific vertical and will naturally gravitate toward the "AI Supernovas" that promise faster returns. He illustrates this with a hypothetical comparison: a fintech startup hitting $5 million in ARR in ten months will look far more attractive to a generalist than a security firm on track for $1.7 million in a year and a half, even if the latter is performing exceptionally well within its own context.
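On a simple run-rate basis, the generalist's preference in that hypothetical is stark. A small sketch using the article's own figures; the average "ARR added per month" metric is our simplification, not Haleliuk's:

```python
# Figures from the article's hypothetical comparison; the per-month
# pace metric is an illustrative simplification.
fintech_arr, fintech_months = 5.0, 10     # $5M ARR in 10 months
security_arr, security_months = 1.7, 18   # $1.7M ARR in 18 months

fintech_pace = fintech_arr / fintech_months     # 0.50 ($M added per month)
security_pace = security_arr / security_months  # ~0.09 ($M added per month)

print(f"Fintech adds ${fintech_pace:.2f}M ARR/month")
print(f"Security adds ${security_pace:.2f}M ARR/month")
print(f"Pace gap: {fintech_pace / security_pace:.1f}x")  # ~5.3x
```

A generalist scanning a portfolio sees a roughly fivefold difference in pace before any discussion of market durability even begins.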
This shift mirrors historical patterns in tech, such as the dot-com bubble of the late 1990s, where capital flooded into high-growth, low-trust models before the inevitable correction. Haleliuk warns that unless security budgets expand indefinitely or buyers become less risk-averse, the sector will struggle to compete for attention. "I think we'll see more generalist VCs leaving security," he predicts, arguing that the "tourists" will depart, leaving behind only specialists who understand the long-term value of the space. This natural selection process, while painful, may ultimately strengthen the industry by filtering out hype-driven ventures.
It's not that security startups are bad investments; they're just different, and these differences are structural.
Bottom Line
Haleliuk's most compelling insight is the structural inevitability of the divergence between AI and security growth rates; the industry's reliance on trust and risk reduction makes the "Supernova" trajectory effectively unattainable for most players. The argument's greatest vulnerability lies in its assumption that generalist capital will exit completely, potentially underestimating the sheer volume of money chasing AI that might still spill over into security as a defensive hedge. However, the piece serves as a vital corrective to the hype cycle, urging founders and investors to stop comparing their progress to AI benchmarks and instead embrace the slower, more durable path of the "Cloud Centaur."