
What we read this week

Daniel Kelley returns to the cybersecurity discourse with a jarring diagnosis: the industry is not merely adapting to artificial intelligence; it is being rendered obsolete by its own tools. The piece's most startling claim is that the traditional security market—built on selling software licenses and human consulting hours—is facing an imminent extinction event because AI agents can now perform the work of both the developer and the analyst. Kelley argues that we have entered a phase where the gap between recognizing a problem and having a solution is wider than at any point in the last quarter-century.

The Governance Vacuum

Kelley opens by highlighting a chaotic landscape where "AI has taken over the world and cybersecurity is no different." He points to a paradox at the recent RSAC 2026 conference: while eleven keynote speakers agreed on the necessity of securing AI agents, they offered "zero ways to actually do it." This creates a scenario where "everyone knows the house is on fire but nobody can find the extinguisher." The author's framing here is effective because it moves beyond hype to expose a critical operational blind spot. The industry is scrambling to secure code that is being generated at a scale and speed that human oversight cannot match.


The core of Kelley's argument regarding code security is that the problem is not just bad code, but the loss of context. He writes, "The problem is that AI writes code at scale, without context, and without the institutional memory that a human developer carries about why certain patterns exist in a codebase." This is a crucial distinction. Traditional tools scan for known vulnerabilities, but they cannot understand the intent behind a piece of code generated by an algorithm. As Kelley notes, "The governance challenge isn't about scanning output. It's about understanding intent, provenance, and drift."
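To make the distinction between scanning output and tracking intent, provenance, and drift concrete, here is a minimal sketch in Python of the kind of provenance-aware merge gate this emerging category implies. The ChangeProvenance fields, the policy rules, and the example values are hypothetical illustrations for this review, not anything specified in Kelley's piece or in Latio's.

```python
from dataclasses import dataclass

@dataclass
class ChangeProvenance:
    """Hypothetical provenance record attached to a pull request."""
    author: str            # developer who prompted or wrote the change
    generated_by_ai: bool  # was the diff produced by a code assistant?
    model: str | None      # which model produced it, if known
    human_reviewed: bool   # has a human actually read the diff?
    touches_auth: bool     # does the change modify authentication code?

def merge_allowed(p: ChangeProvenance) -> tuple[bool, str]:
    """Toy policy: gate on provenance, not only on scanner findings."""
    if p.generated_by_ai and p.model is None:
        return False, "AI-generated change with unknown model: provenance required"
    if p.generated_by_ai and p.touches_auth and not p.human_reviewed:
        return False, "AI-generated change to auth code needs a human reviewer"
    return True, "ok"

# The junior-dev-generates-an-auth-module case from the article:
pr = ChangeProvenance(author="jdoe", generated_by_ai=True, model="cursor",
                      human_reviewed=False, touches_auth=True)
print(merge_allowed(pr))
```

The point of the sketch is that the gate keys on who and what produced the change, not on what a scanner found in it, which is exactly the shift from output scanning to provenance that Kelley describes.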

Critics might argue that human oversight will eventually catch up to these tools, but the math presented by Kelley suggests otherwise. He cites Andrej Karpathy's observation that we are shifting from humans writing code with AI assistance to "AI writing code with human oversight." When AI generates code 100 times faster than a human can review it, the bottleneck becomes the reviewer, not the creator. Kelley warns that "you cannot manually review AI-generated pull requests at the rate they're being created," a reality that renders current security review processes "already underwater."
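The arithmetic behind "already underwater" is easy to sketch. Taking the review's illustrative 100x ratio at face value, and with invented per-day numbers, the unreviewed backlog grows almost as fast as the generation rate itself:

```python
# Toy model of the review bottleneck. The 100x ratio is the one cited
# above; the per-day figures are invented purely for illustration.
review_rate = 4                       # PRs one human can meaningfully review per day
generation_rate = 100 * review_rate   # agents open PRs 100x faster

backlog = 0
for day in range(1, 11):
    backlog += generation_rate - review_rate
    print(f"day {day:2d}: unreviewed PRs = {backlog}")
# After ten days a single reviewer is 3,960 PRs behind.
```

Adding reviewers changes the constant, not the shape of the curve, which is the reviewer-as-bottleneck point.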

The security industry spent decades learning to secure code humans write. We now have approximately twenty months to figure out how to secure code that nobody wrote.

From Tools to Outcomes

Kelley pivots to the business model implications, drawing heavily on a thesis from Sequoia Capital that suggests a fundamental shift from "software as a service" to "service as software." The author argues that the next trillion-dollar company will not sell a dashboard but will sell the outcome. This reframes the entire value proposition of cybersecurity vendors. If an AI agent can triage alerts, investigate incidents, and write detection rules, the massive managed security services (MSSP) market is no longer a services market; it becomes a software market.
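As a rough illustration of "service as software", here is a hedged sketch of the alert-triage loop an agent might run in place of a tier-1 analyst. The call_llm callable, the alert fields, and the verdict labels are hypothetical stand-ins invented for this review, not any vendor's actual API.

```python
from typing import Callable

# Hypothetical stand-in for whatever model or agent backend does the reasoning.
LLM = Callable[[str], str]

def triage_alert(alert: dict, call_llm: LLM) -> dict:
    """Sketch of 'service as software': the output is a verdict and an
    escalation decision, not a console for a human analyst to work."""
    prompt = (
        "You are a SOC analyst. Classify this alert as benign, suspicious, "
        "or malicious, and justify briefly.\n"
        f"source={alert['source']} rule={alert['rule']} details={alert['details']}"
    )
    verdict = call_llm(prompt)
    return {
        "alert_id": alert["id"],
        "verdict": verdict,
        "escalate": "malicious" in verdict.lower(),
    }

# Usage with a canned stub standing in for a real model:
fake_llm = lambda _prompt: "suspicious: anomalous login from a new ASN, low confidence"
print(triage_alert(
    {"id": "A-1042", "source": "okta", "rule": "impossible-travel",
     "details": "logins from two countries four minutes apart"},
    fake_llm,
))
```

In this framing the deliverable is the verdict and the escalation decision; the software replaces the billable hour rather than assisting it.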

The author writes, "The next war in cybersecurity isn't over features. It's over which vendors use agents to eliminate the need for services entirely." This is a provocative stance that challenges the revenue models of established players. Kelley suggests that vendors who fail to make this transition will become "commoditized infrastructure underneath someone else's agent." The argument is compelling because it aligns with the broader trend of automation replacing labor-intensive processes, yet it carries a significant risk: if the AI agents make mistakes, the liability shifts from the human consultant to the software vendor, a legal and operational minefield that the article touches on but does not fully explore.

The Collapse of Categories

Perhaps the most insightful part of Kelley's commentary is his analysis of how AI is breaking traditional market categories. He references Frank Wang's thesis that "AI-native security companies don't fit into existing market categories, and Gartner's Magic Quadrants are going to look increasingly absurd trying to classify them." When a single product can perform continuous pentesting, automated remediation, and compliance reporting, asking "is this a vulnerability management tool or a GRC platform?" becomes the wrong question.

Kelley observes that "the map no longer matches the territory." The industry is stuck in a "twenty vendors, zero standards" phase, reminiscent of cloud security in 2016. The danger for investors and buyers is relying on outdated frameworks. As Kelley puts it, "Next time a vendor tells you they're the 'leader' in a Gartner category, ask them which category they'll be in when that quadrant doesn't exist anymore." This is a sharp critique of the analyst industry's inability to keep pace with technological convergence. While the argument is strong, it assumes that the market will naturally consolidate around process flows rather than specialized tools, a transition that could be messy and slow.

The gap between 'we know this is a problem' and 'we have a plan' is the widest I've seen in twenty-five years. That gap is also where every interesting company in 2026 is being built.

Bottom Line

Daniel Kelley's piece is a vital wake-up call for an industry that has been slow to recognize the magnitude of the shift from tools to autonomous agents. Its greatest strength is the clear articulation of the "governance vacuum"—the realization that we are deploying powerful AI agents without the frameworks to secure them or the workforce to review them. The argument's vulnerability lies in its assumption that the market will rapidly adapt to this new reality; the transition from human-led security to agent-led security may be far more chaotic and risky than the timeline suggests. Readers should watch for which vendors successfully pivot to selling outcomes rather than features, as these will likely define the next decade of the industry.

Sources

What we read this week

by Daniel Kelley · The Cyber Why

We’re BACK! Yep that’s right, after nearly 18 months away we’ve decided to pick up the pen and get back after it. The cyber world is a completely different place than when you last saw us. AI has taken over the world and cybersecurity is no different. We’ve been staring at the newsfeed for the last seven days and every single article is about AI. AI agents doing pentesting. AI agents replacing SaaS. AI agents that need their own security stack. Sequoia is telling us the next trillion-dollar company sells work, not tools. Karpathy is telling us engineers are irrelevant to their own workflows. And eleven keynote speakers at RSAC 2026 all agreed on exactly one thing: we need to secure AI agents, all while agreeing on exactly zero ways to actually do it. It’s giving “everyone knows the house is on fire but nobody can find the extinguisher” vibes. We’re glad to be back and we hope you love the new content - more coming soon!

[AI + Security].

AI Code Security: Enterprise Governance for AI Generated Code (Latio)

James Berthoty over at Latio dropped a killer piece that should be required reading for every CISO trying to figure out what to do about the influx of AI-generated code flooding their repositories. We're watching a brand new security category emerge in real time, AI Code Security, and it's distinct from traditional SAST, DAST, or SCA. The problem isn't that AI writes bad code (though it does). The problem is that AI writes code at scale, without context, and without the institutional memory that a human developer carries about why certain patterns exist in a codebase.

The governance challenge isn’t about scanning output. It’s about understanding intent, provenance, and drift. When a junior dev uses Cursor to generate an authentication module, who owns the security posture of that code? The dev who prompted it? The AI that wrote it? The platform team that approved the model? Traditional AppSec tooling wasn’t built for this question because the question didn’t exist eighteen months ago. The companies that figure out AI code governance first (think policy engines that sit between the model and the merge request) are building the next foundational layer of the DevSecOps stack.

The security industry spent decades learning to secure code humans write. We now have approximately twenty months to figure out how to secure code that nobody wrote.
