Finimize Analyst Desk

AI Is Forcing A Cybersecurity Reset. Here’s How It’s Changing The Cybersecurity Portfolio

That $15 billion market wipeout has revealed a hard truth: AI will change the game for this industry, too. Here are the changes we’re making to our cybersecurity basket.

Theodora Lee Joseph, CFA
Mar 05, 2026

A year ago, I made a case for cybersecurity as a long-term investment. My argument was simple: it’s the “digital tax” of the modern world. When everything from the power grid to your personal bank account operates online, security is essential. That’s why I liked the space. It felt almost recession-proof. A company might trim its marketing budget or delay hiring, but it’s not about to switch off the digital burglar alarm.

Then, last month happened. The “cybersecurity bloodbath” wiped out more than $15 billion in market value in a matter of days, in a panic triggered by Claude Code Security.

To understand why investors got so nervous, you have to understand what this tool actually represents. For years, AI in security was like having a brighter flashlight – it helped humans spot bugs faster. Claude Code Security is different: it’s an autonomous repair squad. It finds the hole in your digital fence, reasons through the problem, and patches the fence itself.

That raised an uncomfortable question: if AI can automatically fix security problems at the source, why pay millions to specialized cybersecurity vendors? In short, are those firms about to become obsolete?

What threat does AI actually pose to cybersecurity?

AI does pose a real threat – but it’s a threat to certain pricing models and weaker players, not to the existence of cybersecurity itself.

First, there’s the “built-in” threat. If AI coding agents (like Claude Code) can write near-“perfect” code and patch their own vulnerabilities as they go, the need for a separate monitoring layer starts to disappear. This is the commoditization risk: if your AI dev tool can scan and fix issues as you build, it’s hard to justify paying a premium for another company whose whole pitch is “we detect threats”.

Then, there’s seat compression. A lot of cybersecurity companies charge per endpoint, per analyst, or per employee, so their revenue scales with the number of human workers interacting with systems. But AI is a labor multiplier. If one AI-augmented Tier 1 security analyst can now do the work of five, ten, or even twenty, the number of billable seats declines. Even if the vendor stays in place, the invoices shrink.

That said, there’s a massive flip side. While AI disrupts old business models, it creates entirely new demand.

We’re moving into the “agentic era”. For every human online now, there are roughly 82 AI agents (autonomous bots) working away in the background. That creates a massive new attack surface. Hackers can now deploy millions of tiny, sophisticated strikes at once. Humans can’t defend against that manually. The only real counterweight is more AI.

And a build-versus-maintain reality check is happening. Investors feared that companies would use AI to build their own security tools and cut vendors out. But building a tool is maybe 15% of the work. The other 85% is maintenance – constantly updating for new threats and meeting standards like FedRAMP or SOC 2. AI can generate code, but it doesn’t yet take long-term operational responsibility, navigate regulatory scrutiny, or provide enterprise-grade accountability.

What will separate the winners and losers in the industry?
