AI & Emerging Threats

We Didn't Start the Fire. We Have to Put It Out.

Assaf Keren | April 13, 2026 · 8 min read

When Anthropic announced Project Glasswing and Claude Mythos Preview last week, the security industry did what it always does with a major capability announcement: it reached for the superlatives. Unprecedented. Game-changing. A step function in offensive AI capability.

I want to offer a different frame.

Mythos is significant. And the broader AI transformation it represents is genuinely one of the most consequential shifts the security industry has ever faced. But Mythos itself is not the moment the game changed. That moment came roughly two years ago, and most organizations still haven't fully reckoned with it.

We didn't start this fire. The generative AI revolution that began in 2023 lit it. But we are the ones who will have to put it out. And most organizations are still debating whether to call the fire department. I've written before about how the real AI threat is organizational inaction. That argument has only gotten more urgent.


What Actually Changed, and When

To understand where we are, you have to go back to the transformer architecture that Google introduced in 2017. That paper laid the foundation for a generation of models that could reason about code, understand context across large systems, and generalize across domains in ways that previous approaches couldn't. But the practical security implications didn't arrive with the research. They arrived when the models did.

The generative AI explosion of 2023 is when the equation changed. ChatGPT had already demonstrated that large language models could reason about code at a level that surprised even their creators. GPT-4, Claude, and Gemini followed rapidly, each more capable than the last. What had been theoretical, that AI could find and reason about vulnerabilities at scale, became demonstrable and then deployable within a matter of months.

By late 2023 and into 2024, the tooling was accessible to anyone. Open-source frameworks. Commercial scanners with AI built in. Coding assistants that would help you understand an exploit if you asked the right question. Security researchers were publishing proof-of-concepts. The window between vulnerability disclosure and weaponized exploit, which used to be measured in months, began collapsing. Not because of a single model. Because of an entire ecosystem of increasingly capable tools that sophisticated adversaries could access, adapt, and deploy.

Mythos Preview is the latest and most capable point on that curve. It found a 27-year-old flaw in OpenBSD. A 16-year-old bug in FFmpeg that automated tools missed five million times. Linux kernel vulnerabilities chained together for full machine compromise. All autonomously, without human steering. That is a real capability leap. It is also what comes next on a curve that has been steep for two years.

The point is not to minimize what Mythos is. The point is that organizations that were waiting for a headline to tell them when to act have already missed two years of that curve.


The Timeline Misconception

Every time a major capability announcement lands, the industry frames it as a future risk. How long before adversaries have access to this? How long before it's weaponized at scale? The implicit assumption is that we are in a preparation window.

That assumption has been wrong for years, and it is more wrong now.

Sophisticated adversaries do not need Mythos to operate inside the windows that most organizations' patch cycles were designed for. Criminal groups with real resources have been using AI-assisted tools for reconnaissance, vulnerability discovery, and exploit development for years. Nation-state actors with long time horizons have been ahead of this curve since before most organizations started their AI governance programs.

The urgency is not about Mythos proliferating to bad actors. It is about the fact that the capabilities bad actors already have are sufficient to exploit the posture most organizations are currently running. Mythos will make that worse. But waiting for Mythos to be the reason to act is like waiting for the accelerant to arrive before deciding the building is on fire.


Two Horizons, Two Different Problems

The right way to think about what to do is to separate two distinct horizons, because they demand fundamentally different responses and different kinds of leadership decisions.

The first horizon is the next three to six months. This is an execution horizon, not a planning horizon. The most important thing organizations can do in this window is adopt AI-powered security tooling now: not to evaluate it, not to run another pilot, but to deploy it operationally. Use it to find your own highest-severity vulnerabilities before adversaries find them first. Instrument continuous control monitoring to replace point-in-time assessments that are already stale by the time they are finished. If your security operations are still running on pre-AI tooling, you are bringing last generation's capabilities to a fight where your adversaries have already upgraded.
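To make "continuous control monitoring" concrete, here is a minimal sketch of the core loop: compare observed control states against a baseline on every collection interval and treat any deviation, including a control you can no longer see, as a finding. The control names, expected values, and the snapshot below are illustrative placeholders, not any specific product's schema.

```python
# Minimal sketch of continuous control drift detection.
# Control names and expected states are hypothetical examples.
from dataclasses import dataclass


@dataclass(frozen=True)
class Finding:
    control: str
    expected: str
    observed: str


# Baseline: the state each control should be in at all times.
BASELINE = {
    "mfa_enforced": "true",
    "tls_min_version": "1.2",
    "public_s3_buckets": "0",
}


def detect_drift(observed: dict) -> list:
    """Compare observed control states against the baseline.

    A missing control is itself drift: a monitor that cannot see a
    control cannot attest to it.
    """
    findings = []
    for control, expected in BASELINE.items():
        actual = observed.get(control, "<missing>")
        if actual != expected:
            findings.append(Finding(control, expected, actual))
    return findings


# Run on every collection interval (minutes, not quarters) and alert
# on any non-empty result instead of waiting for the next audit.
snapshot = {"mfa_enforced": "true", "tls_min_version": "1.0"}
for f in detect_drift(snapshot):
    print(f"DRIFT {f.control}: expected {f.expected}, observed {f.observed}")
```

The point of the sketch is the cadence, not the code: the same comparison a quarterly assessment performs once becomes a loop that runs continuously, so drift is caught in minutes rather than discovered months later.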

This is not a future recommendation. It is a present one. The organizations that will navigate the next twelve months well are the ones using AI defensively right now.

The second horizon is the next 18 to 24 months. This is the timeline for architectural transformation: the deeper structural work that cannot be done in weeks and cannot be bought off the shelf. Immutable infrastructure. Automated remediation pipelines built for the velocity the threat demands. AI-augmented detection and response that closes the loop faster than human-scale processes can. These changes require planning, resourcing, and organizational commitment that starts now even though the work plays out over time. They also raise serious questions about what these shifts mean for the workforce pipeline that we cannot afford to defer.

The connection between the two horizons is not sequential. The short-term operational adoption of AI security buys the time and credibility to execute the longer architectural transformation. Organizations that skip the first step because they are focused on the strategic roadmap will find that the roadmap never gets traction.


Design Phase Security Is No Longer Optional

There is a root cause underneath all of this that the Mythos conversation has mostly avoided, and it is worth naming directly.

Most organizations are not in trouble because they have been negligent. They are in trouble because of a maximizing approach to software development that has been the industry default for decades. Build everything. Integrate everything. Keep everything running. Add capabilities, add dependencies, add services, add APIs. The assumption has always been that more is better and that security can be layered on top once the product ships.

The result is attack surfaces that are genuinely enormous: full of software organizations forgot they were running, dependencies they didn't know they had, and integrations that made sense at the time but are now nobody's specific responsibility. When a model can autonomously scan that surface and find vulnerabilities that humans missed for decades, the size of the surface matters enormously.

Two things need to change structurally.

The first is that security has to move to the design phase. Not shift left in the code review sense, but genuinely upstream into the architectural decisions, the technology choices, the dependency selections, the integration designs. Threat modeling and security requirements belong before the first line of code is written, not after the vulnerability is found. Every organization that is still treating security as a review gate at the end of the development process is generating technical debt that AI-assisted adversaries will eventually find and exploit.

The second is minimization. Use only what you need. Expose only what is required. Retire what is no longer necessary. The vulnerabilities that AI finds first are almost always in software that organizations forgot they were running. The attack surface you don't have is the attack surface that can't be exploited. This sounds simple. It requires deliberate, ongoing decisions to resist the pull toward accumulation that is built into most development cultures.
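One small, automatable instance of minimization is flagging declared dependencies that no code actually imports, so they can be retired. The sketch below is illustrative: the package names and the single-file "codebase" are made up, and a real audit would parse actual manifests (requirements.txt, package.json, go.mod) and walk the full source tree.

```python
# Illustrative minimization check: find declared dependencies that
# nothing in the codebase imports. Package names are hypothetical.
import ast

# Map from declared package name to the top-level module it provides.
DECLARED = {"requests": "requests", "pillow": "PIL", "leftpad": "leftpad"}


def imported_modules(source: str) -> set:
    """Collect top-level module names imported by one source file."""
    mods = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            mods.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module.split(".")[0])
    return mods


def unused_dependencies(sources: list) -> set:
    """Declared packages whose module is never imported anywhere."""
    used = set()
    for src in sources:
        used |= imported_modules(src)
    return {pkg for pkg, mod in DECLARED.items() if mod not in used}


codebase = ["import requests\nfrom PIL import Image"]
print(sorted(unused_dependencies(codebase)))  # only 'leftpad' survives
```

A check like this only pays off if it runs continuously and its output feeds an actual removal decision; the hard part is the ongoing discipline the paragraph above describes, not the scan itself.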

Neither of these is a new idea. Security teams have been advocating for both for years. What has changed is the cost of ignoring them. In an environment where AI-assisted adversaries can systematically scan everything, the gap between organizations that build security in and organizations that bolt it on is no longer a risk management consideration. It is an existential one.


What This Means Right Now

Project Glasswing is the right response to this moment. A coordinated effort to put frontier AI capabilities to work for defenders before they proliferate broadly to adversarial actors is exactly the kind of collective action the industry needs, and the coalition behind it is serious.

But the defensive advantage only goes to organizations that are moving. The model finds your vulnerabilities. You still have to fix them at a pace that matches the threat, not the pace that fits your change management calendar.

The security leaders who will come out of this period with credibility and capability are doing three things: deploying AI security tooling operationally right now, building the architectural foundations that make remediation fast, and applying the principle of minimization to reduce the surface that needs defending in the first place.

The transformer revolution started in 2017. The practical security inflection point arrived in 2023. Mythos is not the beginning of this story. It is a marker of how far we have traveled in two years, and how much further this will go.

The fire didn't start last week. It started in 2023. But it is burning faster now. And the organizations that are still treating this as a future problem are running out of time to change their minds.