
Anthropic’s Mythos Preview and Project Glasswing signal an important shift in offensive cyber capability. According to Anthropic, Mythos can identify and exploit zero-day vulnerabilities in major operating systems and browsers, autonomously chain multiple vulnerabilities, and accelerate N-day weaponization, while Glasswing is a controlled initiative designed to help secure critical software with early access to the vulnerabilities discovered by Mythos.
For defenders, the implication is not simply “patch faster.” It is that some of the friction organizations have historically relied on, such as the time, skill, and effort required to turn vulnerabilities into working exploits, may be decreasing. Vulnerability management programs built around backlog reduction, severity sorting, and slow coordination are already under strain. In a world where exploit discovery, exploit development, and exploit chaining compress, those programs become less fit for purpose.
This advisory outlines what changed, why it matters, and how organizations should adapt their vulnerability management programs to improve readiness for AI-accelerated exploitation.
On April 7, 2026, Anthropic published its Mythos Preview research and launched Project Glasswing. Anthropic claims Mythos is capable of identifying and exploiting zero-days, assisting non-experts in creating working exploits, and building sophisticated exploit chains across major platforms. Anthropic also claims that more than 99% of the vulnerabilities it found remain undisclosed because they are not yet patched.
Whether every public claim is eventually validated in the open is not the main point. The strategic shift is clear: exploit development is becoming faster, more scalable, and less dependent on elite (or even mid-level) operator skill. Anthropic explicitly notes that some security assumptions based on making exploitation tedious rather than impossible may weaken against models with strong offensive cyber capability.
For most organizations, the immediate question is not whether they can detect vulnerabilities. It is whether their vulnerability management program can answer the questions that matter, fast enough:
All of this comes on top of something organizations simply must have in place: a clear, structured, and well-practiced blueprint for handling an incident once one is discovered, which in today’s environment is more complex and faster-paced than ever.
Programs that still optimize primarily around severity, aging, and ticket throughput will struggle. Programs that unify assets and findings, enrich them with business and operational context, and turn prioritization into fast, governed action will be better positioned to adapt.
This is the practical shift from traditional vulnerability management toward broader, continuous exposure management, and from exposure management toward exposure readiness: the ability to identify, prioritize, and reduce the exposures that matter before exploit timelines outrun response.

This should not be read as just another call for emergency or faster patching. It changes the shape of the defensive problem.
Anthropic’s description points to three important shifts. First, the model can discover subtle and sometimes very old vulnerabilities. Second, it can help build sophisticated exploit chains rather than just identify isolated bugs. Third, it can dramatically lower the skill barrier for exploit development. Together, those shifts compress the distance between exposure and exploitation.
That matters because many vulnerability management programs still operate on an outdated assumption: finding issues is relatively easy, but turning them into reliable exploits remains difficult, slow, or specialized enough that defenders have time to prioritize and respond. In an AI-accelerated exploitation environment, that assumption becomes less reliable.
The core problem is no longer just vulnerability visibility. It is exposure readiness. Organizations need to know not only what exists, but what matters most in their environment when technical exposure is combined with:
What changes in an AI-accelerated exploitation environment
This does not mean every vulnerability suddenly becomes urgent, or that every finding requires immediate remediation. It means programs need a more contextual and operationally grounded way to decide what matters most, and how quickly they can safely act.
Most programs were built to manage volume. They focus on scans completed, criticals opened, criticals closed, MTTR by severity, and aging by ticket class. Those are useful operational metrics, but they are not enough in an environment where exploit timelines compress and exploit chaining becomes easier.
Several familiar weaknesses have become more dangerous in this model.
Backlog-centric operations
A backlog is not a readiness model. Long lists of unresolved findings do not tell security leaders which exposures matter most under real-world exploit pressure. They only create managerial overhead, frustration over vulnerability lists that never shrink, and an unmanageable haystack.
Severity-first prioritization
CVSS, vendor criticality, and static exploit feeds were never sufficient on their own. In an AI-accelerated exploitation environment, they become even less useful unless paired with business, operational, and organizational context.
Weak owner resolution
Critical findings without a clearly accountable owner are not just workflow annoyances. They are readiness failures.
Fragmented asset and finding data
If asset inventories are incomplete, findings are duplicated across scanners, or critical assets are poorly mapped to business services, visibility is incomplete and prioritization remains theoretical.
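One piece of the fragmentation problem, duplicate findings reported by multiple scanners, can at least be illustrated concretely. The sketch below collapses raw findings into unique exposures keyed on a normalized asset identifier plus CVE. The `Finding` fields and the asset-plus-CVE key are illustrative assumptions; real correlation logic typically needs richer matching (hostnames vs. IPs, cloud resource IDs, plugin-level identifiers).

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Finding:
    scanner: str
    asset_id: str   # hypothetical normalized asset identifier
    cve: str
    severity: str


def dedupe_findings(findings):
    """Collapse duplicate findings reported by multiple scanners.

    Two findings are treated as the same exposure when they share a
    normalized asset identifier and CVE, regardless of which scanner
    reported them. Returns a dict mapping exposure key -> raw findings.
    """
    merged = {}
    for f in findings:
        key = (f.asset_id.lower(), f.cve.upper())
        merged.setdefault(key, []).append(f)
    return merged


findings = [
    Finding("scanner-a", "web-01", "CVE-2026-0001", "critical"),
    Finding("scanner-b", "WEB-01", "cve-2026-0001", "high"),
    Finding("scanner-a", "db-02", "CVE-2026-0002", "medium"),
]
unique = dedupe_findings(findings)
print(len(unique))  # 2 unique exposures from 3 raw findings
```

Note that the two scanners even disagree on severity for the same exposure; deciding which value wins is itself a normalization policy the program has to own.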
Slow remediation coordination
Even when a team knows what matters, handoffs across security, infrastructure, engineering, and business owners often introduce days of delay. In a compressed exploit environment, coordination overhead becomes part of the exposure.
These themes also align with the broader modernization of exposure management, where the bottleneck has shifted from basic detection toward contextual prioritization, remediation assistance, and outcome-based reduction of exposure.

Organizations do not need to rebuild their vulnerability programs from scratch. But they do need to evolve them in several important ways.
1. Make contextual prioritization the center of the program
Exploitability matters, but it is only one dimension of risk.
Programs should prioritize exposures using a broader decision model that combines:
The real question is not whether something looks severe in theory. It is whether it matters in this environment, to this business, at this moment, and whether the organization can act on it in time.
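To make that decision model concrete, here is a minimal scoring sketch that blends technical severity with business and operational context. Every field name, weight, and threshold is an illustrative assumption, not a standard formula, and real programs would calibrate these against their own environment.

```python
def priority_score(finding):
    """Blend technical severity with business and operational context.

    All fields and weights below are illustrative assumptions.
    """
    score = finding["cvss"] / 10.0               # normalize CVSS to 0..1
    if finding.get("exploit_available"):          # e.g. known exploited in the wild
        score *= 1.5
    if finding.get("internet_facing"):            # reachable attack surface
        score *= 1.4
    score *= {"crown_jewel": 1.5, "standard": 1.0, "low": 0.6}[
        finding.get("business_tier", "standard")  # business criticality
    ]
    if finding.get("compensating_control"):       # e.g. segmentation, WAF rule
        score *= 0.7
    return round(min(score, 2.0), 2)


# A CVSS 9.8 on an isolated lab box vs. a CVSS 7.5 on an
# internet-facing crown-jewel asset with a public exploit:
lab_box = {"cvss": 9.8, "business_tier": "low"}
crown_jewel = {"cvss": 7.5, "exploit_available": True,
               "internet_facing": True, "business_tier": "crown_jewel"}
print(priority_score(lab_box), priority_score(crown_jewel))
```

The example captures the point of the section: the nominally "critical" lab finding scores below the nominally "high" finding once context is applied.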
2. Elevate context to a first-class requirement
Programs need a consistent way to attach technical findings to:
Without that, prioritization remains generic. In a faster exploit environment, incomplete context is not just an analytics problem. It becomes an execution problem.
3. Measure readiness, not just throughput
Executive reporting should shift away from “how many criticals were closed” and toward questions like:
This change matters because activity metrics can create a false sense of progress while critical exposures remain unresolved. Outcome-based and readiness-based reporting better reflects actual defensive posture.
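Readiness-oriented metrics of this kind can be computed directly from finding data. The sketch below derives two examples, the share of critical exposures with a resolved owner and the median lag between a remediation decision and action, assuming hypothetical record fields (`severity`, `owner`, `decided_at`, `actioned_at`).

```python
from datetime import datetime, timedelta


def readiness_metrics(exposures):
    """Compute readiness-oriented metrics over open critical exposures.

    Expects dicts with hypothetical fields: severity, owner (None if
    unresolved), decided_at, actioned_at (None if not yet actioned).
    """
    critical = [e for e in exposures if e["severity"] == "critical"]
    if not critical:
        return {}
    owned = sum(1 for e in critical if e["owner"])
    lags = sorted(
        (e["actioned_at"] - e["decided_at"]).total_seconds() / 3600
        for e in critical if e.get("actioned_at")
    )
    return {
        "pct_critical_with_owner": round(100 * owned / len(critical), 1),
        "median_decision_to_action_hours": lags[len(lags) // 2] if lags else None,
    }


now = datetime(2026, 4, 10, 12, 0)
exposures = [
    {"severity": "critical", "owner": "platform-team",
     "decided_at": now - timedelta(hours=6),
     "actioned_at": now - timedelta(hours=2)},
    {"severity": "critical", "owner": None,
     "decided_at": now - timedelta(hours=30), "actioned_at": None},
]
metrics = readiness_metrics(exposures)
print(metrics)
```

Metrics like these surface the failure modes the advisory describes, unresolved ownership and decision-to-action latency, rather than ticket throughput.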
4. Build remediation readiness, not just remediation workflows
As exploit timelines compress, vulnerability management programs cannot stop at better triage. They also need to reduce the time between decision and action.
That does not mean blindly automating fixes. It means preparing the program to support governed, high-confidence remediation workflows that can move faster and require less coordination when the risk, scope, and remediation path are well understood. In many cases, actions may include patching, configuration changes, compensating controls, segmentation, access restrictions, or other bounded response measures.
Organizations should assess whether they are ready to:
In this model, automation is not a replacement for human judgment. It is a force multiplier for mature programs that already have the visibility, context, ownership clarity, and governance needed to act safely under pressure.
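A governed remediation gate of the kind described above might look like the following sketch: automation proceeds only when scope, confidence, approval, and rollback conditions are all met. The thresholds, field names, and change classes are illustrative assumptions, not a prescribed policy.

```python
def can_auto_remediate(action):
    """Gate for governed, high-confidence automated remediation.

    Returns (allowed, reason). All thresholds and fields below are
    illustrative assumptions for a hypothetical remediation record.
    """
    if action["blast_radius"] > 50:
        return False, "blast radius too large for automation"
    if action["confidence"] < 0.9:
        return False, "remediation path not sufficiently validated"
    if not action["owner_approved"] and action["change_class"] != "pre_approved":
        return False, "requires explicit owner approval"
    if not action["rollback_plan"]:
        return False, "no rollback plan defined"
    return True, "approved for automated execution"


allowed, reason = can_auto_remediate({
    "blast_radius": 3, "confidence": 0.95,
    "owner_approved": True, "change_class": "standard",
    "rollback_plan": True,
})
print(allowed, reason)
```

The point of the gate is the section's point in miniature: automation accelerates action only where visibility, ownership, and governance already make fast action safe.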
5. Revisit accepted risks and exceptions
Risk acceptances and patch deferrals often rely on implicit assumptions about exploit effort, operational likelihood, or the expected speed of attacker adaptation. Some of those assumptions may no longer hold.
Organizations should review:
6. Strengthen incident response readiness
Even with improved prevention and prioritization, incidents should be treated as inevitable, and organizations should expect them to occur more often. The effectiveness of a response depends on how well it has been defined, practiced, and adapted to a faster and more complex threat landscape.
Organizations should review:
Security leaders should consider taking the following steps now:

From an incident response standpoint, Mythos does not mean every organization will suddenly face a novel AI-native campaign tomorrow. It means that some of the friction defenders have historically benefited from is weakening, and programs that already struggle with coordination and decision latency may find those weaknesses exposed more quickly.
In real incidents, organizations rarely fail because they lack a scanner. They fail because they cannot rapidly determine whether a technical issue matters in their environment, cannot connect exposure to business consequence, cannot resolve ownership cleanly, or cannot coordinate action across infrastructure, engineering, identity, and business teams under time pressure. Exception debt, unclear compensating controls, fragmented asset context, and slow scope validation routinely become part of the problem.
That is why vulnerability management programs now need to evolve in two directions at once:
The right response is not to panic. It is program evolution:
Organizations that evolve their programs this way will be better prepared not only for Mythos, but for the broader class of model-assisted offensive capability that will follow.
Vulnerability Management Program Assessment Powered by Tonic
To help organizations prepare for compressed exploit timelines, Sygnia and Tonic provide an Exposure Readiness Assessment focused on how vulnerability management programs must evolve for AI-accelerated exploitation.
The assessment is designed to answer a practical question:
Is your vulnerability management program ready to identify, prioritize, and reduce the exposures that matter before exploit timelines outrun your response?
The assessment evaluates:
Asset Coverage
How much of the relevant enterprise attack surface is actually represented?
Finding Coverage
How complete, normalized, and usable is the finding corpus across scanners and exposure sources?
Context Coverage
How much of that scope is enriched with the business, organizational, operational, and adversarial context needed to make sound decisions?
Prioritization Readiness
Can the organization distinguish what is urgent in theory from what matters most in practice?
Remediation Readiness
Can the program translate prioritization into fast, coordinated, and appropriately governed action without losing control, accountability, or validation?
Program Bottlenecks
Where are the systemic weaknesses that will slow risk reduction when exploit timelines compress?
Deliverables
To assess whether your current vulnerability management program can keep pace with compressed exploit timelines, identify the bottlenecks between prioritization and action, and benchmark your readiness across context, ownership, and remediation coordination, schedule an Exposure Readiness Assessment with Sygnia and Tonic.