💡
Thesis: AI-driven vulnerability discovery will increase what clients expect from managed security. The answer is not more reports. It is continuous monitoring, faster remediation, and practical exposure reduction when patching cannot keep up.

MSPs and MSSPs have spent years helping clients answer the same vulnerability management questions from insurers, auditors, and boards: Do you scan? Do you patch? Can you prove it? For a long time, a periodic scan and a prioritized remediation report were enough to show that a vulnerability management program existed.

For years, vulnerability management followed a familiar rhythm: scan the environment, review the report, prioritize by severity, create tickets, patch what you can, and repeat next month or next quarter.

That rhythm is breaking.

Anthropic’s Project Glasswing and Claude Mythos Preview are a strong signal that vulnerability discovery has entered a new era. Anthropic says Mythos has already found thousands of high-severity zero-day vulnerabilities, including flaws across every major operating system and web browser. The broader point is not simply that one AI model found impressive bugs. It is that the cost, speed, and skill required to discover and exploit software weaknesses are changing dramatically.

For MSPs and MSSPs, the implication is immediate and clear. Vulnerability management can no longer be treated as a periodic reporting function. It has to become a continuous risk-reduction service.

To be fair, the industry was already struggling with vulnerability volume before Mythos entered the conversation. FIRST forecasts that 2026 will be the first year to exceed 50,000 published CVEs, with a median forecast of about 59,000 and realistic scenarios reaching 70,000 to 100,000 vulnerabilities. NIST has also moved the National Vulnerability Database to a risk-based enrichment model after CVE submissions increased 263% between 2020 and 2025, with early 2026 submissions nearly one-third higher than the same period the prior year.

But volume is only half the story. The bigger challenge is vulnerability velocity.

VulnCheck found that in the first half of 2025, 32.1% of known exploited vulnerabilities had exploitation evidence on or before the day the CVE was issued. In other words, for a meaningful portion of exploited vulnerabilities, defenders do not get a comfortable “detect, plan, patch” window. The race has already started before the advisory even becomes part of the normal vulnerability management workflow.

This creates two major implications for MSPs and MSSPs.

1. Continuous, real-time vulnerability monitoring is now critical

A monthly scan is no longer enough. In a world where vulnerabilities are discovered, weaponized, and exploited at machine speed, MSPs and MSSPs need continuous visibility across client environments.

That means always-on asset discovery, external exposure monitoring, endpoint visibility, cloud awareness, and real-time correlation with threat intelligence. It also means bringing in better prioritization signals, not just CVSS scores. CISA’s Known Exploited Vulnerabilities catalog identifies vulnerabilities already exploited in the wild, while FIRST’s EPSS estimates the probability that a published CVE will be exploited within the next 30 days. But even this is not enough over the long term.
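As a rough illustration, combining these signals can be as simple as ranking findings by KEV membership first, then EPSS probability, then CVSS. The sketch below is a minimal, assumption-laden example: the finding records, CVE identifiers, and field names are hypothetical, not the output of any particular scanner or feed.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    cvss: float   # base severity score (0-10)
    epss: float   # EPSS probability of exploitation in the next 30 days (0-1)
    in_kev: bool  # listed in CISA's Known Exploited Vulnerabilities catalog

def prioritize(findings: list[Finding]) -> list[Finding]:
    """Rank known-exploited first, then by exploitation probability, then severity."""
    return sorted(findings, key=lambda f: (f.in_kev, f.epss, f.cvss), reverse=True)

# Hypothetical findings: a high-EPSS medium outranks an unexploited critical
findings = [
    Finding("CVE-0000-0001", cvss=9.8, epss=0.02, in_kev=False),
    Finding("CVE-0000-0002", cvss=6.5, epss=0.91, in_kev=False),
    Finding("CVE-0000-0003", cvss=7.2, epss=0.40, in_kev=True),
]
for f in prioritize(findings):
    print(f.cve)
```

The design point is the sort order itself: active exploitation evidence beats predicted exploitation, which beats raw severity. A production version would pull KEV and EPSS from their live feeds rather than static fields.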

For service providers, this is a business model shift. The value is no longer “we ran a scan and sent you a report.” The value is “we are continuously watching your exposure, correlating it with active threat activity, and telling you what actually matters right now.”

That distinction is becoming even more important for SMB and mid-market clients. They do not have the staff to process hundreds or thousands of findings. They need their MSP or MSSP to convert vulnerability data into operational decisions: which systems are exposed, which vulnerabilities are being exploited, which assets matter most, which patches are safe to deploy, and where compensating controls are needed.

The winners in managed security will be the providers who can turn vulnerability intelligence into client-specific action.

2. The time from detection to remediation has to shrink

The second implication is even harder: detection is not enough.

Most organizations do not get breached because nobody knew a vulnerability existed. They get breached because the window between detection and remediation stayed open too long. For MSPs and MSSPs, that means vulnerability management must connect directly into remediation workflows: ticketing, patch deployment, client approval, maintenance windows, exception tracking, validation, and executive reporting.

But we also have to be honest about the patching speed barrier.

Patching is not always simple, especially for MSP and MSSP clients. Many environments include legacy systems, fragile line-of-business applications, third-party dependencies, remote users, unmanaged assets, and operational constraints. Some patches require testing. Some require downtime. Some vendors are slow. Some systems are end-of-life and cannot be patched cleanly at all.

That is why the next phase of vulnerability management cannot be patch-only. It has to include patchless protection and configuration-based risk reduction.

When vulnerability velocity outpaces patch velocity, configuration becomes the faster control plane.

That may mean disabling a vulnerable service, changing exposed ports, tightening access controls, applying a WAF or IPS rule, enforcing MFA, segmenting a system, blocking exploit paths, removing internet exposure, hardening identity permissions, or using EDR/MDR controls to detect and contain exploitation attempts. These are not replacements for patching, but they are essential ways to reduce risk while patching catches up.
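A provider's playbook for these stop-gap measures can be expressed as simple rules that map a finding's exposure attributes to candidate compensating controls while the patch is pending. The sketch below is illustrative only; the attributes and control names are assumptions, not a standard taxonomy.

```python
def compensating_controls(internet_facing: bool, service_required: bool,
                          patch_available: bool) -> list[str]:
    """Suggest stop-gap controls for a vulnerable system while patching lags."""
    controls = []
    if internet_facing:
        controls.append("remove or restrict internet exposure (firewall/WAF rule)")
    if not service_required:
        controls.append("disable the vulnerable service")
    else:
        controls.append("segment the host and tighten access controls")
    if not patch_available:
        controls.append("enable EDR/MDR detection for exploitation attempts")
    return controls

# Hypothetical end-of-life, internet-exposed system whose service must stay up
for c in compensating_controls(internet_facing=True, service_required=True,
                               patch_available=False):
    print("-", c)
```

Even a rules table this simple turns "we can't patch yet" into a documented, repeatable risk-reduction step rather than an open exception.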

Put simply: when you cannot patch fast enough, you still need to reduce exposure immediately.

This is where MSPs and MSSPs can create real differentiation. A client does not need another report that says “we found lots of critical vulnerabilities.” They need a partner who can answer: “What can we do today to reduce the likelihood of compromise, even before the patch is fully deployed?”

That is the difference between vulnerability management and exposure management.

Project Glasswing should be viewed as a preview of where the market is heading. AI will increase vulnerability discovery. Attackers will benefit from faster research, faster exploit development, and faster chaining of weaknesses. Defenders are still playing catch-up, but they can benefit too, provided they modernize their operating model.

For MSPs and MSSPs, the path forward is clear: move from periodic scanning to continuous monitoring, from static severity scores to real-time threat intelligence, from reporting to remediation, and from patch-only thinking to patch-plus-protection.

The future of vulnerability management will not be won by whoever finds the most issues. It will be won by whoever helps clients close the exposure window fastest.

That means fixing what can be fixed, protecting what cannot be patched immediately, and proving that risk is going down.