- Mythos and Project Glasswing will help harden the world’s code at AI speed. That’s real progress, and the teams doing it deserve a lot of credit.
- Conversely, it also speaks to the fact that models like this are already, and will increasingly be, in the hands of attackers, who will use them to exploit countless vulnerabilities at AI speed.
- The reality is that you must assume vulnerabilities will always exist in production and will always be exploited. That is nothing new. What is new is the pace: exploitation, and ideally finding and fixing, will both be faster.
- Hence, let’s not lose sight of a truth that has always held and matters even more now: defending is never about one layer. It is trite to say, but a defense-in-depth philosophy applies now more than ever.
- One of the most important layers and platforms is real-time governance and security that understands the language of the AI and cloud world. It is more important than ever, because every agent must be zero-trusted, just as human users are in a true least-privilege modern architecture.
- Ultimately, attackers want your data. This is a data, identity, and real-time transaction and traffic problem in addition to a code problem. It calls for open, converged platforms built for how work actually happens in 2026 – we have never believed there is one platform for all of security and networking.
- In recent Netskope Threat Labs analysis, roughly one in four enterprises had zero policies restricting AI data flow – even as the median organization now runs 60 distinct AI apps.
What Mythos doesn’t touch
From our vantage point as the global inline inspection point for AI (and all other) traffic across the enterprise, processing trillions of transactions, a clear picture of the current state has come into focus. The median enterprise is now running 60 distinct AI applications; power users are running more than 500. For every gigabyte their people upload to AI tools, they download 4.3 gigabytes back. AI has become a net information-generation engine – and the risk is concentrated in what comes back, in addition to what goes in.
Eighty percent of the generative AI apps we score rate Poor for enterprise security on our Cloud Confidence Index. Roughly one in four enterprises have no real-time AI governance policies in place at all.
That is the surface a CISO actually defends on a Monday morning. It is not just the Linux kernel.
Anthropic’s Mythos and the Project Glasswing consortium will do real, important work on the world’s upstream software, including operating systems, browsers, and critical open-source libraries. The patches will flow to every enterprise eventually, and we’ll benefit alongside everyone else. On the other side, current and future models will find these vulnerabilities for attackers and stitch them together into exploits extremely fast.
However, this model and the ones that follow will make an entire massive and growing area of security and networking – the area where Netskope operates – even more critical.
The vector that matters
Picture this, because it is already happening in your environment right now.
An employee installs an AI-powered assistant that summarizes their email, prioritizes their calendar, and drafts their CRM updates. The assistant wires into those systems through the Model Context Protocol (MCP) – the standard quickly becoming the default way agents are connected to enterprise software. Every connection is authorized by the user. Every request authenticates as the user. Every API call lands in a sanctioned destination.
Nothing looks wrong to the endpoint, the SaaS provider, or your identity stack. And yet sensitive data is flowing into an agent your security team has never reviewed, operating at the privilege level of one of your people, subject to the judgment of a non-deterministic model that can hallucinate, be prompt-injected, or simply make a mistake.
A patched browser doesn’t see this. A patched OS doesn’t stop it. A scanner looking for zero-days in open-source libraries would never flag it. This is not a code vulnerability. It is an identity-and-data problem riding authenticated traffic to destinations you have already approved.
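To make that pattern concrete, here is a minimal sketch, in Python, of what such an agent’s traffic looks like on the wire. MCP messages are JSON-RPC 2.0, and a tool invocation travels as an ordinary `tools/call` request under the employee’s own delegated credentials. The connector host, tool name, and token value below are invented for illustration; the point is that nothing in the request is malformed or anomalous for a signature- or patch-based control to flag.

```python
import json

# Hypothetical illustration: an AI assistant invoking a tool on a sanctioned
# CRM connector over MCP. The tool name ("export_contacts"), arguments, and
# token are invented; the JSON-RPC shape and "tools/call" method are standard.
user_oauth_token = "eyJhbGciOi..."  # the employee's own delegated token (truncated placeholder)

mcp_request = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "tools/call",          # standard MCP method for tool invocation
    "params": {
        "name": "export_contacts",   # hypothetical tool exposed by the connector
        "arguments": {"segment": "all", "fields": ["name", "email", "notes"]},
    },
}

headers = {
    "Authorization": f"Bearer {user_oauth_token}",  # authenticates *as the user*
    "Content-Type": "application/json",
}

# An endpoint agent, the SaaS provider, and the identity stack all see a
# valid, authorized request to an approved destination.
print(json.dumps(mcp_request, indent=2))
```

Seen in isolation, this request is indistinguishable from the user doing their job; only inline inspection of what data is actually moving, and on whose behalf, can tell the difference.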
Our report flagged MCP directly as an emerging concern, and it is the reason our definition of agentic AI matters now: autonomous systems that execute multi-step tasks via APIs, without direct human intervention per action. When Mythos-class capabilities proliferate – and Anthropic has said plainly they will within 6 to 18 months – the agents arriving inside the enterprise won’t be attacking your edge. They will already be inside, authorized as your users.
For the CIO: this is the productivity moment, too
Every CIO I talk to is being asked the same question by their board right now: “How are we safely adopting AI?”
The wrong answer is “We’re blocking it.” Blocking AI outright loses the talent war, loses the productivity battle, and loses the CEO’s patience. The leaders winning this cycle are answering “Yes, with guardrails”: yes to ChatGPT, yes to Claude Code, yes to Copilot, yes to agents, yes to employees doing the creative work they want to do, inside inline controls that classify behavior and data in motion and enforce contextual policy at the transaction layer, without breaking the end user or agentic experience (rather, accelerating it).
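As a sketch of what “yes, with guardrails” can mean at the transaction layer, here is a deliberately simplified, hypothetical policy check. The field names, risk labels, and verdicts are invented for illustration; a real inline policy engine evaluates far richer context (user, device, app risk score, data classification, activity) per transaction.

```python
# Hypothetical sketch of a contextual, transaction-layer policy decision:
# allow AI traffic by default, but gate on app risk and data classification.
# All names and categories below are invented for illustration.

def evaluate(txn: dict) -> str:
    """Return a verdict for one AI transaction: block, redact, or allow."""
    if txn["app_risk"] == "poor" and txn["data_class"] in {"pii", "source_code"}:
        return "block"           # sensitive data to a poorly rated app: stop it
    if txn["data_class"] == "pii":
        return "redact"          # coach/redact rather than block outright
    return "allow"               # the default posture is yes, not no

print(evaluate({"app_risk": "poor", "data_class": "pii"}))      # block
print(evaluate({"app_risk": "good", "data_class": "pii"}))      # redact
print(evaluate({"app_risk": "good", "data_class": "public"}))   # allow
```

The design point is that the verdict is contextual and per-transaction, so the same app can be allowed for one flow and redacted or blocked for another, which is what keeps the end-user experience intact.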
That answer also aligns with where security and networking are heading anyway. The architectures carrying CIOs forward are the ones that let them set policy once, see activity once, and govern identity, data, and traffic as one fabric, so that enabling AI doesn’t mean multiplying the number of places they have to look. When platform convergence and AI enablement compound, the ROI shows up on both the security and the business ledger.
Three questions to ask your team today
1. How many distinct AI destinations saw traffic from our network last week – and what data went to them? If you can’t answer in a single query, you have a visibility problem before you have a risk problem. The median enterprise we observe is running 60 AI apps. Do you know yours?
2. Do our data controls apply to prompts and responses, not just to file uploads? Most legacy DLP was built for attachments and copy-paste events. Prompts are a different modality – and with a 4.3:1 download-to-upload ratio on AI traffic, the risk is concentrated exactly where most policies don’t look.
3. Which of our SaaS or cloud apps have agent or MCP capabilities enabled – and under whose identity is the agent operating? If this question doesn’t have a clean answer today, it will be a board-level question within six months.
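To illustrate what “a single query” for question 1 might look like, here is a hedged sketch over a hypothetical normalized traffic log. The schema (`dest_app`, `dest_category`, `bytes_out`) and the sample rows are invented; the shape of the answer, distinct AI destinations plus data volume per destination, is the point.

```python
from collections import Counter

# Hypothetical normalized proxy/SWG log records; schema and values invented.
log = [
    {"dest_app": "chatgpt.com", "dest_category": "genai", "bytes_out": 18_000},
    {"dest_app": "claude.ai",   "dest_category": "genai", "bytes_out": 7_500},
    {"dest_app": "chatgpt.com", "dest_category": "genai", "bytes_out": 4_200},
    {"dest_app": "github.com",  "dest_category": "dev",   "bytes_out": 90_000},
]

# One pass answers both halves of the question: how many distinct AI
# destinations, and how much data went to each.
ai_rows = [r for r in log if r["dest_category"] == "genai"]
distinct_ai_apps = {r["dest_app"] for r in ai_rows}
bytes_per_app = Counter()
for r in ai_rows:
    bytes_per_app[r["dest_app"]] += r["bytes_out"]

print(len(distinct_ai_apps))   # count of distinct AI destinations
print(dict(bytes_per_app))     # outbound bytes per AI destination
```

If assembling even this toy answer requires joining data from several consoles, that is the visibility problem the question is designed to surface.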
The long view
Project Glasswing will harden the world’s software. That’s real progress. But when a CISO asks the next, harder question – What happens when a Mythos-class model is running inside our enterprise, acting on our data, making decisions on behalf of our employees? – there is no patch for that coming.
That answer lives in the real-time world: in the network, in the dynamic communications, in the platform, in the transaction and data layer. And it has to be live and ready before the models, agents, users, robots, or anything else get there, not after.
Anthropic recognizes the importance of the right industry leaders working on these challenges together, and we look forward to continued collaboration with them and with other frontier AI developers. As Anthropic put it, “Project Glasswing is a starting point.” No single organization can solve what’s ahead alone. Frontier AI developers, software companies, security researchers, open-source maintainers, and governments across the world all have essential roles to play.
At Netskope, we specialize in security and networking for the modern enterprise, reimagined for a world where the most capable agent on your network may be AI you didn’t deploy. That’s the work. That’s always been the work. And we’re rolling up our sleeves, every day.