Anthropic built its most powerful model yet, then decided the public couldn’t have it. Here’s what Claude Mythos actually does, why it’s restricted, and what this new era of gated AI means for your work.
TL;DR
On April 7, 2026, Anthropic unveiled Claude Mythos: its most powerful AI model yet. The catch? You can’t use it. Mythos discovered thousands of previously unknown security vulnerabilities in major operating systems on its own, making it too dangerous for public release. Instead, Anthropic launched Project Glasswing, a consortium of 12 companies including Amazon, Apple, and Microsoft, to deploy Mythos in a tightly controlled way. For professionals, this marks the start of a new era: not all AI capabilities will be freely available, and understanding why matters for your AI strategy.
Here’s something that doesn’t happen often in tech: a company builds a product and decides the world isn’t ready for it.
That’s roughly what Anthropic did on April 7, 2026, when it announced Claude Mythos Preview. [1] Described internally as “a step change” in AI capability, Mythos is the most advanced model Anthropic has ever built. It can reason through complex problems, write sophisticated code, and operate autonomously across long, multi-step tasks. And it found thousands of serious security vulnerabilities in virtually every major piece of software humans rely on, working largely on its own, in a fraction of the time human researchers would need.
Anthropic’s conclusion: this one stays gated.
If you’ve been following AI news for the past few years, you’re used to the pattern. A new model launches, it’s more capable than the last one, everyone gets access within a week. Mythos breaks that pattern entirely. And understanding why tells you a lot about where AI is actually heading.
Let me explain what “finding zero-day vulnerabilities” means, because it’s the detail that makes this story significant. A zero-day vulnerability is a security flaw in software that the software’s developer doesn’t know about yet. The name refers to the fact that developers have had zero days to work on a fix.
Finding these flaws is hard, slow, and expensive work. It typically requires teams of expert security researchers, sometimes working for months or years. It’s why major companies run “bug bounty” programmes, paying hackers thousands of dollars per vulnerability discovered.
Claude Mythos discovered thousands of these vulnerabilities across every major operating system and every major web browser, plus a wide range of other critical software. [2] Not over months or years, and not with teams of researchers: autonomously, as a demonstration of what it could do.
Two examples stand out. Mythos identified a 27-year-old vulnerability in OpenBSD, a security-focused operating system that has had expert eyes on its codebase for nearly three decades. It also found a 16-year-old flaw in FFmpeg, a widely used media processing library, that automated testing had failed to catch across five million test runs. [3]
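The FFmpeg detail hints at why conventional automated testing can miss long-lived bugs: random fuzzing only finds a flaw if it happens to generate an input that triggers it, and some bugs fire only under very narrow conditions. Here is a minimal, hypothetical Python sketch of that idea. The parser, its byte layout, and the bug are all invented for illustration; they are not from FFmpeg or any real codebase.

```python
import random

def parse_record(data: bytes) -> int:
    """Toy parser: a two-byte header followed by up to three payload bytes.

    It carries a deliberately narrow bug: when the flag byte is 0x7F AND the
    declared length is exactly 3, an off-by-one reads one byte past the
    payload. (Purely illustrative -- invented for this sketch.)
    """
    length = data[0] % 4               # declared payload length, 0..3
    if data[1] == 0x7F and length == 3:
        length = 4                     # the bug: off-by-one in a rare flag case
    total = 0
    for i in range(length):
        total += data[2 + i]           # raises IndexError when the bug fires
    return total

def random_fuzz(runs: int, seed: int = 0) -> int:
    """Throw random 5-byte inputs at the parser; count out-of-bounds crashes."""
    rng = random.Random(seed)
    crashes = 0
    for _ in range(runs):
        data = bytes(rng.randrange(256) for _ in range(5))
        try:
            parse_record(data)
        except IndexError:
            crashes += 1
    return crashes
```

Only about one random input in a thousand hits the flag-and-length combination here, so a short fuzzing run will usually report zero crashes even though the bug is real. A system that reasons about the code's branches, rather than throwing random inputs at it, can construct the triggering input directly, which is the capability gap the FFmpeg example illustrates.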
This is genuinely impressive defensive work. The problem is that the same capability cuts both ways. An attacker with access to Mythos could use it to identify vulnerabilities and build exploits at a scale and speed that would be unprecedented. Anthropic didn’t want to create that risk.
Why this matters for non-technical professionals: The security of every app, platform, and digital service you use depends on humans finding flaws faster than attackers do. Mythos changes that equation significantly in both directions. It could make your software dramatically safer, or dramatically more exposed, depending entirely on who has access to it and under what conditions.
Rather than shelving Mythos entirely or releasing it openly, Anthropic found a third path: a tightly controlled consortium called Project Glasswing.
The 12 founding partners include Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, and Palo Alto Networks, along with financial institutions and government stakeholders. [4] These organisations have agreed to strict usage terms, auditing, and oversight conditions in exchange for access to Mythos for defensive purposes only.
The basic idea: let the organisations responsible for the world’s most critical software use Mythos to find their own vulnerabilities before attackers can. Patch them. Make infrastructure safer. Don’t let the same model walk out the door and into less careful hands.
It’s worth noting that Anthropic is also in active discussions with the EU about controlled access to Mythos in Europe. [5] This isn’t a forever-gated model. It’s a staged rollout with governance guardrails, which is a meaningfully different approach to AI deployment than we’ve seen before.
Here’s the part most business commentary has glossed over. Mythos isn’t an anomaly. It’s an early indicator of where the entire AI market is heading.
For the past few years, the general assumption has been that AI capabilities cascade downward: frontier labs build powerful models, then make them available to developers and businesses, who build products on top. Everyone eventually gets access to roughly the same underlying capability.
That assumption is starting to break. Mythos is one example. OpenAI’s GPT-5.5-Cyber, a restricted cybersecurity model released days after Mythos, is another. [6] Both companies, within a week of each other, shipped AI capabilities that they explicitly decided not to make generally available.
What’s emerging is a two-tier structure: general-use models that anyone can access for everyday work, and restricted high-capability models deployed only to vetted organisations under governance and auditing conditions.
This isn’t a temporary situation. As AI gets more capable, the gap between “general use” and “restricted use” categories is likely to widen, not shrink.
If your role doesn’t involve cybersecurity or frontier AI development, you might be wondering why any of this matters to you. It matters for several reasons.
If you’re evaluating AI tools for your team or business, you’re going to start seeing claims about “enterprise-grade” or “secured” AI capabilities more frequently. Some of these will be legitimate (think: Anthropic working with vetted partners under strict conditions). Others will be marketing language attached to standard models.
Understanding that genuinely restricted-access AI exists helps you ask better questions: What model are you actually using? Under what terms? Who has audited the deployment? These aren’t paranoid questions. They’re the kind of due diligence that good AI procurement requires.
Every major operating system and browser represented in the Glasswing consortium will have access to vulnerability-finding capabilities that didn’t exist six months ago. The patches that result from Mythos’s analysis will quietly make their way into the Windows updates, macOS updates, Chrome updates, and iOS updates you install. This is genuinely good news, even if it never makes a headline.
There’s been a lot of talk about AI governance, safety boards, and responsible deployment. Project Glasswing is one of the first real examples of what that actually looks like in practice: a structured, audited, legally bound consortium with defined use cases and oversight mechanisms. Understanding this model helps you think about what AI governance should look like inside your own organisation, particularly if you’re in a regulated industry. We covered the EU AI Act compliance requirements in detail in our compliance guide, and the Mythos situation reinforces why those frameworks exist.
A question many readers will have: does any of this affect the regular Claude you use at work? Short answer: no. The Claude available through claude.ai, the Claude API, and business integrations like the one Anthropic built into Salesforce is an entirely separate system from Claude Mythos. Your day-to-day Claude experience is unaffected by any of this.
What Anthropic is doing is essentially running two parallel operations. One is the consumer and enterprise AI business: Claude 3.x models, regular updates, broad access. The other is frontier research and security work, of which Mythos is currently the most visible example.
The useful framing here: Mythos is more like a highly classified research tool than a product. It runs under controlled conditions, for specific defensive purposes, with heavy oversight. It’s not being integrated into any Anthropic product that regular users or businesses can access.
One thing worth noting: Reco.ai (a cloud security firm) made an interesting observation in its analysis. While Mythos itself is restricted, Claude models are already embedded in enterprise software like Salesforce, quietly doing knowledge work inside business applications. [7] That everyday Claude integration is a different product, running at a different capability level.
There’s a temptation to read the Claude Mythos story as a cautionary tale: AI is getting too powerful, even its creators are scared of it. That reading isn’t quite right.
What actually happened is more interesting. Anthropic built something extraordinarily capable, conducted rigorous internal evaluations, shared results with the UK’s AI Safety Institute for independent assessment, [8] and then made a deliberate decision about deployment. They didn’t panic. They planned. That’s what responsible AI development actually looks like, and it’s worth recognising as meaningful progress, even if the headline version sounds alarming.
For business professionals, the practical takeaway is this: the AI tools you can freely access are going to remain powerful and keep improving. But the most capable AI capabilities will increasingly exist in a different category, deployed through structured governance frameworks rather than open access. That two-tier reality shapes how you should think about AI strategy, procurement, and competitive positioning over the next few years.
The companies that understand this distinction early are the ones who’ll be able to have the right conversations with vendors, navigate AI governance requirements with confidence, and build AI strategies that hold up over time.
What is Claude Mythos?
Claude Mythos is Anthropic’s most powerful AI model to date, announced on April 7, 2026. Anthropic judged its ability to find and exploit cybersecurity vulnerabilities too dangerous for a public release. Instead, it is being deployed through a controlled consortium called Project Glasswing, limited to 12 vetted partner organisations.
Why won’t Anthropic release Claude Mythos to the public?
Claude Mythos can autonomously discover thousands of previously unknown (zero-day) security vulnerabilities in major operating systems and browsers. Anthropic determined that making it widely available would create unacceptable risk, since attackers could use it to build cyberattacks at unprecedented scale and speed.
What is Project Glasswing?
Project Glasswing is Anthropic’s controlled deployment programme for Claude Mythos. Its 12 members, among them Amazon, Apple, Cisco, Microsoft, and CrowdStrike, span major technology and financial companies. These organisations use the model to find and patch vulnerabilities in critical software before attackers can exploit them.
Does Claude Mythos affect the regular Claude I use at work?
No. The Claude available through claude.ai, the API, and business integrations is a separate, publicly available product. Claude Mythos is a restricted research and security model running under controlled conditions. Your daily Claude experience is completely unaffected.
What does the Mythos situation mean for AI at work in general?
It signals a new era where the most powerful AI capabilities are governed rather than freely distributed. Business professionals should expect more tiered access to AI models: general-use versions for everyday work, and restricted high-capability versions for vetted contexts. Understanding this distinction matters for AI strategy and vendor evaluation.
Sources