Date: April 8, 2026. Anthropic has just done something unprecedented in the history of AI: it introduced a model that it publicly declares too dangerous for general release, while simultaneously deploying it in practice with a select group of partners. It's called Claude Mythos and comes as part of an initiative named Project Glasswing.
What is Project Glasswing?
Project Glasswing is a multifaceted initiative aimed at leveraging the most advanced AI capabilities to defend critical software infrastructure. The world's largest technology and security companies have joined the project, among them Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Microsoft, and Nvidia, along with over forty other organizations responsible for key digital infrastructure.
It is essentially an organized, controlled attempt to demonstrate how AI can function as a defensive tool — while transparently highlighting the risks it brings. Anthropic does not conceal what Mythos can do. On the contrary: it describes its capabilities very openly, precisely so that the industry understands how serious the situation is becoming.
Claude Mythos: what the model can do
Claude Mythos Preview is a model with exceptional capabilities in cybersecurity. On Anthropic's own benchmark, CyberGym, Mythos scored 83.1% on reproducing and exploiting vulnerabilities, against 66.6% for its predecessor, Claude Opus 4.6. In practice, that number means the model can independently, without human guidance, sift through millions of lines of code, find weaknesses, and write working exploits.
In recent weeks alone, Mythos Preview has identified thousands of zero-day vulnerabilities in all major operating systems and web browsers. Among the most notable findings:
- A 27-year-old vulnerability in OpenBSD, a hardened operating system considered one of the most secure in the world. The flaw, in TCP SACK packet handling, had existed since 1999.
- A 16-year-old bug in FFmpeg. Automated tools had missed it across hundreds of millions of test runs, but Mythos found it by analyzing the code.
- A 17-year-old vulnerability in FreeBSD NFS allowing remote code execution, found fully autonomously and without any hints.
- Chaining of vulnerabilities in the Linux kernel leading to privilege escalation.
- Complex exploits in browsers combining JIT heap spray with cross-origin policy bypasses.
The comparison with its predecessor is striking: in tests of JavaScript exploits against Firefox 147, Mythos achieved 181 successful attempts where Opus 4.6 managed only 2. In the OSS-Fuzz test, Mythos found 595 program crashes against roughly 250 for Opus. The model also handles reverse engineering of closed-source code and independently carries an exploit from the initial analysis phase all the way to a working attack tool.
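The OSS-Fuzz figures above count program crashes surfaced by fuzzing: throwing large volumes of semi-random input at a program until it fails in an unexpected way. A minimal sketch of the idea in Python, with a deliberately buggy toy parser (all names and the bug itself are hypothetical, chosen only to illustrate the class of length-handling flaws mentioned above):

```python
import random


def parse_record(data: bytes) -> int:
    """Toy parser with a deliberate flaw of the kind fuzzers catch:
    it trusts a length byte in the input instead of validating it."""
    if len(data) < 2:
        raise ValueError("record too short")
    declared_len = data[0]
    payload = data[1:1 + declared_len]
    # Bug: no check that declared_len matches the actual payload size,
    # so crafted inputs trigger an out-of-range access below.
    return payload[declared_len - 1]


def fuzz(trials: int = 10_000, seed: int = 0) -> list[bytes]:
    """Throw random inputs at the parser and collect crashing cases."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        try:
            parse_record(data)
        except ValueError:
            pass  # expected, graceful rejection
        except IndexError:
            crashes.append(data)  # unexpected failure: a "finding"
    return crashes


if __name__ == "__main__":
    found = fuzz()
    print(f"{len(found)} crashing inputs out of 10000 trials")
```

Production fuzzers such as OSS-Fuzz are coverage-guided rather than purely random, but each crash they record corresponds to this kind of unexpected failure on attacker-controllable input.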
Why the public won't get the model
This is the core of the whole story. Anthropic explicitly states: Claude Mythos in its current form will not be publicly available. The reason? The model poses "unprecedented cyber risks" and could dramatically increase the capabilities of attackers to carry out large-scale AI-driven cyberattacks.
It's an admission that has few parallels in the tech industry. The company says: we've created something so powerful that we can't just release it into the world. And this is despite the fact that the model was primarily developed for defense.
Another important detail: 99% of the vulnerabilities Mythos has found so far have not yet been patched. Responsible disclosure is ongoing — but the grim reality speaks for itself.
Funding and access
Anthropic is investing significant resources in the initiative. It is offering companies and organizations involved in the project 100 million dollars in credits for using the model. At the same time, it is donating 2.5 million dollars to Alpha-Omega and the OpenSSF through the Linux Foundation, and another 1.5 million dollars to the Apache Software Foundation.
After the preview phase ends, Mythos will be available via Claude API, Amazon Bedrock, Google Vertex AI, and Microsoft Foundry at a price of 25 dollars per million input tokens and 125 dollars per million output tokens — with access limited only to vetted organizations managing critical infrastructure.
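At those prices, the cost of a large analysis run is easy to estimate. A minimal sketch, where only the per-million-token prices come from the announcement and the token counts in the example are hypothetical:

```python
# Prices from the announcement, in USD per million tokens.
INPUT_PRICE_PER_M = 25.0
OUTPUT_PRICE_PER_M = 125.0


def run_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated API cost in USD for a single analysis run."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
        + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M


# Hypothetical example: feeding ~2 million tokens of source code
# and receiving ~100,000 tokens of analysis back.
print(run_cost(2_000_000, 100_000))  # → 62.5
```

Even auditing a very large codebase would cost tens to hundreds of dollars per pass, which explains why the 100 million dollars in credits goes a long way for the participating organizations.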
Anthropic has promised to publish a report on patched vulnerabilities and recommendations for the security industry within 90 days.
What this means for cybersecurity — and for us
Project Glasswing is effectively an answer to a question that has been hanging in the air in the security community for at least two years: what happens when AI surpasses the capabilities of human hackers? Anthropic suggests that moment has arrived. The model "outperforms all but the most capable human experts in finding and exploiting vulnerabilities" — those are literally their own words.
For the Czech and European scene, this has concrete implications. The EU AI Act, whose first provisions began applying in February 2025, does not yet explicitly regulate models deployed in closed security programs of this type. But Project Glasswing is precisely the kind of high-risk deployment that European regulators will scrutinize in the near future, and the question of governance for such models will soon be on the agenda in Brussels and Prague.
For companies and organizations managing critical infrastructure in the Czech Republic, the project is not currently accessible: no European partners have been named, beyond global giants such as Google or Microsoft. Anthropic has, however, announced plans to expand the circle of participating organizations.
Mythos as a signal, not just a tool
It's easy to see Claude Mythos as just a technical product. But its true impact is symbolic: for the first time in AI history, a company voluntarily acknowledged that it created a model that must not be freely available — and instead of concealing it, openly described it, engaged the industry, and called for preparation. Glasswing is not just a security program. It is a signal that the era of AI as both a defensive and offensive tool in cybersecurity has truly begun. And those who do not understand this in time may soon be looking for holes in their code — alone, without AI, and with a delay.
Will Claude Mythos ever be available to regular users or companies in the Czech Republic?
In its current form, Anthropic plans to make Mythos available only to selected organizations responsible for critical infrastructure. In the future, the company wants to "safely deploy Mythos-class models at a larger scale" — but only after sufficient security guarantees are in place. For Czech companies, no direct access has been announced yet.
How much more dangerous is Claude Mythos than commonly available AI models?
The CyberGym benchmark shows a score of 83.1% compared to 66.6% for Claude Opus 4.6 — but numbers don't tell the whole story. The key difference is autonomy: Mythos doesn't need human guidance; it independently sifts through code, identifies vulnerabilities, and writes functional exploits. In Firefox 147, for example, it managed 181 successful exploits where Opus 4.6 only managed 2. This is a qualitative leap, not just a quantitative one.
What is a zero-day vulnerability and why is it so dangerous that AI finds them automatically?
A zero-day is a security flaw in software that the vendor does not yet know about, so no patch exists for it. An attacker who knows about it can strike before the defense has any chance to react. If AI finds thousands of such flaws autonomously, the balance of power between attackers and defenders changes radically: finding a zero-day used to take months of expert work; with AI it can take hours.