Anthropic Mythos AI Found 2,000+ Vulnerabilities and Sparked a Global Scramble
Anthropic's Claude Mythos Preview found over 2,000 zero‑day vulnerabilities in seven weeks, including bugs dating back 27 years. The model is too dangerous for public release — but a Discord group already leaked it, and governments worldwide are racing to respond.
What Mythos Found in Seven Weeks
Anthropic's Claude Mythos Preview — a general‑purpose AI model the company deems too dangerous for public release — found more than 2,000 previously unknown software vulnerabilities in just seven weeks of testing. That haul equals roughly 30% of the world's entire annual output of discovered vulnerabilities before AI entered the picture, according to Fox News.
The model discovered bugs in every major operating system and every major web browser, Anthropic announced on its official Mythos assessment page. Some vulnerabilities had survived decades of human review and millions of automated tests. A 27‑year‑old bug in OpenBSD — an operating system famous for its security focus — allowed remote crashes of any machine running it. A 16‑year‑old flaw in FFmpeg had been hit by automated testing tools 5 million times without detection.
John Ackerly, CEO of data security company Virtru, put it plainly to Fox News: "Mythos didn't pick a lock; it found thousands of locks that were never locked in the first place that no one even knew existed."
Why Anthropic Won't Release It
Unlike every other model Anthropic has shipped, Mythos Preview will not be made generally available. Instead, the company is restricting access to roughly 50 vetted organizations, including Microsoft, Apple, Amazon Web Services, CrowdStrike, Google, JPMorgan Chase, and NVIDIA, as part of a new initiative called Project Glasswing.
Anthropic's reasoning: the same capability that finds vulnerabilities for defenders can find them for attackers — and the expertise barrier is collapsing. As Ackerly explained to Fox News, "a person with bad intentions and no technical background could potentially use it to cause serious damage."
The model can also chain individually minor flaws into complete attack sequences. According to CNA, it can connect bugs the way a burglar plans a break‑in: "finding that first open window, using it to unlock a door from the inside and then disabling the alarm." Anthropic's own testing showed Mythos autonomously chained Linux kernel vulnerabilities to escalate from ordinary user access to complete machine control.
Over 99% of the vulnerabilities Mythos found remain unpatched, which is why Anthropic is using cryptographic SHA‑3 commitments instead of full disclosure — details will only be revealed after the coordinated vulnerability disclosure process completes.
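Anthropic has not published the exact format of these commitments, but the general idea of a hash commitment is simple: publish the digest of a disclosure now, reveal the full text later, and anyone can verify the two match. A minimal sketch, assuming a salted SHA‑3 construction (function names here are illustrative, not Anthropic's):

```python
import hashlib
import secrets

def commit(disclosure_text: str) -> tuple[str, bytes]:
    """Publish only the digest now; keep the text and salt private until patching."""
    salt = secrets.token_bytes(32)  # random salt prevents brute-forcing short messages
    digest = hashlib.sha3_256(salt + disclosure_text.encode()).hexdigest()
    return digest, salt

def verify(disclosure_text: str, salt: bytes, digest: str) -> bool:
    """Anyone can later confirm the published digest matches the revealed report."""
    return hashlib.sha3_256(salt + disclosure_text.encode()).hexdigest() == digest

# Illustrative use: commit today, reveal and verify after the patch ships.
digest, salt = commit("CVE details: heap overflow in parser X, exploitable via ...")
assert verify("CVE details: heap overflow in parser X, exploitable via ...", salt, digest)
assert not verify("a tampered report", salt, digest)
```

The salt matters: without it, anyone could guess-and-hash plausible vulnerability descriptions against the published digests.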
How Mythos Compares to Previous Models
The jump from Anthropic's previous top model to Mythos is not incremental. According to Anthropic's technical assessment, Mythos Preview scored 83.1% on CyberGym (vulnerability reproduction) versus Opus 4.6's 66.6%. On Firefox JavaScript engine exploits specifically, Opus 4.6 produced 2 working exploits out of several hundred attempts. Mythos Preview produced 181 working exploits plus 29 register controls.
On the OSS‑Fuzz benchmark, Mythos found 10 full control‑flow hijacks on fully patched targets at Tier 5 severity — the most dangerous category. Opus 4.6 and Sonnet 4.6 found zero at that tier. Mythos also demonstrated the ability to reverse‑engineer closed‑source binaries without source code access, making legacy systems with lost or forgotten source code vulnerable too, according to IBM.
Notably, Anthropic says these cybersecurity capabilities were not explicitly trained. "We did not explicitly train Mythos Preview to have these capabilities," the company wrote. "Rather, they emerged as a downstream consequence of general improvements in code, reasoning, and autonomy." That's the part that should worry builders — the security capabilities are a side effect of being better at coding generally.
- SWE‑bench Verified (standard coding benchmark): Mythos 93.9% vs Opus 4.6 80.8% — a 13‑point gap
- Terminal‑Bench 2.0 (agentic coding tasks): Mythos 82.0% vs Opus 4.6 65.4%
- GPQA Diamond (graduate‑level reasoning): Mythos 94.6% vs Opus 4.6 91.3%
- CyberGym (vulnerability reproduction): Mythos 83.1% vs Opus 4.6 66.6%
The Leak That Wasn't a Hack
Within hours of Anthropic's public announcement, a small group of users on a private Discord server gained unauthorized access to Mythos. According to Fortune, the breach wasn't a sophisticated cyberattack — one group member was a third‑party contractor for Anthropic who already had legitimate access to the model. The group used previously leaked knowledge from AI training startup Mercor to guess the model's location.
The group has continued using Mythos but reportedly avoided launching cyberattacks to stay under the radar. Anthropic confirmed it is investigating the unauthorized access, telling the BBC it came "through one of our third‑party vendor environments."
David Lindner, CISO at Contrast Security, told Fortune the leak was inevitable: "The more they add to this elite group, the more likely it was to get released to someone who shouldn't probably have access to it." His sharper warning: "If some random Discord online forum got access to it, it's already been breached by China."
Governments Worldwide Are Racing to Respond
The Mythos release has triggered an extraordinary wave of government action across three continents. Japan's Finance Minister Satsuki Katayama announced a task force to address cybersecurity risks in the financial sector, telling the Straits Times: "We face a crisis unfolding right in front of us." The task force includes Bank of Japan Governor Kazuo Ueda and top executives of major Japanese banks.
In the UK, the government is negotiating with Anthropic to become the only government outside the US to receive a Mythos preview, delivered via the AI Security Institute, according to Finextra. UK technology minister Liz Kendall and security minister Dan Jarvis wrote an open letter to business leaders about AI cyber threats prompted by the Mythos release.
US Treasury Secretary Scott Bessent summoned Wall Street banking CEOs on April 8, 2026, to brief them on Mythos risks. Germany's Bundesbank chief Joachim Nagel called the model a "double‑edged sword," warning that all relevant institutions should have access to avoid competitive distortions, per Finextra.
The urgency is rooted in a collapsing timeline. According to CNA, the average time between a software flaw becoming public and a working exploit being built dropped from 771 days in 2018 to less than 4 hours today.
Project Glasswing and the Defense Play
Anthropic is trying to flip the Mythos problem into a solution through Project Glasswing, which gives partner organizations Mythos access to find and patch vulnerabilities before attackers do. Launch partners include AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks.
Mozilla has already used a Mythos preview to identify and patch 271 vulnerabilities in Firefox, according to Fortune. Anthropic is committing up to $100 million in usage credits plus $4 million in direct donations to open‑source security organizations through the Linux Foundation and Apache Software Foundation.
CrowdStrike CTO Elia Zaitsev put it bluntly in Anthropic's announcement: "The window between a vulnerability being discovered and being exploited by an adversary has collapsed — what once took months now happens in minutes with AI."
Post‑preview pricing for Mythos on the Claude API is set at $25 per million input tokens and $125 per million output tokens, available on Claude API, Amazon Bedrock, and Google Cloud Vertex AI.
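At those rates, per‑run costs are easy to estimate. A quick sketch (the token counts below are hypothetical, chosen only to illustrate the arithmetic):

```python
# Announced post-preview rates: $25 per 1M input tokens, $125 per 1M output tokens.
INPUT_RATE = 25.0 / 1_000_000
OUTPUT_RATE = 125.0 / 1_000_000

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single run at the announced rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A hypothetical long analysis run: 2M tokens of code in, 400k tokens of findings out.
print(f"${estimate_cost(2_000_000, 400_000):.2f}")  # prints "$100.00"
```

Output tokens dominate: at 5x the input rate, a verbose report can cost more than the entire codebase fed in.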
What Builders Should Do Right Now
Whether or not you ever touch Mythos directly, its existence changes the threat landscape for every developer shipping software. Here's what matters for builders:
Your code is now being attacked by machines that never sleep. As IBM's Dave McGinnis told IBM Think: "If the attackers aren't humans anymore, the defenders can't be humans anymore either. It's machine speed versus machine speed."
Open‑source dependencies are the softest target. Open‑source software underpins modern systems but is maintained by small teams with limited security resources. Jim Zemlin, CEO of the Linux Foundation, warned in Anthropic's announcement that open‑source code constitutes "the vast majority of code in modern systems, including the very systems AI agents use to write new software."
Basic security hygiene just became urgent. Richard Horne, head of the UK's NCSC, urged organizations to focus on fundamentals: updating software, upgrading legacy IT, and limiting access to vital systems. The old perimeter‑defense model — firewalls, intrusion detection, network monitoring — cannot keep pace with AI‑driven attacks that compress the attack lifecycle from weeks to minutes.
The capability will proliferate. As Anthropic warns, the rate of AI progress means these capabilities will soon proliferate beyond actors committed to deploying them safely. Other frontier AI labs are likely months away from comparable models. The question isn't whether your software will face AI‑driven vulnerability discovery — it's whether you'll find the bugs first or the attackers will.