NSA Uses Anthropic's Mythos AI Despite Pentagon Blacklist

AI Cybersecurity Clash

The National Security Agency is using Anthropic's powerful Mythos AI model for cybersecurity tasks despite the Pentagon labeling the company a supply chain risk, exposing a fundamental contradiction in US government AI policy that signals how critical defensive AI capabilities have become.

The NSA's Quiet Defiance

The United States National Security Agency is using Anthropic's Mythos Preview AI model despite the Department of Defense officially labeling the company a supply chain risk, Axios reported on Sunday. Two sources confirmed the NSA's use of the model, while one said Mythos was being used more widely within the department — the same department that insists Anthropic's technology threatens US national security.

The contradiction is stark: the Pentagon moved in February to cut off all federal agencies from Anthropic's tools, and Defense Secretary Pete Hegseth formally designated the company a supply chain risk in March. Yet the NSA, which falls under the Pentagon's umbrella, is now broadening its use of Mythos. Anthropic, the NSA, and the Department of Defense did not respond to Reuters' requests for comment.

It is unclear exactly how the NSA is deploying Mythos, but other organizations with access are using it predominantly to scan their own networks for exploitable security vulnerabilities. Anthropic restricted access to roughly 40 organizations, contending that its offensive cyber capabilities were too dangerous for wider release. The company publicly named only 12 of those participants.

What Makes Mythos Different

Mythos Preview represents a genuine leap in AI cybersecurity capabilities. According to a 245‑page technical document released by Anthropic, the model found critical faults in every widely used operating system and web browser — and 99% of those vulnerabilities have not yet been patched. The UK's AI Security Institute, which was granted early access, found the model succeeded in expert‑level hacking tasks 73% of the time. Prior to April 2025, no AI model could complete those tasks at all.

The model excels at identifying "exploit chains" — sequences of vulnerabilities that can be combined to deeply compromise a target. As Wired reported, security researcher Niels Provos noted that Mythos "changes the required skill level to find these vulnerabilities and exploit them." Alex Zenla, CTO of cloud security firm Edera, who is typically skeptical of AI claims, told Wired: "I do fundamentally feel like this is a real threat."

Instead of a public release, Anthropic launched Project Glasswing, a cybersecurity initiative providing limited access to organizations including Microsoft, Google, Apple, Amazon Web Services, JPMorgan Chase, and Nvidia. The model's gains extend beyond security: it scored 31 percentage points higher than its predecessor, Opus 4.6, on the USAMO 2026 Mathematical Olympiad, a grueling two‑day proof‑based competition.

How the Government Got Here

The Pentagon‑Anthropic rift traces back to February, when CEO Dario Amodei refused to allow the military to deploy Anthropic's models for autonomous lethal attacks or mass domestic surveillance of Americans. Politico reported that Defense Secretary Hegseth gave Amodei a deadline to relent; when Amodei held firm, President Trump directed all federal agencies to "immediately cease" using Anthropic's technology. Trump called the company's leadership "left wing nut jobs" on social media.

But the ban has been quietly circumvented across the government. The Commerce Department's Center for AI Standards and Innovation is actively testing Mythos, according to four people familiar with the matter. Staff from at least two large federal agencies have reached out to Anthropic about integrating the model into their cyber defense efforts. Staff on at least three congressional committees have requested briefings. One congressional aide told Politico the Pentagon "shot itself in the foot by giving the middle finger to the most capable AI provider."

On Friday, Amodei met with White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent in what both sides described as a productive discussion about collaboration and AI safety. When asked about the meeting, President Trump responded "Who?" and said he had "no idea."

Global Alarm and Acceleration

The concern extends well beyond US borders. The UK government issued an open letter to business leaders warning that AI cyber capabilities are "accelerating even faster than had been previously envisaged." The UK's AI Security Institute now assesses that frontier model capabilities are doubling every four months, compared to every eight months previously. German banks are consulting authorities about Mythos risks, and the Bank of England is conducting its own tests.

The UK's National Cyber Security Centre, part of GCHQ, has access to Mythos through the AI Security Institute and has already briefed business leaders on defensive preparations. The letter from UK cabinet ministers Liz Kendall and Dan Jarvis urged every business to treat AI‑driven cyber threats as an immediate priority, not a future concern.

Why Builders Should Care

The NSA's quiet adoption of Mythos despite an official ban sends a clear signal to anyone building with AI: defensive cybersecurity tools powered by frontier models are no longer optional — they are essential infrastructure. If the US government is willing to contradict its own policy to access these capabilities, the competitive advantage for builders who integrate AI‑powered security testing into their workflows is real and immediate.

For developers, this means three things. First, AI‑driven vulnerability scanning is becoming table stakes — tools that can find exploit chains in your code before attackers do will soon be expected, not novel. Second, the supply chain risk designation failed to stop adoption because the capability gap was too large to ignore; the same dynamic will play out in the private sector. Third, Anthropic's Project Glasswing model — restricted access with defensive‑only use — could become a template for how powerful AI tools are distributed going forward, which affects how builders plan their integrations and dependencies.
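To make the first point concrete, here is a minimal sketch of what wiring an AI review step into a repository scan might look like. Everything here is an illustrative assumption, not a description of any real product: `review_with_model` is a hypothetical stand‑in for a call to a frontier‑model API, and it is stubbed with crude keyword heuristics so the sketch runs without network access. A real integration would send each file to a provider's API with a security‑review prompt and would need to handle rate limits and false positives.

```python
from pathlib import Path

def review_with_model(source: str) -> list[str]:
    """Hypothetical stand-in for a frontier-model security review.

    A real integration would send `source` to a model API with a
    prompt asking for exploitable vulnerabilities; this stub uses
    crude keyword checks so the example is self-contained.
    """
    findings = []
    if "eval(" in source:
        findings.append("possible code injection via eval()")
    if "verify=False" in source:
        findings.append("TLS certificate verification disabled")
    return findings

def scan_repo(root: str) -> dict[str, list[str]]:
    """Scan every Python file under `root`, collecting findings per file."""
    report = {}
    for path in Path(root).rglob("*.py"):
        findings = review_with_model(path.read_text(errors="ignore"))
        if findings:
            report[str(path)] = findings
    return report
```

Hooked into a pre-commit check or CI job, a scanner shaped like this fails the build when `scan_repo` returns a non-empty report, which is the "table stakes" workflow the paragraph above describes.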

The frontier is moving fast. As Wired noted, Anthropic's own red team lead Logan Graham said: "This is an issue that involves all of the model developers. Our goal here is just to kick things off." The capabilities Mythos demonstrates today will be widely available in other models within months. The builders who start preparing now will be the ones who are ready when that happens.
