Pentagon Signs 8 AI Companies for Classified Contracts — Anthropic Remains Blacklisted

The Pentagon signed AI agreements with Google, Nvidia, OpenAI, Microsoft, AWS, Oracle, SpaceX, and Reflection AI for classified networks. Anthropic remains blacklisted as a supply‑chain risk after refusing to accept a "lawful use" standard — the first American company to receive that designation.

Eight Companies Get the Keys to the Pentagon's Most Sensitive Networks

The Pentagon has signed agreements with eight AI companies to deploy their technology across its most sensitive military networks — secret and top‑secret environments. The list includes Google, Nvidia, Reflection AI, SpaceX, OpenAI, Microsoft, Oracle, and Amazon Web Services, according to Orbital Today. All eight have agreed that their tools can be used for any purpose the military deems lawful.

That final clause — "any purpose the military deems lawful" — is precisely why one name is conspicuously absent from the list. Anthropic, the creator of Claude, is not among the eight. And the reason traces back to a principled stand that has evolved into the most significant AI ethics dispute between a private company and the U.S. government.

The Pentagon's primary AI platform, GenAI.mil, has already been used by over 1.3 million Department of Defense personnel despite running for just five months — a figure that underscores how deeply AI has embedded itself in daily military operations.

Why Anthropic Refused — and Got Blacklisted

Anthropic rejected the Pentagon's "lawful use" standard earlier this year, citing concerns about domestic mass surveillance and fully autonomous lethal weapons. The company's ethical safeguards — which have been core to its brand since inception — prohibit exactly the kind of unrestricted military deployment that the Pentagon's standard requires.

The Pentagon's response was swift and notable. In March, it designated Anthropic a supply‑chain risk — the first time an American company has received that label, per Orbital Today. As a result, its products were barred from Pentagon systems and contractors. Anthropic sued shortly afterward. Its tools, however, remain embedded in some classified networks, and the military has given itself six months to remove them.

Complicating matters further, many Pentagon staff have told Reuters they view Anthropic's products as superior to the alternatives now available and are reluctant to comply with the removal order. The dispute has created a situation where the military is actively removing the tools its own personnel consider the most capable available.

The Mythos Complication: A Cybersecurity Model Too Dangerous to Share

The dispute has been further complicated by Anthropic's latest model, Mythos, which focuses on advanced cybersecurity. Mythos has demonstrated the ability to find vulnerabilities in hardened software — including a 27‑year‑old bug in router software used worldwide, as reported by The Sydney Morning Herald. The model alarmed U.S. officials and financial institutions with its capabilities.

Pentagon Chief Technology Officer Emil Michael told CNBC on May 1 that Anthropic remained a supply‑chain risk and called Mythos a "separate national security moment" — a phrase that suggests the Pentagon views Mythos not just as a competitive concern but as a potential threat. Meanwhile, the White House has begun leading reconciliation efforts between Anthropic and the Pentagon, CryptoBriefing reports, exploring reintegration of Mythos for cybersecurity purposes despite continued Pentagon opposition.

Many enterprises have gained access to a Mythos preview to help secure their systems against future cyberattacks. Whether the Pentagon is part of that program remains unclear.

A New Generation of Defense AI Suppliers Emerges

Since the Anthropic fallout, the Pentagon has significantly accelerated how it brings in new AI providers. Integration onto classified tiers now takes under three months, down from 18 months or longer previously. Among the eight partners is Reflection AI, a two‑year‑old startup that has yet to release a publicly available model. The firm raised $2 billion in October and reportedly counts Nvidia among its backers. It is also supported by 1789 Capital, a venture fund where Donald Trump Jr. is a partner. Reflection is reportedly seeking a $25 billion valuation.

The inclusion of Reflection — a company with no public product — alongside established giants like Google and Microsoft signals a shift in how the Pentagon evaluates AI suppliers. Orbital Today notes that speed of integration is now prioritized over track record, creating opportunities for startups willing to accept the Pentagon's terms.

For builders in the defense sector, the message is clear: the barrier to entry for classified AI contracts is dropping fast — but only for those who accept unrestricted military use of their technology.

The Split Defining AI's Role in National Security

The Pentagon-Anthropic standoff crystallizes a fault line running through the AI industry. On one side: companies that accept any lawful military use as the price of defense contracts. On the other: companies that impose ethical restrictions even at the cost of losing government business. With eight major AI providers choosing the first path, the industry is tilting decisively toward unrestricted deployment.

The White House reconciliation effort suggests the Trump administration sees value in bringing Anthropic back into the fold — particularly given Mythos's cybersecurity capabilities. But the "lawful use" standard remains the sticking point, and Anthropic has shown no indication of backing down from its position on autonomous weapons and mass surveillance, according to Orbital Today.

For builders, the implications extend beyond defense. If the Pentagon's "lawful use" standard becomes the norm for government AI contracts — as appears to be happening — companies that enforce ethical safeguards may find themselves excluded from an expanding market. The question is whether Anthropic's principled stand creates a competitive moat with enterprise customers who value those safeguards, or simply leaves billions in government contracts on the table for competitors to claim.
