600 Google Employees Demand Pichai Reject Classified Pentagon AI Deal
More than 600 Google employees, including DeepMind staff and VPs, signed a letter demanding CEO Sundar Pichai refuse classified Pentagon AI work, directly invoking the Anthropic precedent: the DoD dropped Anthropic's Claude after the company sought restrictions on military use.

The Employee Letter

More than 600 Google employees — including members of the DeepMind lab, principals, directors, and vice presidents — signed a letter to CEO Sundar Pichai on Monday demanding Google refuse to allow its AI tools to be used by the Pentagon for classified work. The letter, reported by The Washington Post, warns that making the wrong call "would cause irreparable damage to Google's reputation, business and role in the world."

As Gizmodo reported, the employees wrote: "As people working on AI, we know that these systems can centralize power and that they do make mistakes. We feel that our proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses." They argue the only way to guarantee the technology isn't misused is to "reject any classified workloads," because under classified programs misuse could occur "without our knowledge or the power to stop it."

The Anthropic Precedent That Hangs Over Everything

The letter doesn't exist in a vacuum. Two months ago, the Defense Department dropped Anthropic specifically because the company requested restrictions on how the Pentagon could use Claude. According to Gizmodo, the Pentagon demanded that Anthropic allow Claude to be used for "all lawful purposes," including domestic surveillance and fully autonomous weapons. When Anthropic refused, the government labeled the company a "supply chain risk."

This is the direct precedent Google employees are invoking: if Anthropic lost its Defense Department contract for drawing the very line Google's workforce now wants drawn, what happens if Google instead agrees to the terms Anthropic refused? As Business Insider notes, the employees are essentially demanding that Google neither fill the gap Anthropic left nor profit from a competitor taking the ethical hit.

Google's Contradictory Position

The situation exposes a stark contradiction. Both Google and OpenAI filed amicus briefs supporting Anthropic in its lawsuit against the government over the Pentagon dispute, as SiliconANGLE reports. Yet both companies are now pursuing lucrative Pentagon contracts while letting Anthropic bear the brunt of actually fighting the government. Google is reportedly in ongoing negotiations with the DoD over allowing its Gemini models to be used in classified settings.

The Pentagon requires AI partners to make their models available for "all lawful purposes," per SiliconANGLE, the same clause that triggered the Anthropic dispute. Google's employees are essentially calling out the hypocrisy: you can't support Anthropic's red lines in court while agreeing to erase those same red lines for a paycheck.

OpenAI's Revised Pentagon Contract

OpenAI, which also faced internal pressure from its own employees, revised its Pentagon contract with new language to deter use for mass surveillance. Per SiliconANGLE, the revised clause states the AI will not be used for "deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information." This is a compromise: not the broad restrictions Anthropic held out for, but a targeted carve-out that addresses the most politically sensitive use cases.

Whether Google follows OpenAI's compromise path or Anthropic's hard‑line path remains to be seen. Business Insider reports that the employee letter pushes for the latter, but Google's leadership may calculate that specific restrictions are more realistic than a blanket refusal — especially when competitors are already filling the void.

Google's Evolving AI Principles

The current dispute is a direct echo of Google's 2018 internal revolt over Project Maven, when employee protests led the company to pledge not to create technologies that "cause or are likely to cause harm" or that are "principally designed as weapons," per SiliconANGLE. But SiliconANGLE also notes that Google quietly changed its stance last year: that language has since been softened or removed, and the 2026 version of Google's AI principles apparently leaves more room for military applications than the 2018 version did.

This erosion of ethical commitments is exactly what the signatories are pushing back against. The letter explicitly warns about "lethal autonomous weapons and mass surveillance," per Gizmodo, the same categories Google pledged to avoid eight years ago.

What This Means for Builders

If you're building on Google's AI platform (Gemini, Vertex AI, or any of Google's cloud AI services), the outcome of this internal dispute matters. A Google that accepts the all-lawful-purposes clause described by SiliconANGLE opens the door to the models and infrastructure you build on becoming part of military workflows, even indirectly. A Google that refuses classified work keeps a clearer line between commercial and military AI, but risks losing DoD contracts that could fund further model development.
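One way to preserve optionality while this plays out is to keep the model vendor behind a thin interface, so a policy shift at one provider means a config change rather than a rewrite. Below is a minimal sketch in Python; the class names, the MODEL_VENDOR switch, and the model IDs are illustrative assumptions, and the SDK calls assume the current google-genai and anthropic Python packages rather than anything in the reporting above.

```python
# Minimal sketch: keep the model vendor behind one interface so the app
# can switch providers with a config change, not a rewrite. Class names,
# the MODEL_VENDOR switch, and model IDs are illustrative assumptions.
import os
from typing import Protocol


class ModelProvider(Protocol):
    def generate(self, prompt: str) -> str: ...


class GeminiProvider:
    def generate(self, prompt: str) -> str:
        from google import genai  # pip install google-genai

        client = genai.Client()  # reads GEMINI_API_KEY from the environment
        resp = client.models.generate_content(
            model="gemini-2.0-flash",  # placeholder model ID
            contents=prompt,
        )
        return resp.text


class ClaudeProvider:
    def generate(self, prompt: str) -> str:
        import anthropic  # pip install anthropic

        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
        msg = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder model ID
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text


def get_provider() -> ModelProvider:
    # One environment variable decides the vendor at deploy time.
    if os.getenv("MODEL_VENDOR") == "anthropic":
        return ClaudeProvider()
    return GeminiProvider()
```

The point isn't the specific SDKs; it's that a one-file seam like this is cheap insurance when a provider's acceptable-use terms are a moving target.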

The broader lesson from the Anthropic and Google situations, documented by The Washington Post and detailed by SiliconANGLE, is that the all-lawful-purposes clause is the new fault line in AI policy. Any builder choosing an AI provider for a product that might touch government contracts needs to understand where that provider stands. Anthropic drew a line and got punished. OpenAI drew a narrower line and found a compromise. Google's employees want a line. What Google's leadership decides will shape the competitive landscape for every AI tool and platform in 2026.
