Musk Admits Under Oath That xAI Distilled OpenAI Models to Train Grok

Testifying in federal court, Elon Musk acknowledged that xAI used OpenAI's models to train Grok via distillation — the same practice OpenAI has been fighting Chinese AI labs over. The admission raises questions about who gets to distill whose models and whether the rules apply equally.

The Exchange That Stopped the Courtroom

On the stand in federal court on Thursday, Elon Musk made an admission that could reshape how the AI industry thinks about model training. Under cross‑examination by OpenAI lawyer William Savitt, Musk acknowledged that xAI used OpenAI's models to train its own — a practice known as distillation.

As WIRED reported, the exchange went like this:

Savitt: Do you know what distillation is?
Musk: It means to use one AI model to train another AI model.
Savitt: Has xAI done that with OpenAI?
Musk: Generally all the AI companies [do that].
Savitt: So that's a yes.
Musk: Partly.

When pressed further, Musk said: "It is standard practice to use other AIs to validate your AI," per WIRED.

What Is Model Distillation — and Why Does It Matter?

Distillation is a technique where a smaller AI model (the "student") is trained to mimic the behavior of a larger, more capable model (the "teacher"), WIRED explained. The result is a model that is cheaper and faster to run while preserving much of the teacher's performance.
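For readers who want the mechanics, here is a minimal sketch of classic soft-label distillation in the style of Hinton et al.'s 2015 formulation, written in PyTorch. Everything in it (module sizes, the temperature value, the toy data) is illustrative rather than a description of any lab's actual pipeline: the student is simply penalized, via KL divergence, for diverging from the teacher's softened output distribution.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both output distributions with a temperature > 1 so the
    # student learns the teacher's relative rankings, not just its top pick.
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # KL divergence pushes the student's distribution toward the teacher's;
    # the T^2 factor keeps gradient scale comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

# Toy stand-ins: in practice the teacher is a large frozen model and the
# student a much smaller one. Shapes and names here are hypothetical.
teacher = torch.nn.Linear(16, 8)
student = torch.nn.Linear(16, 8)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(32, 16)                # a batch of inputs
with torch.no_grad():
    teacher_logits = teacher(x)        # teacher supplies targets, no gradient
optimizer.zero_grad()
loss = distillation_loss(student(x), teacher_logits)
loss.backward()
optimizer.step()
```

Note that this textbook form assumes access to the teacher's logits. When one company distills another's model through a public API, as alleged in the cases above, it typically sees only sampled text outputs, so training reduces to imitating those outputs rather than matching full probability distributions.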

The technique is widely used within companies — training a smaller model on outputs from their own larger model is standard practice. The controversy arises when one company distills another company's model. That crosses into murkier territory: it may violate terms of service, and companies like OpenAI and Anthropic have called it a form of intellectual property theft. But the line between what is actually illegal and what merely violates a provider's terms or policies often falls in a gray area, The Verge noted.

The Double Standard: OpenAI Fights Chinese Distillation, Ignores American

OpenAI has been aggressively fighting distillation — but selectively. In a February 2026 memo to a House committee, OpenAI wrote that it has "taken steps to protect and harden our models against distillation," focusing on ensuring a playing field where "China can't advance autocratic AI by appropriating and repackaging American innovation."

The Trump administration has also taken steps to prevent Chinese companies from distilling American AI models, WIRED reported. Michael Kratsios, director of the White House Office of Science and Technology Policy, said in an April 2026 memo that the government would share information with US AI companies about foreign distillation.

Yet now that an American company — one led by the man who co‑founded OpenAI and is now its chief courtroom antagonist — has admitted to the same practice, there has been no public word from OpenAI about enforcement. The question builders are asking: is distillation wrong only when Chinese labs do it?

Anthropic Already Cut Off Both OpenAI and xAI

Anthropic has been more aggressive than OpenAI about protecting its models. In August 2025, Anthropic blocked OpenAI's access to its Claude coding models after alleging that its terms of service had been violated. More recently, Anthropic cut off xAI from using its AI models for coding as well.

The Verge reported that Anthropic has specifically named DeepSeek, Moonshot, and MiniMax as companies it accuses of distilling its models. Google, meanwhile, has taken steps to prevent what it calls "distillation attacks," which it describes as "a method of intellectual property theft that violates Google's terms of service."

What This Means for the Musk v. Altman Trial

The distillation admission is the latest twist in Musk's multiday testimony in his lawsuit against OpenAI and Sam Altman. Musk alleges that OpenAI abandoned its nonprofit mission to build AI for the public good, and is seeking damages and structural changes. The admission that xAI used OpenAI's own models could cut both ways: it shows that OpenAI's models are valuable enough that even its founder's rival company wanted to learn from them, but it also undermines Musk's claim that OpenAI strayed from its founding principles if he himself treats OpenAI's outputs as a resource to be mined.

The Verge reported that OpenAI's lawyer Savitt also questioned Musk about his attempts to assume control of OpenAI and his quest to beat the ChatGPT‑maker, presenting emails and texts from 2017 about withholding funding and hiring away key researchers.

The Gray Zone for Builders

For developers building with AI models, the distillation debate has real implications. Most model terms of service now explicitly prohibit using outputs to train competing models. But enforcement is inconsistent: Anthropic blocks access when it detects violations, while OpenAI appears to focus its enforcement on Chinese competitors. The practical reality is that distillation is widespread — Musk's testimony that "generally all the AI companies" do it, per WIRED, is probably accurate.

Builders should be aware that distilling a competitor's model almost certainly violates that model's terms of service, regardless of whether enforcement happens. The legal landscape is still forming, but the combination of Musk's admission, OpenAI's memos to Congress, and Anthropic's proactive blocking suggests that distillation will be a key battleground in AI competition for years to come.
