
Elon Musk Admitted in Court: xAI Trained Grok on OpenAI Models

Elon Musk on Thursday, April 30, 2026, admitted before a federal court in California that his AI chatbot Grok had been trained with the help of competing OpenAI models. The admission came in the midst of one of the most closely watched lawsuits in the history of the tech industry and immediately sparked a wave of questions about hypocrisy, ethics, and the legal boundaries of so-called AI model distillation.

Musk in Court: "Partly True"

During the third day of his testimony in the case Musk v. Altman, the founder of Tesla and SpaceX came under pressure from OpenAI's attorneys. Asked whether his startup xAI had used OpenAI models to train its own chatbot Grok, Musk initially deflected, saying that "all AI companies" do something similar. Pressed further, however, he admitted: "Partly." According to The Verge, he then added that it is "standard practice to use other AI to validate one's own AI."

The admission is particularly ironic in the context of the wider dispute. Musk, after all, is suing OpenAI for allegedly abandoning its original non-profit mission and prioritizing profit. He is now conceding that his rival firm used the technology of the very OpenAI he has denounced for its own development.

What Is Model Distillation and Why Is It So Controversial?

Model distillation is a technique in which a large and powerful language model—the so-called teacher—passes on its knowledge to a smaller model, the so-called student. The goal is to create a cheaper and faster variant that retains most of the capabilities of the original system. This process is in itself common and legal: for example, both Anthropic and OpenAI use it internally to shrink their own models for commercial deployment.
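
For readers who want a concrete picture, below is a minimal, hypothetical sketch of the teacher-student idea in PyTorch. The toy model sizes, the temperature, and the loss weighting are assumptions chosen purely for readability; none of the companies mentioned in this article have published the exact recipe they use.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB_SIZE = 1000     # toy vocabulary size (illustrative assumption)
TEMPERATURE = 2.0     # softens both probability distributions before comparing them
ALPHA = 0.5           # weight between imitation loss and ordinary training loss

# A larger frozen "teacher" and a much smaller trainable "student" (toy sizes).
teacher = nn.Sequential(nn.Embedding(VOCAB_SIZE, 512), nn.Flatten(1), nn.Linear(512, VOCAB_SIZE))
student = nn.Sequential(nn.Embedding(VOCAB_SIZE, 64), nn.Flatten(1), nn.Linear(64, VOCAB_SIZE))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def distillation_step(tokens, labels):
    """One training step: the student imitates the teacher's soft predictions."""
    with torch.no_grad():                 # the teacher is never updated
        teacher_logits = teacher(tokens)
    student_logits = student(tokens)

    # Imitation loss: KL divergence between the softened output distributions.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / TEMPERATURE, dim=-1),
        F.softmax(teacher_logits / TEMPERATURE, dim=-1),
        reduction="batchmean",
    ) * (TEMPERATURE ** 2)

    # Ordinary cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    loss = ALPHA * soft_loss + (1 - ALPHA) * hard_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy batch: eight examples, each a single token id, with arbitrary labels.
tokens = torch.randint(0, VOCAB_SIZE, (8, 1))
labels = torch.randint(0, VOCAB_SIZE, (8,))
print(distillation_step(tokens, labels))
```

In the unauthorized variant discussed below, the teacher's outputs would come not from a model the lab owns but from a competitor's paid API, which is precisely what providers' terms of service typically forbid.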

The problem arises when smaller labs distill foreign models without permission. In such a case, it is a way to gain top-tier capabilities for a fraction of the time and costs invested in the original development. Anthropic warned in its February 2026 blog post that illegal distillation represents "a method of intellectual property theft that violates terms of service."

Anthropic recently revealed extensive campaigns by Chinese laboratories DeepSeek, Moonshot AI, and MiniMax, which allegedly illegally extracted capabilities from its Claude model. According to published data, it involved millions of queries through tens of thousands of fraudulent accounts. Google has introduced similar security measures against what it designates as "distillation attacks."

Hypocrisy or Standard Practice?

Musk's admission raises a fundamental question: can someone who publicly criticizes a competitor for commercialization and an alleged betrayal of principles at the same time use that competitor's technology for their own purposes, legally and morally? In response to the dispute, OpenAI has called Musk's lawsuit "a baseless and envious attempt to destroy a competitor," one meant to advance his own business interests, including xAI and X.

The lawsuit began in 2024, when Musk accused OpenAI, Sam Altman, and Greg Brockman of breach of fiduciary duty and fraud. He is demanding the removal of the company's leadership and damages of up to $150 billion, and he is also demanding that OpenAI cease operating as a public benefit corporation. The testimony about model distillation, however, may weaken his position: if he himself considers using a competitor's outputs to be standard practice, he can hardly argue that OpenAI acted immorally when it accepted investment from Microsoft and became a commercial company.

What Does This Mean for Czech Users and Companies?

While the courtroom drama is unfolding in the USA, the impacts are global. Czech companies and ordinary users face the reality that the market for large language models is extremely concentrated. OpenAI, Google, Anthropic, and xAI control top-tier systems on which European applications and services are increasingly dependent. If it is confirmed that even Musk's xAI—presented as an alternative "for free thinking"—is built on the foundations of competitor OpenAI, it undermines trust in marketing slogans about independence.

From a legal perspective, distillation remains in a gray zone. The EU AI Act does emphasize transparency and copyright protection in model training, but precise rules for model distillation between competitors are not yet clearly defined. For Czech developers and startups, this means risk: if they want to use outputs from someone else's model to train their own solutions, they must read the providers' API terms of service very carefully to avoid breaching them.

As for availability, Grok is currently accessible primarily through the X platform (formerly Twitter) and selected premium subscriptions. There is no full Czech localization, and official support for Czech users is limited. The basic version is included in the X Premium subscription, while advanced features require more expensive plans.

Distillation as a Weapon in the AI Arms Race

The controversy around model distillation is not limited to the dispute between two billionaires. It is a broader geopolitical problem. American laboratories have invested billions of dollars in developing models with advanced safety guardrails. If competitors—whether in China or elsewhere—gain these capabilities through distillation, they may deploy them without the original restrictions. Anthropic warns that distilled models may lack "essential safety measures," creating risks in the areas of cybersecurity, biological weapons, or disinformation campaigns.

For the average user, it is crucial to realize that growing competition in AI is not always the result of breakthrough invention; sometimes it is just the sophisticated recycling of one model's outputs into another. That does not mean the resulting product cannot be useful, but it is a reason to be more cautious when evaluating marketing claims about "independence" or a "revolutionary" architecture.

What Happens Next?

The court case between Musk and Altman continues, and its outcome may shape the future structure of OpenAI and of the entire AI industry. Whether Musk's admission about distillation weakens his own case, or instead supports the argument that there are no clean hands in the AI business, the coming days of the trial will show. One thing is certain: transparency around training data and methodology is becoming one of the most important topics not only for lawyers but also for the public.

Is model distillation legal?

Model distillation is in itself a legal technique if it is applied to one's own data or with the explicit consent of the model's owner. The problem arises when competitors extract outputs from someone else's model in violation of its terms of service; that may then constitute an infringement of intellectual property rights.

Can a Czech startup legally learn from ChatGPT?

Using outputs from ChatGPT or other commercial models to train one's own product without permission is risky. OpenAI, Anthropic, and Google all have clear restrictions in their terms. For safe use, it is advisable to opt for open models with permissive licenses, such as Meta's Llama or Alibaba's Qwen.
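
As an illustration, here is a minimal sketch of querying an openly licensed model locally through the Hugging Face transformers library instead of a proprietary API. The checkpoint name is only an example, and the model's own license terms still apply.

```python
# Hypothetical example: generating text with an openly licensed model instead
# of a proprietary API. The checkpoint name below is an assumption; swap in
# any permissively licensed model and check its license before training on outputs.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # example checkpoint, not a recommendation
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Explain model distillation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```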

What is the difference between Grok and ChatGPT?

Grok is integrated primarily into the social network X and emphasizes "less censorship" and access to current information from that platform. ChatGPT from OpenAI offers a broader ecosystem of tools, better Czech language support, and more advanced integrations with office applications. The two models are, however, increasingly intertwined, as Musk himself has now admitted.
