Elon Musk Seemingly Admits xAI Has Used OpenAI’s Models to Train Its Own

While answering questions under oath, Musk argued it's standard practice for AI labs to use their competitors' models.

In a recent federal court hearing, Elon Musk hinted that his artificial intelligence (AI) lab, xAI, may have used OpenAI's models in its training. Musk was on the witness stand, responding to cross-examination from an OpenAI attorney as part of his ongoing legal battle with the maker of ChatGPT.

The exchange, as reported by WIRED, went as follows:

OpenAI Lawyer William Savitt: “Do you know what distillation is?”

Musk: “It means to use one AI model to train another AI model.”

Savitt: “Has xAI done that with OpenAI?”

Musk: “Generally all the AI companies [do that].”

Savitt: “So that’s a yes.”

Musk: “Partly.”

Distillation is a process in which a smaller AI model is trained to replicate the behavior of a larger, more sophisticated model. The resulting model is cheaper and faster to run while retaining much of the larger model's performance.
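The idea can be illustrated with a minimal sketch. Here a hypothetical "teacher" model produces temperature-softened class probabilities, and a "student" model is trained by gradient descent to match them rather than the original hard labels. Everything below (the data, both models, and the hyperparameters) is invented for illustration; real distillation pipelines use a large pretrained teacher and a smaller student network.

```python
import numpy as np

def softmax(z, T=1.0):
    """Softmax with temperature T; higher T gives softer probabilities."""
    z = z / T - (z / T).max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# Hypothetical "teacher": a fixed linear scorer (4 features, 3 classes).
X = rng.normal(size=(200, 4))
W_teacher = rng.normal(size=(4, 3))
teacher_probs = softmax(X @ W_teacher, T=2.0)  # softened targets

# "Student" trained to match the teacher's soft labels by minimizing
# cross-entropy against teacher_probs (plain gradient descent).
W_student = np.zeros((4, 3))
lr = 0.5
for _ in range(500):
    p = softmax(X @ W_student, T=2.0)
    grad = X.T @ (p - teacher_probs) / len(X)
    W_student -= lr * grad

# After training, the student's predictions largely agree with the teacher's.
agreement = (softmax(X @ W_student).argmax(1) == teacher_probs.argmax(1)).mean()
print(f"teacher/student agreement: {agreement:.0%}")
```

The soft targets carry more information than hard labels (how confident the teacher is, and which wrong answers it considers plausible), which is why distilling from a capable model transfers behavior so efficiently.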

Savitt then probed whether OpenAI’s technology had been employed in any manner to develop xAI.

Savitt: “Has OpenAI technology been used in any way to develop xAI?”

Musk: “It is standard practice to use other AIs to validate your AI.”

Neither OpenAI nor xAI responded immediately to WIRED’s request for comment.

OpenAI has been striving to prevent its rivals from distilling its AI models, specifically targeting the Chinese AI lab, DeepSeek. In a memo to a House committee in February 2026, OpenAI stated that it had taken measures to safeguard its models against distillation. The organization emphasized its commitment to ensuring a level playing field where “China can’t advance autocratic AI by appropriating and repackaging American innovation.”

The Trump administration also took action to prevent Chinese firms from distilling American AI models. Michael Kratsios, director of the White House's office of science and technology policy, announced in an April 2026 memo that the office would share information about foreign distillation with US AI companies. In a post on X, Kratsios affirmed the US government's commitment to the free and fair development of AI technologies across a competitive ecosystem.

American AI labs have utilized each other’s AI models in various ways, such as testing progress and assessing safety. However, in the current competitive landscape, some AI companies have completely severed ties with rival labs. In August 2025, Anthropic blocked OpenAI’s access to its Claude coding models after alleging a violation of its terms of service. More recently, Anthropic also cut off xAI from using its AI models for coding.

During the multi-day cross-examination, Savitt questioned Musk about his attempts to gain control of OpenAI and his subsequent effort to outdo the ChatGPT maker. On Wednesday, Savitt presented emails and texts from 2017 to support a line of questioning about whether Musk pressured OpenAI by withholding funding and poaching key researchers.

The legal battle, and the testimony it has surfaced, underscores the intensity of competition in the AI industry. As labs race to outdo one another, questions of ethical practice and fair competition loom larger, and the outcome of this case could set a precedent for future disputes.
