Business

Judge tells Elon Musk to cool robot apocalypse talk

A California judge stopped Elon Musk from using extinction-style “robot apocalypse” language during his civil trial over OpenAI donations.

Elon Musk’s “robot apocalypse” phrasing ran into a courtroom wall on Thursday, as a California judge sharply limited the Tesla CEO’s doomsday speculation during his civil trial.

The case, overseen by Judge Yvonne Gonzalez Rogers, centers on Musk’s dispute with OpenAI and its leadership. During testimony that touched on Tesla’s robot-related business and Musk’s earlier comments about building a “robot army,” the judge told him, “We’re not going to talk about extinction in this case.”

This matters because courts increasingly require that highly sensitive claims stay tethered to the legal issues at hand, rather than turning proceedings into a debate about worst-case scenarios.

Musk, under questioning from his lawyer Steven Molo, pushed back on the idea that Tesla’s robotics work is intended as weaponization. He said Tesla does not make weapons and cautioned against a “Terminator” framing, referring to the movie’s portrayal of runaway machine violence. When asked to clarify, Musk returned to the same theme, suggesting the most dangerous outcome would involve AI wiping out humanity.

Meanwhile, the trial itself has moved into additional days of testimony tied to Musk’s broader allegations about OpenAI’s founding mission and how it evolved over time. Musk’s lawsuit seeks to challenge what he says were departures from an approach focused on public benefit rather than private gain.

For business watchers, the exchange is a reminder that AI’s economic impact is being litigated alongside its cultural and political narrative. When rhetoric escalates, it can distract from the specific claims companies and executives are actually required to address.

Musk has long positioned himself as a prominent critic on AI risk, including arguing that safety must be prioritized. Since taking the witness stand, he has repeatedly used vivid analogies about what could happen if AI advances beyond human control, describing his worry in terms of computers becoming significantly smarter than people.

The judge’s intervention also signals how this trial is likely to proceed: even as AI safety remains a widely discussed topic in markets and technology circles, courtroom language will be expected to stay grounded in the facts relevant to the dispute.

In the end, that boundary-setting could shape not only what Musk says, but also how jurors interpret the testimony’s credibility and relevance to the underlying business conflict—especially in a case where trust, intent, and accountability are central themes.