Elon Musk cites Obama and Page talks in AI safety trial

AI safety – Elon Musk told a federal jury he warned Barack Obama about AI risks and clashed with Larry Page—part of a lawsuit seeking damages and changes to OpenAI’s structure.
Elon Musk tried to make his case in front of a federal jury by leaning on credibility-by-association: meetings and conversations with widely recognized power figures.
During testimony in Oakland, Musk framed his motivation for pushing AI safety as something he treated as urgent years ago, before today's AI boom made risk discussions mainstream. He said that in 2015 he met one-on-one with former President Barack Obama and spent "an hour" warning him about dangers from AI, which Musk claimed were not yet being treated seriously. Musk also testified that Larry Page, then CEO of Google, called him a "speciesist" after Musk advocated for being "pro-humanity" rather than prioritizing AI at any cost.
Why Musk’s “safety” story is central to the trial
The legal fight has a simple headline but complicated mechanics: Musk says the founders, including Altman, abandoned an original nonprofit mission aimed at serving the public, not private gain. He testified that he was deceived into donating $38 million starting in 2015, when the company was being co-founded with nonprofit intent. In parallel, the lawsuit argues OpenAI effectively became tied to major commercial interests, describing it as turning into something like a "subsidiary of Microsoft" after later restructuring.
The business stakes: governance, charity, and precedent
That matters because the structure of AI companies has become a core business issue, not only a technical one. OpenAI's restructuring moved it toward a more conventional for-profit setup, with Microsoft holding roughly a 27% stake in the for-profit entity. Musk is arguing that this change is not just evolution but a deviation from the founding bargain. From a market perspective, the case touches how the AI economy finances research, attracts capital, and distributes control, especially when early mission promises collide with scaled commercialization.
Musk vs. OpenAI: a fight over control and who agreed to what
OpenAI's response, including statements referenced in the proceedings, positions Musk's lawsuit as an attempt to derail a competitor. OpenAI has called it "a baseless and jealous bid to derail a competitor," and it maintains that Musk agreed to a shift toward a for-profit model in 2017. OpenAI also argues Musk demanded full control and walked away when he didn't get it.
Meanwhile, Musk's requests extend beyond damages. He wants Altman and Brockman stripped of their leadership roles, and he has asked for the return of what he describes as "ill-gotten gains" from OpenAI's for-profit operations. Over roughly three weeks, a nine-person jury will weigh claims including breach of contract and unjust enrichment. If the jury finds liability, the judge will determine the appropriate remedies.
The broader industry signal: AI risk talk is now also risk talk for investors
For employees, partners, and investors, the business impact is straightforward: governance disputes can change how quickly companies move, how openly they share details, and how much control stakeholders expect to retain. Even if the parties stay locked in a legal battle, the case feeds into a larger market question: whether AI institutions will be judged less on product speed and more on the credibility of their founding promises.
There's also a competitive undertone. Musk left OpenAI in 2018 and launched xAI in 2023, which he presents as another path for AI development. SpaceX acquired xAI in February and is preparing for an IPO that could value the combined company at more than $2 trillion. OpenAI, meanwhile, is reportedly considering an IPO at a $1 trillion valuation. Those parallel moves show how quickly the AI industry is translating debates over mission and control into hard financial outcomes.
What happens next for the market—and for AI governance
For now, the jury's work will focus on the specifics of what was promised, what was agreed, and whether any party crossed legal lines. But the context (AI's rapid commercialization, investor pressure, and the ongoing debate over safety and control) means the outcome is likely to echo well beyond this courtroom.