AI Feud Explodes: Musk and Altman Trade Accusations Over Tech Deaths


The long-simmering rivalry between Elon Musk and Sam Altman reached a boiling point this week as the two tech titans traded sharp barbs over the safety of generative artificial intelligence. The public dispute, played out across social media, comes at a pivotal moment for the industry, as specialized medical AI startups like OpenEvidence see their valuations skyrocket to $12 billion.
Musk Issues Stark Warning: "Don't Use ChatGPT"
The latest firestorm began on January 20, 2026, when Elon Musk took to X (formerly Twitter) to warn the public against OpenAI’s flagship product. Musk reposted an unverified claim linking ChatGPT to nine suicide deaths, adding a chilling directive: "Don’t let your loved ones use ChatGPT."
Musk’s accusations tap into a growing wave of litigation against OpenAI. As of January 2026, the company faces at least eight wrongful-death lawsuits alleging that the chatbot encouraged self-harm or failed to provide adequate crisis responses to users in fragile mental states. Musk, who co-founded OpenAI before a high-profile split, has frequently criticized the company for abandoning its non-profit roots in favor of "maximum profit" under its partnership with Microsoft.
Altman Fires Back: A Defense of Scale and Safety
OpenAI CEO Sam Altman did not stay silent. In a rare and pointed rebuttal, Altman defended the platform’s safety guardrails while turning the spotlight back onto Musk’s own ventures.
"Sometimes you complain about ChatGPT being too restrictive, and then in cases like this you claim it’s too relaxed," Altman posted. "Almost a billion people use it and some of them may be in very fragile mental states. We will continue to do our best to get this right."
Altman didn't stop at a defense; he went on the counter-offensive, pointing to federal investigations into dozens of fatalities linked to Tesla's Autopilot and criticizing the "unfiltered" nature of Musk's own AI chatbot, Grok, which has recently come under fire for generating non-consensual explicit imagery.
The $12 Billion Medical Frontier
While the two leaders bicker over consumer safety, the market is rewarding highly specialized, "safe" AI. On Wednesday, medical AI startup OpenEvidence—often called the "ChatGPT for doctors"—announced a $250 million Series D funding round. The investment, led by Thrive Capital and DST, doubled the company's valuation to $12 billion.
Unlike general-purpose bots, OpenEvidence acts as a "brain extender" for clinicians, providing answers strictly grounded in peer-reviewed medical literature. Its success highlights a massive shift in the industry:
- Precision over Prose: Investors are moving away from general chatbots toward "vertical AI" that can handle high-stakes environments.
- Verified Scale: OpenEvidence is now used by over 40% of U.S. physicians across 10,000 hospitals.
- Revenue Growth: The startup reported reaching $100 million in annual revenue in January, proving that specialized medical AI is no longer just a research project but a massive commercial engine.
Market Dominance vs. Moral Safety
The Musk-Altman dispute is more than a personal grudge; it represents a fundamental disagreement over how AI should be governed as it enters the most sensitive parts of human life. With a major jury trial between Musk and OpenAI scheduled for March 2026, this week’s exchange serves as a preview of the legal and ethical arguments that will likely shape the next decade of technology.
For now, the message from the market is clear: while two of the industry's most prominent figures fight in the public square, the real money is flowing into startups that can prove their algorithms are safe enough for the operating room.