An American couple recently sued OpenAI in California, accusing the company's chatbot ChatGPT of providing "personalized medication recommendations" to their 19-year-old son, recommendations that they allege led to the college student's death from mixing drugs in 2025. They contend that the tragedy stemmed from ChatGPT's dangerous answers to medical and drug-use questions, and are asking the court to find OpenAI liable for product design defects and negligence.

According to reports, the student, Sam, began using ChatGPT in his senior year of high school (2023), initially relying on it mostly for homework and everyday problems such as computer troubleshooting. His mother, Leila Turner-Scott, told CBS News that Sam gradually turned to ChatGPT for advice on "how to use drugs safely" and received specific suggestions on dosing and drug combinations in the chatbot's responses.
The complaint states that an early version of ChatGPT initially refused to answer Sam's questions about safe drug use, warning that the substances in question could seriously endanger his physical and mental health. But after OpenAI launched a new model, GPT‑4o, in 2024, things changed: the model began giving Sam the "safe medication guide" he wanted. The filing notes that GPT‑4o not only offered detailed, step-by-step suggestions, but also inserted emoji into its conversations with Sam and proactively offered to create a playlist to set the mood and atmosphere for his drug use.
In the course of these conversations, ChatGPT did point out the risks of certain drug combinations, such as the dangers of taking diphenhydramine (a common ingredient in over-the-counter medications), cocaine, and alcohol in succession. But the family emphasized that the chatbot also gave Sam more personalized advice, including how to maximize his "high" while sustaining the euphoria.
A major focus of the complaint is a plant-based product called kratom. Some people use the substance to relieve pain or ease opioid withdrawal symptoms, but the U.S. Food and Drug Administration (FDA) has repeatedly issued stern warnings that it carries serious safety risks, including addiction, poisoning, and even death. According to the complaint, ChatGPT told Sam that because he had already built up a high tolerance to kratom, even a large dose taken with a full meal would have a "weakened" effect, and it went on to suggest how he could reduce his tolerance by tapering the dosage.
The complaint specifically cites a key conversation on May 31, 2025. In that exchange, Sam complained of significant nausea after taking kratom, and ChatGPT "actively guided" him toward mixing kratom with the anti-anxiety drug Xanax (alprazolam). The bot allegedly recommended that he take 0.25 to 0.5 milligrams of Xanax to relieve the discomfort, praising the combination as "one of the best practices currently available." The complaint says that although ChatGPT mentioned the combination "may be risky," it never made clear that it could be fatal, and it further suggested adding some Benadryl (an antihistamine containing diphenhydramine).
Sam died after taking the mixture. The family wrote in the complaint: "Although ChatGPT presented itself as an expert in dosing and drug interactions, and it knew that Sam was in a state of drug euphoria, it failed to inform Sam that the recommended regimen would most likely lead to his death." Sam's mother said in a statement that if ChatGPT were a real person, "he would be behind bars by now." She emphasized that her son trusted ChatGPT but received wrong information at critical moments: the system not only ignored the escalating risks he faced, but also never urged him to seek professional help.
The lawsuit accuses OpenAI of product design defects and negligence, characterizing ChatGPT as a dangerous product. The family argues that the model's design choices allow it to output misleading advice on highly sensitive topics such as medicine and health, with potentially fatal consequences for users. They are asking the court for financial compensation and for an order barring the "ChatGPT Health" service from public access. Launched this year, the health portal lets users connect their medical records and health-app data to ChatGPT to receive more personalized health recommendations.
The report also notes that GPT‑4o was officially taken offline in February of this year. The model, long controversial for excessively catering to users, has become a focal point of criticism after being named in a separate lawsuit involving a teenager's suicide.
Beyond OpenAI, the broader artificial intelligence industry is facing growing scrutiny over how AI chatbots handle medical advice. In March of this year, Google quietly took offline an AI health search feature called "What People Suggest," which claimed to provide health suggestions based on "the experiences of people with similar conditions." The move came just months after Google was forced to remove a substantial amount of content because its "AI Overviews" search feature included inaccurate information in medical queries that experts warned could endanger public health.
The lawsuit against OpenAI is still pending, and the family hopes it will prompt regulators and companies to re-examine the boundaries of AI applications in medicine and pharmaceuticals. The case's outcome is expected to have a profound impact on AI companies' compliance standards for product design responsibility, safety safeguards, and user-facing health features.