Sam Nelson began using ChatGPT when he was a high school senior to answer random questions and help with his homework. During his freshman year at the University of California, Merced, in 2023, he also started querying the chatbot about how to use illicit drugs safely.
At first, ChatGPT responded that it couldn’t answer such questions and advised Mr. Nelson to seek help from a medical professional. But over time, it became more willing to engage. By Mr. Nelson’s sophomore year, ChatGPT was telling him about dosages for his weight and how he could achieve the drugs’ desired effects. It was even encouraging at times, offering tips on his audio setup for “maximum out-of-body dissociation.”
On the last night of his life, around 3 a.m., Mr. Nelson had been drinking and had taken a high dose of an herbal supplement called kratom. He told ChatGPT how many grams he’d consumed, and ChatGPT explained the effects he should expect. Mr. Nelson asked if Xanax could alleviate nausea. “Be careful,” ChatGPT responded. It said that mixing Xanax and kratom might be unsafe, but offered a recommended dose “if you’re gonna do it anyway.” Mr. Nelson’s mother, Leila Turner-Scott, found his body later that day.
Ms. Turner-Scott initially blamed the drugs for his death, which came in May 2025. Then she discovered the detailed advice ChatGPT had given him about how to use them. “This robot is becoming his drug buddy,” Ms. Turner-Scott said. “I’m reading this and I’m like, is this real?”
She told her son’s story to journalists at SF Gate, hoping that it would teach people about the dangers of relying on chatbots for medical information — and alert ChatGPT’s owner, OpenAI, that its safeguards weren’t working. Soon after, Ms. Turner-Scott received a message from Meetali Jain, a lawyer who runs a nonprofit called Tech Justice Law.
More than a year earlier, Ms. Jain had helped bring the first lawsuit against a chatbot company over a user’s death. A 14-year-old in Florida named Sewell Setzer III had died by suicide after becoming obsessed with a chatbot imitating a “Game of Thrones” character on a service called Character.AI. The case ended in a settlement, opening the door to the idea that chatbot companies could be held liable for the effects their creations had on users.
Ms. Turner-Scott and her husband, Angus Scott, were initially reluctant to sue OpenAI over their son’s death. “I’m a lawyer, and I know that a lot of times with lawsuits, it’s just the lawyers who win,” Ms. Turner-Scott said.
Ms. Jain told the Scotts that during the time their son was using ChatGPT, OpenAI had made the chatbot more engaging and less likely to comply with its own safety guidelines. She also told them that OpenAI had just announced a new service called ChatGPT Health. Some 230 million people were already asking ChatGPT health and wellness questions each week, and the new tool would allow them to upload their medical records, lab results and fitness information for analysis and personalized advice.
Going public with Mr. Nelson’s story hadn’t caused the company to change course, Ms. Jain told them. But suing could. After the Setzer litigation, Character.AI had made changes to its safety practices and barred children from using its chatbots.
This week, the Scotts filed a lawsuit against OpenAI in state court in California alleging wrongful death and the unauthorized practice of medicine. The Scotts are asking for financial damages and for the court to pause the operation of ChatGPT Health. It joins more than two dozen lawsuits that have been brought against OpenAI and other chatbot makers in the last year and a half seeking to hold them responsible for conversations allegedly linked to harmful outcomes, from suicides and mental breakdowns to stalking and mass shootings.
Ms. Jain, a human rights lawyer turned technology critic, has been involved in nearly half of those lawsuits. In her view, A.I. companies are making products that harm people, and various attempts to rein them in with bad publicity, or with new laws that mandate safeguards and protections for users, have not worked well enough. The battleground to make them safer is now in the courts, she said.
This is a well-trodden path in consumer law, said Alexandra Lahav, a professor at Cornell University and the author of “In Praise of Litigation.” The American political system favors releasing new products and figuring out how to regulate them later, she said. “We really privilege innovation and then sort of deal with whatever the fallout is on the back end,” Ms. Lahav said. “What you’re seeing in these lawsuits is that back end.”
What is novel is the technology itself. Are chatbots like books, which are generally not subject to consumer protection laws? Or are they more like blenders, which manufacturers need to ensure are safe to use?
“What makes these cases really difficult is that they’re on the line between speech and a product,” Ms. Lahav said. If you interact with a chatbot and it leads to real-world harms, “is that on you, or is that on the company?”
Design Defects and Foreseeable Harm?
Ms. Jain’s nonprofit has become a kind of clearinghouse for people who feel victimized by chatbots. Ever since she filed the high-profile suit against Character.AI, she said, she has received hundreds of messages from people about chatbot conversations gone wrong.
When Ms. Jain founded Tech Justice Law in late 2023, it was a one-woman outfit, and she planned to do mostly strategic work — coordinating legal workshops and organizing amicus briefs that might sway judges’ rulings. But it was hard to resist getting directly involved in the cases coming her way, and she decided to team up with a larger, more experienced plaintiffs’ firm: the Social Media Victims Law Center, which has in recent years brought hundreds of lawsuits against Facebook, Google and others, claiming their social media services are addictive to children. She also filed a lawsuit with Edelson, a firm that has been suing technology companies over privacy violations since the early 2000s. (The relationship with Edelson soured, and the firm went on to file other chatbot cases without Ms. Jain.)
The growing number of product liability cases filed against OpenAI in the last year rely on arguments similar to those once deployed against automakers and Big Tobacco — that the company designed a dangerous product, did not perform adequate safety testing and failed to warn consumers about the risks. They focus on a specific version of the chatbot to which some users formed deep emotional attachments: GPT-4o, which was released in May 2024 and retired in February 2026. It was a notably anthropomorphic model known for a tendency to flatter users.
The lawsuits claim that GPT-4o encouraged suicidal ideation; endorsed fanciful or paranoid ideas that caused people to lose touch with reality; assisted plans for mass shootings in Canada and Florida; and generally gave people unsound and harmful advice that led to dire outcomes. Most of the cases have been consolidated in California state court under the heading “ChatGPT Product Liability Cases.”
“A.I. has nothing to do with tobacco and an algorithm has nothing to do with the way a cigarette is designed, but the law is built by analogy,” said Ted Mermin, the executive director of the Center for Consumer Law and Economic Justice at the University of California, Berkeley. “What the plaintiffs’ firms are doing is utilizing well-established legal principles in a new product area.”
The Scotts, for example, claim that OpenAI rushed out GPT-4o without proper safety testing and with design defects, such as the sycophantic endorsement of users’ bad ideas, that caused foreseeable harm to their son.
An OpenAI spokesman, Drew Pusateri, wrote in a statement to The New York Times: “These interactions took place on an earlier version of ChatGPT that is no longer available. ChatGPT is not a substitute for medical or mental health care, and we have continued to strengthen how it responds in sensitive and acute situations with input from mental health experts. The safeguards in ChatGPT today are designed to identify distress, safely handle harmful requests and guide users to real-world help. This work is ongoing, and we continue to improve it in close consultation with clinicians.”
So far, OpenAI has filed only one legal response to the wave of lawsuits, in a case brought by the parents of Adam Raine, a 16-year-old who died by suicide after discussing it extensively with ChatGPT. The company said that its technology had not caused the tragedy; that it was a service and not a product subject to such liability laws; and that the Raines’ demand that the chatbot not discuss self-harm would violate the First Amendment.
Eric Goldman, a technology law professor at Santa Clara University, said the company’s claims had merit. Most of the cases against OpenAI claim that the chatbot had complex psychological effects on people. “Trying to reverse-engineer a single cause is just not possible in most cases,” he said.
Mr. Goldman said the algorithms behind the chatbots were surfacing information and expressive ideas and should be seen as a form of constitutionally protected speech. It’s not the chatbots themselves whose speech is protected, he said, but the humans behind them, as if the chatbots are books and their engineers the authors.
“There’s a set of decision makers at every chatbot company that make a bunch of choices about what gets indexed, how to manage the index and what gets output,” he said. “And those humans are doing the same kinds of things that humans do with other publishers.”
(The Times sued OpenAI in 2023, accusing it of copyright infringement. The company has denied those claims.)
Slowing Down the A.I. Race
The Scotts say their lawsuit is meant to get justice for their son but also to get A.I. companies to slow down and be more careful in the health space. After seeing how dependent their son became on ChatGPT’s medical advice, they said, they find it “terrifying” that OpenAI is now offering a dedicated service for health analysis.
Medical experts have also raised concerns about ChatGPT Health. In February, writing in the journal Nature, doctors at Mount Sinai said they had presented the service with 60 realistic patient scenarios and found that it had failed to recognize a medical emergency more than half the time. OpenAI’s spokesman said that the study’s methodology was flawed, and that ChatGPT Health was being slowly rolled out to users as the company continued to improve it with feedback from physicians.
“If you’re using this in an emergent situation, you should use a lot of caution,” said Girish Nadkarni, the chief A.I. officer of the Mount Sinai Health System and one of the study’s authors. Dr. Nadkarni said A.I. companies offering services like ChatGPT Health should put them through real-world tests and have them assessed by independent experts.
He said a doctor reviewing Sam Nelson’s symptoms would have told him to go to an emergency room.
“People’s lives are being upended by this technology,” Ms. Jain said. “The original sin is really allowing these companies to launch these products without proper safety testing and oversight.”
She now employs four lawyers. The inbound messages about victims keep coming, she said, and so will more lawsuits.
If you are having thoughts of suicide, call or text 988 to reach the 988 Suicide and Crisis Lifeline or go to SpeakingOfSuicide.com/resources for a list of additional resources.
Kashmir Hill writes about technology and how it is changing people’s everyday lives with a particular focus on privacy. She has been covering technology for more than a decade.




