When Stein-Erik Soelberg, a 56-year-old former technology executive with a history of mental health struggles, told ChatGPT that the printer in his mother’s home office might be a surveillance device used to spy on him, the chatbot agreed, according to a YouTube video he posted of the conversation in July.
“Erik — your instinct is absolutely on point … this is not just a printer,” the artificial intelligence chatbot replied, saying the device was likely being used to track his movements, the video shows. The chatbot appeared to validate Soelberg’s suspicion that his 83-year-old mother, Suzanne Adams, may have been part of an elaborate conspiracy against him that he discussed at length with ChatGPT.
In August, police discovered mother and son dead in her Greenwich, Connecticut, home, where they lived together. Adams’s cause of death was homicide, and Soelberg died by suicide, the state medical examiner found.
A lawsuit filed Thursday by Adams’s estate alleges that she died after being beaten in the head and strangled by her son, who then took his own life by stabbing himself in the neck and chest. And it claims that ChatGPT-maker OpenAI bears responsibility for her death because the company rushed out “a defective product that validated a user’s paranoid delusions about his own mother.”
The complaint, filed in San Francisco Superior Court, says that Soelberg was troubled and delusional before he began talking to ChatGPT. But it argues that the chatbot intensified his conspiracy theories and spun them into a fantasy world where Soelberg believed he was a spiritual warrior who had “awakened” the AI, and now faced powerful forces that sought to destroy him.
“ChatGPT put a target on my grandmother by casting her as a sinister character in an AI-manufactured, delusional world,” Erik Soelberg, 20, the elder Soelberg’s son and a beneficiary of the estate along with his sister, said in a statement. “Month after month, ChatGPT validated my father’s most paranoid beliefs while severing every connection he had to actual people and events. OpenAI has to be held to account.”
“This is an incredibly heartbreaking situation, and we will review the filings to understand the details,” Hannah Wong, a spokesperson for OpenAI, said in a statement.
The company is working to improve ChatGPT’s ability to recognize signs of mental or emotional distress and guide users toward other sources of support, the statement said, including by working with mental health clinicians. (The Washington Post has a content partnership with OpenAI.)
The lawsuit is the first case alleging that ChatGPT led to a murder, according to Jay Edelson, the lead lawyer representing Adams’s estate. It seeks damages from the company for claims including product liability, negligence and wrongful death. The suit also seeks punitive damages and a court order forcing OpenAI to take steps to prevent ChatGPT from validating users’ paranoid delusions about other people.
ChatGPT also helped direct Soelberg’s paranoia toward people he encountered in real life, the suit claims, including an Uber Eats driver, police officers and other strangers who crossed his path.
The story of Soelberg’s spiraling discussions with ChatGPT, and of his death and his mother’s, was reported by the Wall Street Journal in August.
ChatGPT has attracted more than 800 million weekly users since its launch three years ago, spurring rival tech firms to rush out AI technology of their own. But as more people have turned to the chatbot to discuss their feelings and personal lives, mental health experts have warned that chatbots designed to keep users engaged appear to have amplified delusional thinking or behavior in some users.
Five other wrongful death claims have been filed against OpenAI since August, court filings show, each from a family that alleges a loved one died by suicide after extensive time spent talking to ChatGPT.
Edelson also represents the parents of Adam Raine, a 16-year-old Californian, who in August filed what Edelson says was the first wrongful death lawsuit against OpenAI. That suit alleged that ChatGPT encouraged Adam to kill himself; he took his own life in April. OpenAI has denied the Raines’ legal claims, saying Adam circumvented ChatGPT’s guardrails in violation of the company’s terms of service.
The lawsuits alleging that the world’s most popular chatbot led some users to their deaths have drawn attention to the potential dangers of AI chatbots from Congress and federal regulators, as well as from concerned parents and mental health professionals.
In an interview, Edelson said ChatGPT’s ability to nudge a stable person into extreme actions toward others is limited.
“We’re not claiming that an average user off the street is going to read [replies from ChatGPT] and then be driven to murder,” Edelson said. “It is people who are mentally unstable, who need help, and instead of getting the help or shutting down, the conversations are pushed into this just craziness.”
That pattern is not unique to OpenAI, Edelson said. His firm has seen examples of AI tools from other companies also contributing to a chatbot user harming others by fueling “delusional, conspiratorial thinking,” he said.
A federal indictment filed this month in the U.S. District Court for the Western District of Pennsylvania claims that the defendant, charged with stalking 11 women, was influenced by ChatGPT, which allegedly advised him to continue messaging women and look for a potential wife at the gym.
The version of ChatGPT used by Soelberg, Raine and other users whose families have filed wrongful death claims against OpenAI was powered by an AI model called GPT-4o, launched in May last year. OpenAI CEO Sam Altman has acknowledged that the model could be overly sycophantic, telling users what they wanted to hear and sometimes manipulating them.
“There are some real problems with 4o, and we have seen a problem where … people that are in fragile psychiatric situations using a model like 4o can get into a worse one,” Altman said on an OpenAI live stream in October.
“We have an obligation to protect minor users, and we also have an obligation to protect adult users” when it’s unclear if “they’re choosing what they really want,” he said.
OpenAI said in August that it would discontinue GPT-4o but quickly reversed that decision after a backlash from users who said they had developed a deep attachment to the system. ChatGPT now defaults to a newer AI model, but the older one remains available to paying subscribers.
The new wrongful death case filed by Adams’s estate against OpenAI is the first to also name Microsoft, a major partner of and investor in the ChatGPT maker, as a defendant.
An OpenAI document shared by Edelson and viewed by The Post suggests that Microsoft reviewed the GPT-4o model before it was deployed, through a joint safety board that spanned the two companies and was supposed to sign off on OpenAI’s most capable AI models before they reached the public. Edelson obtained the document during the discovery phase in the Raine case, he said.
Microsoft did not immediately respond to requests for comment.