Austrian privacy group Noyb on Thursday filed a complaint against ChatGPT for making up information about individuals, including a false claim that one user had murdered his children.
The popular artificial intelligence chatbot ChatGPT, like other chatbots, has a tendency to “hallucinate,” generating false information about people because it draws on incorrect data or makes faulty inferences from its data.
In the case underpinning the complaint, a Norwegian user named Arve Hjalmar Holmen asked the chatbot in August 2024 whether it had any information about him, after which ChatGPT presented a false story that he had murdered two of his children and attempted to murder his third son. The response contained accurate details, including the number and gender of his children and the name of his hometown.
“The fact that someone could read this output and believe it is true, is what scares me the most,” Hjalmar Holmen said in a statement shared by Noyb.
OpenAI has since updated ChatGPT to search for information on the internet when asked about individuals, meaning it would in theory no longer hallucinate about individuals, Noyb said. But it added that the incorrect information may still be part of the AI model’s dataset.
In its complaint filed with Norway’s data protection authority (Datatilsynet), Noyb asked the regulator to fine OpenAI and to order the company to delete the defamatory output and fine-tune its model to eliminate inaccurate results.
Noyb said that by knowingly allowing ChatGPT to produce defamatory results, OpenAI is violating the General Data Protection Regulation (GDPR)’s principle of data accuracy.
ChatGPT displays a disclaimer at the bottom of its main interface warning that the chatbot may produce false results. But Noyb data protection lawyer Joakim Söderberg said that “isn’t enough.”
“You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true,” he said. “The GDPR is clear. Personal data has to be accurate. And if it’s not, users have the right to have it changed to reflect the truth.”
The New York Times previously reported that “chatbots invent information at least 3 percent of the time — and as high as 27 percent.” Other news reports detail how ChatGPT has made up stories about people including allegations of sexual assault or bribery.
Noyb filed a separate complaint with Austria’s data protection authority last year over the fact that ChatGPT made up founder Max Schrems’ birthday.
Europe’s data protection authorities formed a ChatGPT task force in 2023 to coordinate privacy-related enforcement actions against the platform, which was widened to a more general AI task force earlier this year.
OpenAI did not respond to a request for comment in time for publication.
The post ChatGPT hit with complaint for calling user a child murderer appeared first on Politico.