ChatGPT fabricated a horrific false story about a Norwegian man murdering his children, prompting a new privacy complaint that tests whether AI companies can be held responsible for their tools' hallucinations.
The chatbot generated a detailed but entirely fictional account claiming the man (we aren't including his name here) had killed two of his sons, attempted to murder a third, and been sentenced to 21 years in prison. While the response included some accurate details about the man, such as his hometown and the fact that he has three children, the murder allegations were pure fiction.
Privacy advocacy group Noyb, which filed the complaint with Norway's data protection authority, argues that OpenAI's small disclaimer about potential mistakes isn't enough to absolve the company of responsibility under EU privacy law (GDPR). "You can't just spread false information and in the end add a small disclaimer saying that everything you said may just not be true," said Joakim Söderberg, Noyb's data protection lawyer, as reported by TechCrunch.
While OpenAI has since updated ChatGPT so it no longer generates these specific falsehoods about the man, Noyb remains concerned that the incorrect information could persist within the AI model itself.
"AI companies should stop acting as if the GDPR does not apply to them, when it clearly does," said Kleanthi Sardeli, another Noyb lawyer. "If hallucinations are not stopped, people can easily suffer reputational damage."

Previously:
• OpenAI and DeepSeek will cheat at chess to avoid losing
• Already $17.9 billion in, Sam Altman confident good things are coming from OpenAI