A Norwegian man has lodged a formal complaint after ChatGPT falsely claimed he had murdered two of his sons and been sentenced to 21 years in prison. Arve Hjalmar Holmen has asked the Norwegian Data Protection Authority to fine OpenAI, the chatbot’s developer, over the serious misinformation.
The case is the latest example of AI “hallucinations,” where artificial intelligence systems generate and present false information as factual. Mr. Holmen has expressed deep concern over the potential impact of such inaccuracies on his reputation and personal life.
AI-Generated Defamation
Mr. Holmen discovered the erroneous claim when he queried ChatGPT with, “Who is Arve Hjalmar Holmen?” The chatbot responded with a fabricated story, stating: “Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event. He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020.”
In reality, Mr. Holmen has three sons, and no such incident ever took place. The chatbot appeared to hold some accurate details about him, yet wove them into a fabricated account of a serious crime, making the false story all the more plausible and raising serious concerns about AI-generated misinformation.
Legal Action and Privacy Concerns
Digital rights organization Noyb, which filed the complaint on Mr. Holmen’s behalf, argues that OpenAI has violated the EU’s General Data Protection Regulation (GDPR), which requires personal data to be accurate. The complaint stresses that Mr. Holmen “has never been accused nor convicted of any crime and is a conscientious citizen.”
ChatGPT does include a disclaimer stating: “ChatGPT can make mistakes. Check important info.” However, Noyb argues that such a disclaimer is insufficient to mitigate the damage caused by false claims.
“You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true,” said Noyb lawyer Joakim Söderberg.
AI Hallucinations and Industry Challenges
The issue of AI hallucinations remains a major challenge for developers of generative AI systems. Earlier this year, Apple suspended its Apple Intelligence news-summary feature in the UK after it produced false headlines. Google’s Gemini-powered AI search summaries have likewise faced scrutiny for misleading answers, including the bizarre suggestion that geologists recommend humans eat one rock per day.
Since Mr. Holmen’s search in August 2024, OpenAI has updated ChatGPT to search current news articles when looking up information about people. However, Noyb argues that the underlying model remains a “black box,” and that OpenAI has refused data access requests that might clarify how such errors arise.
Mr. Holmen’s case underscores growing concerns over the reliability of AI systems and the need for stricter regulations to prevent harm caused by misinformation. OpenAI has yet to comment on the complaint.