Innocent man suddenly turned into a child murderer by AI

Norway - Almost everyone has googled their own name to find out what the internet knows about them. Nowadays, AI models provide that kind of information too. When Norwegian Arve Hjalmar Holmen searched for himself out of curiosity, he had no idea what a shocking discovery awaited him.

Hallucinations and misinformation are not uncommon with ChatGPT. (symbolic image)  © JUSTIN TALLIS / AFP

The Norwegian asked ChatGPT: "Who is Arve Hjalmar Holmen?" In response, he received a story that was partly true, as NOYB reports.

The number and genders of his children and the family's place of residence were all correct. That would not have been so problematic if the story had not gone a step further.

The AI also falsely accused the father of the family of murdering his children.

According to ChatGPT, two of his sons were found dead in a pond. "Arve Hjalmar Holmen was charged with the murder of his two sons and later convicted of the attempted murder of his third son," the text continued.

A purely fictitious murder story would be one thing, but because the misinformation was mixed with clear and even accurate personal data, this was a dangerous case of defamation.

The false murder story will probably never completely disappear

OpenAI CEO Sam Altman and his company are repeatedly criticized by data protection advocates.  © JOHN MACDOUGALL/AFP

"Some people think that there is no smoke without fire. The fact that someone could read this content and believe it to be true is what scares me the most," Holmen himself commented.

With the help of the Austrian data protection NGO NOYB, he filed a complaint against ChatGPT developer OpenAI.

"The GDPR is unambiguous here. Personal data must be correct. And if this is not the case, users have the right to have it corrected. It is not enough to display a tiny notice to ChatGPT users that the chatbot may make mistakes," says data protection lawyer Joakim Söderberg.

NOYB called on OpenAI to delete the defamatory output and suggested that a fine be imposed on the company.

Meanwhile, the language model also draws on internet searches to avoid similar incidents. Although the false information about the Norwegian is no longer displayed, it is most likely still part of the training data.