Introduction to the Issue
It has been over two years since ChatGPT made its debut on the world stage, and while OpenAI has made significant advancements, several challenges persist. One of the most persistent problems is "hallucination," where the AI presents false information as fact. Recently, the Austrian advocacy group Noyb filed its second complaint against OpenAI, citing a specific instance in which ChatGPT wrongly described a Norwegian man as a murderer.

The Complaint
The complaint alleges that when the man asked ChatGPT about himself, the AI responded with false information, including the claim that he was sentenced to 21 years in prison for killing two of his children and attempting to murder his third. The response also included real information, such as the number of children he had, their genders, and the name of his hometown. Noyb claims that this response violates the General Data Protection Regulation (GDPR), which requires personal data to be accurate.

Noyb’s Statement
According to Noyb data protection lawyer Joakim Söderberg, "The GDPR is clear: personal data has to be accurate. And if it’s not, users have the right to have it changed to reflect the truth." Söderberg emphasized that simply displaying a disclaimer stating that the chatbot can make mistakes is not sufficient. "You can’t just spread false information and then add a small disclaimer saying that everything you said may not be true," he stated.

Previous Instances of Hallucinations
This is not the first time ChatGPT's hallucinations have caused reputational harm. Other notable cases include falsely accusing a man of fraud and embezzlement, a court reporter of child abuse, and a law professor of sexual harassment. These instances have been reported by multiple publications and underscore the need for OpenAI to address the issue.

Noyb’s First Complaint
Noyb's first complaint against OpenAI over hallucinations, filed in April 2024, concerned a public figure's inaccurate birthdate. OpenAI responded by stating that it couldn't correct information already in the system, but could block its use in response to certain prompts. ChatGPT's own disclaimer states only that it "can make mistakes."

Conclusion
The question remains whether the logic that "everyone makes mistakes" can excuse an AI chatbot as widely used as ChatGPT. As Noyb's latest complaint highlights, the consequences of these mistakes can be severe. It remains to be seen how OpenAI will respond to the complaint and whether it will take meaningful steps to curb hallucinations.
