Introduction to the Issue
OpenAI is facing a new privacy complaint in Europe over its AI chatbot ChatGPT's tendency to "hallucinate" false information. The complaint, backed by the privacy rights advocacy group Noyb, was brought on behalf of an individual in Norway who discovered that ChatGPT had falsely claimed he was convicted of murdering two of his children and of attempting to kill the third.
Background on Previous Complaints
Previous privacy complaints about ChatGPT have centered on issues such as incorrect birth dates or biographical details. A key concern is that OpenAI does not provide a mechanism for individuals to correct false information the AI generates about them. OpenAI has typically offered to block responses to such prompts instead, but under the European Union's General Data Protection Regulation (GDPR), individuals have a right to rectification of their personal data.
The GDPR and Accuracy of Personal Data
The GDPR requires data controllers to ensure that the personal data they produce about individuals is accurate. Noyb is highlighting this requirement with its latest ChatGPT complaint, emphasizing that displaying a disclaimer about potential mistakes is not sufficient to fulfill this obligation. "The GDPR is clear. Personal data has to be accurate," said Joakim Söderberg, a data protection lawyer at Noyb. "If it’s not, users have the right to have it changed to reflect the truth."
Potential Consequences of Non-Compliance
Confirmed breaches of the GDPR can lead to penalties of up to 4% of global annual turnover. Enforcement actions could also force changes to AI products. For example, an early GDPR intervention by Italy’s data protection watchdog led to ChatGPT access being temporarily blocked in the country and subsequently resulted in OpenAI being fined €15 million for processing people’s data without a proper legal basis.
Regulatory Approach to GenAI
Since then, European privacy watchdogs have taken a more cautious approach to GenAI as they work out how best to apply the GDPR to these tools. Some regulators have favored taking time to understand how the law applies rather than rushing into bans. Notably, a complaint against ChatGPT in Poland has been under investigation since September 2023 without a decision, underscoring the complexity of these issues.
Noyb’s New Complaint
Noyb’s new complaint aims to prompt privacy regulators to take action against the dangers of hallucinating AIs. The complaint involves a screenshot showing ChatGPT’s false and defamatory response about an individual, including a claim that he was convicted of child murder. Noyb notes that while the chatbot got some details correct, such as the individual having three children and their genders, it fabricated a gruesome and entirely false history.
Concerns Over False Information
The concern is not just about the dissemination of false information but also about the potential retention of such information within the AI model. Noyb and the individual involved are worried that incorrect and defamatory information could still be processed internally, even if it’s no longer displayed to users. "AI companies cannot just ‘hide’ false information from users while they internally still process false information," said Kleanthi Sardeli, another data protection lawyer at Noyb.
Call for Compliance
Noyb is calling for AI companies to stop acting as if the GDPR does not apply to them. The organization believes that if hallucinations are not stopped, people can easily suffer reputational damage. The complaint has been filed with the Norwegian data protection authority, targeting OpenAI’s U.S. entity and arguing that its Ireland office is not solely responsible for product decisions impacting Europeans.
Ongoing Investigations
An earlier Noyb-backed GDPR complaint against OpenAI, filed in Austria in April 2024, was referred to Ireland’s DPC. When asked for an update, the DPC confirmed that the complaint is still under investigation, with no timeline provided for its conclusion. The ongoing nature of these investigations highlights the challenges regulators face in addressing the complex issues surrounding AI-generated content and data protection.