
OpenAI Faces New GDPR Complaint Over ChatGPT’s False Claims

Published: March 20, 2025

Reading Time: 3 minutes

OpenAI is under fire once again in Europe as privacy watchdogs take issue with its AI chatbot, ChatGPT, for spreading false information.

The latest complaint, filed with the support of privacy rights group Noyb, highlights the serious consequences of AI-generated misinformation – this time involving an individual falsely accused of murder.

AI Hallucinations

The complaint centers on Arve Hjalmar Holmen, a Norwegian citizen who was shocked to discover that ChatGPT falsely claimed he had been convicted of murdering two of his children and attempting to kill a third.

While the chatbot got some personal details correct – such as his number of children and hometown – it fabricated a horrifying criminal past.

This is not the first time ChatGPT has generated inaccurate personal data. Previous incidents have involved errors in birth dates and biographical details. However, the gravity of this case raises urgent questions about AI’s responsibility in handling personal information.

The GDPR’s Role in AI Accountability

Under the European Union’s General Data Protection Regulation (GDPR), individuals have the right to correct inaccurate personal data. Additionally, companies processing personal data must ensure its accuracy.

Despite this, OpenAI has largely responded to concerns by offering to block responses to problematic queries rather than providing a way for individuals to correct false information. Privacy advocates argue that disclaimers about AI mistakes are insufficient.

As Noyb’s data protection lawyer Joakim Söderberg states, “You can’t just spread false information and then add a disclaimer saying it may not be true.”

Violating GDPR can carry severe penalties, with fines reaching up to 4% of a company’s global revenue. If regulators determine that OpenAI failed to meet legal requirements, the company could face significant financial and operational consequences.

A History of Privacy Complaints Against OpenAI

This isn’t OpenAI’s first run-in with European privacy regulators. In 2023, Italy’s data protection authority temporarily banned ChatGPT due to GDPR concerns. The ban led OpenAI to implement changes, but regulators still fined the company €15 million for processing personal data without a legal basis.

Since then, European regulators have taken a more cautious approach, trying to balance innovation with compliance. However, unresolved complaints, such as one filed in Poland in 2023, suggest ongoing uncertainty about how to apply GDPR to generative AI tools.

Why This Case Stands Out

Unlike previous complaints that involved minor inaccuracies, this case highlights a serious reputational risk. Holmen’s story is not an isolated incident – ChatGPT has also falsely implicated others, including an Australian mayor in a bribery scandal and a German journalist in child abuse allegations.

Noyb argues that these errors are not just unfortunate glitches but systemic issues that require stronger regulatory intervention. The complaint against OpenAI was filed with Norway’s data protection authority, but it remains to be seen whether the case will stay in Norway or be referred to Ireland, where OpenAI’s European headquarters is located.

The Future of AI and Privacy Regulations

As AI technology advances, the challenge of preventing misinformation grows. Large language models like ChatGPT predict the next word based on patterns in vast training datasets, which can sometimes lead to bizarre and harmful errors known as hallucinations.

While OpenAI has made updates to reduce these hallucinations, privacy advocates remain concerned about how false data is stored and processed internally.

Kleanthi Sardeli, another Noyb lawyer, emphasizes that disclaimers do not absolve AI companies from compliance. “Adding a disclaimer that you do not comply with the law does not make the law go away,” she states.

With this new complaint, European regulators face mounting pressure to take decisive action. Whether this leads to stricter AI regulations, heavier fines, or significant changes to how AI models operate remains to be seen. One thing is clear: the debate over AI’s responsibility for misinformation is far from over.

Onome

Contributor & AI Expert