ChatGPT scandalizes Norwegians: False murder allegations uncovered!
The article highlights legal challenges for OpenAI after ChatGPT incorrectly labeled a Norwegian as a murderer.
A shocking incident is causing a stir in Europe: the Norwegian Arve Hjalmar Holmen made headlines after ChatGPT, OpenAI's AI-powered chatbot, spread false and defamatory information about him. According to a report by Cosmo, the chatbot falsely portrayed Holmen as a convicted murderer who had killed two of his children and attempted to murder his third son. The AI mixed this horrifying narrative with real details about Holmen's life, which made the situation even worse.
Such misleading "hallucinations," in which an AI invents stories, have already had devastating consequences for those affected; another example is false accusations of corruption generated by the technology. Holmen has now taken legal action against OpenAI, since the General Data Protection Regulation (GDPR) sets clear requirements for the accuracy of personal data. The advocacy group Noyb is supporting him and has contacted the Norwegian Data Protection Authority.
Legal consequences for OpenAI
The GDPR requires companies to ensure the accuracy of the data they generate about individuals. Noyb criticizes OpenAI for offering no way to correct incorrect information, which therefore remains in the system. A previous Noyb complaint from April 2024 raised similar issues but did not lead to the desired changes. The seriousness of the situation is underscored by the fact that OpenAI has since updated the way ChatGPT works, allowing the AI to search the internet for information about people. Holmen is no longer incorrectly listed as a murderer, but false and defamatory information may still persist within the AI. A Noyb lawyer emphasized that a small disclaimer about possible errors is not enough when serious false statements about people are being spread.
While European data protection authorities are closely monitoring developments, Noyb is calling for data protection laws to be enforced against AI providers as well. Further complaints about ChatGPT show that the problem is more widespread than this single case and underscore the importance of clear policies to protect personal data.