OpenAI Faces Legal Action Over ChatGPT’s False Accusations Against Norwegian Man

Vienna: OpenAI is under legal scrutiny after its AI chatbot, ChatGPT, falsely described a Norwegian man as a murderer, raising concerns about the platform’s accuracy and potential harm to reputations.
The Vienna-based privacy group Noyb (“None of Your Business”) has filed a complaint with the Norwegian Data Protection Authority, highlighting that ChatGPT has been known to generate false and damaging claims about individuals—including accusations of corruption, child abuse, and even murder.
In this case, Norwegian user Arve Hjalmar Holmen was shocked to discover that ChatGPT had fabricated a disturbing story about him. The chatbot wrongly claimed he was a convicted criminal who murdered two of his children and attempted to kill a third—while also incorporating real elements of his personal life.
“Some people believe that ‘there is no smoke without fire.’ The thought that someone could read this and assume it’s true is what frightens me the most,” Holmen said.
Noyb argues that OpenAI’s system fails to ensure accuracy, violating EU data protection laws, which require personal data to be correct and allow individuals the right to have false information corrected. The organization is urging regulators to order OpenAI to delete the defamatory content, fine-tune its model to prevent such errors, and impose a financial penalty.
While a recent update allows ChatGPT to search the internet for current information—meaning Holmen is no longer falsely identified as a murderer—Noyb insists that the original misinformation may still linger within the model itself.
OpenAI has yet to respond to requests for comment.
This isn’t the first legal challenge against OpenAI—Noyb previously filed a similar complaint in Austria, criticizing the chatbot for generating incorrect information that users have no way to correct.