A privacy group in Europe is taking legal action against OpenAI, the company behind ChatGPT. The dispute began when ChatGPT invented a fake story about Arve Hjalmar Holmen, a man from Norway, falsely claiming he had been convicted of murdering two of his children and attempting to kill a third. The fabricated story shocked Holmen and his community; none of it is true.
Why This Is Against the Law
In Europe, strict privacy rules known as the GDPR require companies to keep personal information accurate, and people have the right to correct false details about themselves. The privacy group Noyb says OpenAI broke these rules because ChatGPT produced harmful lies and offered no way to correct them.

When someone reports a false answer, OpenAI usually blocks it. Noyb argues this is not enough. The chatbot displays a small disclaimer saying, “ChatGPT can make mistakes,” but the group says a warning label does not excuse spreading dangerous lies.
Other Times ChatGPT Made Up Stories
This is not the first time ChatGPT has invented false information. In Australia, the chatbot wrongly accused a mayor of involvement in bribery. In Germany, it falsely named a journalist as a child abuser. These errors stem from how systems like ChatGPT work: they generate text by predicting likely word patterns from their training data, with no built-in check on whether the result is true.
Noyb searched news archives and found nothing that could explain why ChatGPT would produce fabricated claims about Holmen in particular. The chatbot correctly identified his home town and the fact that he has three children, then wrapped invented crimes around those real details. This mix of truth and lies made the story seem believable.
How Governments Are Reacting
Breaking GDPR rules can lead to huge fines, up to 4% of a company’s global annual revenue. Italy once fined OpenAI 15 million euros and briefly banned ChatGPT. European regulators are still working out how to handle errors made by AI systems, and complaints filed in 2024 are still being investigated by Irish authorities.
Noyb hopes this legal case will push regulators to act quickly. False claims generated by AI, it argues, cause fast and irreversible damage to people who have few tools to defend themselves against such misinformation.
OpenAI’s Fixes and Remaining Risks
After Noyb’s initial complaint, ChatGPT stopped repeating the fabricated murder story about Holmen. The chatbot now searches the internet when asked about people, which has reduced how often it invents answers. Noyb warns, however, that the false information may still be stored inside the model and could resurface later. OpenAI has declined to comment on the matter. Critics say the company’s improvements do not undo past inaccuracies and offer no guarantee of future compliance with the law.

The Human Cost of AI Mistakes
Arve Hjalmar Holmen and other victims still fear that the false stories about them could resurface, and reputations can keep suffering from misinformation even after ChatGPT stops spreading it. The case makes the need for clear regulatory standards urgent: unregulated chatbots that spread spurious content can keep harming innocent people as long as they operate without rules. As businesses adopt AI more widely, proactive safeguards are needed to stop these tools from becoming platforms for defamation.