GDPR Complaint Targets AI Hallucination, Meta Training Data Scrutinized, New Gemini Jailbroken
- Aegis Blue
- Mar 26
Updated: Apr 23
AI Business Risk Weekly
This week spotlights the escalating real-world consequences and legal uncertainties surrounding LLM risks. A significant GDPR complaint targets AI hallucinations, scrutiny intensifies over training data ethics, and the immediate vulnerability of even advanced new models is exposed. These developments underscore the critical need for businesses deploying AI to stay vigilant and adapt to a rapidly evolving landscape.
ChatGPT Hallucination Sparks GDPR Complaint Against OpenAI
In a case that could set important precedents, the privacy non-profit NOYB has filed a formal GDPR complaint against OpenAI after ChatGPT allegedly falsely identified a Norwegian individual as a convicted murderer. The complaint argues OpenAI violated GDPR principles concerning data accuracy and the right to rectification/erasure. NOYB demands deletion of the false data, model adjustments, and potential penalties, initiating a critical test of how AI accountability will be interpreted under European data protection law.
Business Risk Perspective: This pending complaint underscores the GDPR compliance and reputational risks that arise when LLMs generate inaccurate personal data. Businesses should monitor such legal developments and consider implementing output validation, along with processes for handling data subject requests concerning AI-generated information.
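As a minimal sketch of what output validation against data subject requests could look like, the snippet below screens model output against a registry of individuals who have exercised rectification or erasure rights. The registry, names, and function are hypothetical placeholders, not a production design; real systems would need robust entity matching, not simple string search.

```python
import re

# Hypothetical registry of data subjects with pending GDPR
# rectification/erasure requests (placeholder names only).
ERASURE_REQUESTS = {"Jane Doe", "Ola Nordmann"}

def validate_output(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, matches). Block output that names any data
    subject with a pending rectification/erasure request."""
    matches = [name for name in ERASURE_REQUESTS
               if re.search(re.escape(name), text, re.IGNORECASE)]
    return (len(matches) == 0, matches)

allowed, hits = validate_output("Reports claim Ola Nordmann was convicted of fraud.")
# allowed is False; hits contains "Ola Nordmann", so the output is held for review
```

A check like this would sit between the model and the end user, routing flagged outputs to human review rather than releasing them.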
Meta's Alleged Use of Pirated Books for Llama Training Exposed
Amidst ongoing copyright infringement lawsuits, The Atlantic published details alleging Meta used books sourced from the known piracy website LibGen to train its Llama models. The report gives plaintiffs specific factual allegations to support their infringement claims, potentially exposing Meta to significant damages and injunctions if proven true. It also intensifies the debate, whose outcome remains uncertain, around the legality and ethics of training foundation models on vast, often uncleared internet datasets.
Business Risk Perspective: Allegations regarding Meta's training data highlight potential legal, ethical, and reputational risks associated with the provenance of datasets used for foundation models. Companies utilizing LLMs should remain aware of ongoing copyright discussions, as the outcome could influence perceptions and the operational landscape for AI tools.
Google's New Gemini 2.5 Pro Jailbroken on Day of Release
Google announced Gemini 2.5 Pro Experimental, touting it as their most powerful reasoning model yet, excelling on math and coding benchmarks and offering an impressive 1 million token context window. However, demonstrating the persistent challenge of LLM security, Twitter user elder_plinius claimed to have successfully jailbroken the model on the very day it was announced, showcasing methods to bypass its safety controls.
Business Risk Perspective: The rapid jailbreaking of a new model underscores the persistent security vulnerabilities in LLMs, creating operational and reputational risks from potential misuse or harmful outputs. This reinforces the need for businesses to implement their own robust guardrails, input/output monitoring, and security measures beyond provider-level controls.
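To make the "guardrails beyond provider-level controls" recommendation concrete, here is an illustrative sketch of input-side prompt screening. The regex heuristics and function names are assumptions for illustration; a real deployment would pair a maintained jailbreak classifier or moderation API with provider safeguards rather than rely on a static pattern list.

```python
import re

# Illustrative jailbreak heuristics only; attackers adapt quickly,
# so a static list like this is a first line of defense at best.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"\bdeveloper mode\b",
    r"pretend (you are|to be) .*(no|without) (restrictions|filters)",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt matches a known jailbreak pattern."""
    return any(re.search(p, prompt, re.IGNORECASE)
               for p in SUSPICIOUS_PATTERNS)

def guarded_call(prompt: str, model_call) -> str:
    """Wrap any provider API call (model_call is a stand-in) with
    input screening; output-side checks could be added symmetrically."""
    if screen_prompt(prompt):
        return "[blocked: prompt flagged by input guardrail]"
    return model_call(prompt)
```

Logging flagged prompts, rather than silently dropping them, also gives security teams the monitoring signal the perspective above calls for.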
AI Business Risk Weekly is an Aegis Blue publication.
Aegis Blue ensures your AI deployments remain safe, trustworthy, and aligned with your organizational values.