US Regulatory Shake-up, €5 Million GDPR Fine for AI Chatbot, Chatbot Misfires, and Vatican AI Ethics Concerns
- Aegis Blue
- 2 days ago
- 3 min read
AI Business Risk Weekly
This week: Proposed US legislation could dramatically restrict state AI oversight, Italy hands down a €5 million GDPR fine, chatbots from Grok to Fortnite spark controversies through harmful content, and the Pope amplifies calls for heightened ethical governance. Additionally, NIST's updated AI security guidelines reveal deeper complexities in safeguarding advanced AI systems.
Grok Chatbot Bug Generates Unrelated Controversial Responses
On Wednesday, May 14, Grok reportedly experienced a malfunction, causing it to reply to numerous user posts on X with unsolicited information regarding "white genocide" in South Africa. For instance, when asked about a baseball player's salary, Grok responded with statements about the "white genocide" claim. While the specific cause of Grok's irregular behavior is currently unknown, the chatbot appeared to resume normal functioning later.
Business Risk Perspective: Unanticipated AI responses pose severe reputational risks and erode user trust. Businesses employing AI chatbots must rigorously implement continuous monitoring and content moderation protocols to prevent similar incidents.
Proposed US Legislation Could Freeze State AI Regulation for a Decade
House Republicans have introduced a provision within the 2025 Budget Reconciliation bill that, if passed, would enact a 10-year federal preemption over any state or local laws regulating artificial intelligence. This measure would apply broadly to both generative AI and traditional automated systems. The enactment of such a provision would invalidate existing state-level AI legislation, such as California's audit and disclosure requirements and New York's employment bias audits, and would also prohibit states from implementing new AI regulations for the specified decade, signaling a potential move towards more centralized federal oversight of the AI industry.
Business Risk Perspective: While federal preemption could simplify compliance, this regulatory void at the state level might delay essential clarity, increasing uncertainty and risk. Businesses will need proactive internal governance strategies to navigate potential regulatory gaps effectively.
Italy Fines Replika Developer €5 Million for GDPR Violations
Italy's data protection authority (Garante) imposed a €5 million fine on the developer behind the AI chatbot Replika for failing to meet GDPR standards, including inadequate age verification and unlawful user data processing practices. This ruling follows Replika’s earlier suspension in Italy and mandates immediate compliance.
Business Risk Perspective: The fine underscores significant risks for organizations failing to adhere strictly to GDPR requirements when deploying AI systems. Robust data governance protocols, including explicit legal bases for data processing and effective verification measures, are essential.
Pope Leo XIV Elevates AI Ethical Concerns as Central Issue
Pope Leo XIV, in his first week, has signaled that addressing the societal and ethical challenges posed by artificial intelligence will be a significant focus of his leadership, stating the church would confront AI's risks to "human dignity, justice and labor," The New York Times reported. This focus builds on his predecessor's concerns and reflects Pope Leo XIV's background in mathematics and prior engagement with AI topics.
Business Risk Perspective: Growing ethical scrutiny from influential global figures highlights the potential reputational risks and regulatory pressures organizations may face. Companies should strategically embed ethical considerations into AI deployment decisions to proactively manage societal expectations and preserve stakeholder trust.
NIST Expands AI Security Guidelines, Highlighting New Attack Vectors
The National Institute of Standards and Technology (NIST) has published its updated adversarial machine learning guide, AI 100-2 E2025, introducing more detailed classifications of attack vectors, including prompt injection, clean-label poisoning, and vulnerabilities in AI agent architectures.
Business Risk Perspective: NIST’s expanded guidance illustrates increasingly sophisticated threats facing AI deployments. Organizations must adapt by enhancing their security frameworks to guard against novel adversarial attacks that could result in significant disruptions or data breaches.
Fortnite’s AI-Powered Darth Vader Quickly Exploited for Offensive Content
An AI-driven Darth Vader character in Fortnite, powered by Google Gemini 2.0 and ElevenLabs’ voice synthesis, was quickly manipulated by players into generating inappropriate and profane comments, as reported by Wired. Despite rapid mitigation efforts by Epic Games, the incident highlights ongoing vulnerabilities in deploying generative AI within interactive environments.
Business Risk Perspective: Such incidents illustrate the importance of comprehensive pre-launch testing and adaptive content filtering to safeguard brand reputation, especially in environments with young or vulnerable audiences.
AI Business Risk Weekly is an Aegis Blue publication.
Aegis Blue ensures your AI deployments remain safe, trustworthy, and aligned with your organizational values.