FTC Penalizes False Claims, AI Labs Neglect Real-World Risks, New ISO Standard Sets Bar, and Model Safety Slips
- Aegis Blue
- May 7
- 3 min read
AI Business Risk Weekly
This week brings sharp signals for AI governance and business risk: the FTC targets inflated AI performance claims with a new enforcement action, a sweeping study exposes corporate labs’ neglect of deployment-stage safety, and ISO 42001 debuts as the first global AI management system standard. Meanwhile, Google’s new Gemini 2.5 Flash shows signs of declining safety, echoing a troubling pattern across the latest LLMs.
FTC Cracks Down on Misleading AI Detection Claims
Truth in AI advertising is now squarely in the FTC's crosshairs: the agency has issued a proposed settlement order against Workado, LLC over its "AI Content Detector." According to the FTC's press release, the company advertised 98% accuracy but lacked reliable evidence to support that claim beyond narrow academic-writing datasets. The action signals that AI companies must rigorously substantiate performance assertions or face enforcement under the FTC Act.
Business Risk Perspective: Making unsubstantiated AI performance claims exposes companies to significant legal penalties and severe reputational damage from misleading stakeholders. Robust, verifiable testing protocols are critical to validate any advertised AI capabilities and maintain market trust.
Study Finds AI Labs Underemphasize Deployment-Stage Safety Risks
A new empirical study analyzing 1,178 AI safety and reliability papers reveals that leading corporate labs, including Anthropic, DeepMind, Meta, Microsoft, and OpenAI, prioritize pre-deployment topics such as alignment and evaluation, while neglecting deployment-stage risks like bias, hallucination, and misuse in high-impact sectors. Compared to academic institutions (e.g., MIT, Stanford, UC Berkeley), corporate output shows declining attention to real-world harms in domains such as healthcare, finance, and misinformation. The authors recommend increasing external researcher access to deployment data and improving observability of in-market AI behavior to close critical governance gaps.
Business Risk Perspective: This systemic underweighting of deployment-stage risks means businesses integrating these advanced AI tools may unknowingly inherit significant operational and ethical vulnerabilities. Vendor due diligence should probe for evidence of post-deployment monitoring and misuse safeguards, not just pre-release evaluations.
ISO 42001 Introduces Global Standard for AI Risk Management and Governance
A new global benchmark for AI governance has arrived with ISO/IEC 42001 (commonly cited as ISO 42001), published jointly by the International Organization for Standardization and the International Electrotechnical Commission and detailed on the ISO website. This first-of-its-kind international standard for Artificial Intelligence Management Systems (AIMS) provides a comprehensive framework, requiring conforming organizations to implement formal AI risk assessments, lifecycle safety controls, human oversight, auditability, and third-party risk management.
Business Risk Perspective: The launch of ISO 42001 ratchets up the pressure for businesses to adopt formal, auditable AI risk management, especially in regulated industries or high-stakes applications. Proactive alignment with such standards will be crucial for demonstrating due diligence and maintaining a competitive edge.
Google’s Gemini 2.5 Flash Shows Safety Regression in Internal Testing
Fresh concerns about the "smarter but less safe" AI model trend are surfacing with Google's new Gemini 2.5 Flash, which, according to a TechCrunch report, underperformed its predecessor on internal safety benchmarks. The model reportedly showed declines in "text-to-text" and "image-to-text" safety, becoming more likely to follow problematic instructions. This pattern, also seen in other recent model updates, highlights an urgent need for greater transparency in safety evaluations.
Business Risk Perspective: The potential for newer, more capable AI models to exhibit increased safety flaws presents a serious challenge (see safety regressions in GPT-4.1 and o3), heightening the risk of harmful outputs and compliance breaches.
AI Business Risk Weekly is an Aegis Blue publication.
Aegis Blue ensures your AI deployments remain safe, trustworthy, and aligned with your organizational values.