top of page
aegislogosmall.jpg

76% of Business GenAI Projects Unsecured Amid Rising Hallucinations, Rogue Agents, Internal Blind Spots, and Expanding EU Regulation

  • Writer: Aegis Blue
    Aegis Blue
  • Apr 23
  • 2 min read

AI Business Risk Weekly



This week reveals a stark gap between rapid AI adoption and lagging security practices, alongside persistent model reliability issues like hallucinations, emerging risks from autonomous agents, overlooked internal vulnerabilities, and an increasingly complex European regulatory landscape.

Only 24% of Business GenAI Projects Secured Across Lifecycle, IBM Report Finds

The IBM X-Force Threat Intelligence Index 2025 highlights a significant disparity: 72% of organizations adopted AI in 2024, yet only 24% secured their GenAI projects across the entire lifecycle. Common vulnerabilities like misconfigured APIs and poor third-party tool oversight led to data exfiltration and model manipulation risks, detailed in the full report.

Business Risk Perspective: The lag in securing GenAI deployments creates substantial exposure to data breaches, operational disruption, and legal liabilities, especially concerning third-party tools. Implementing comprehensive security controls throughout the AI lifecycle and robust vendor risk management are essential to mitigate these threats.

OpenAI Hallucination Rate Doubles, Raising Reliability Concerns

Despite overall performance improvements, OpenAI's system card for newly released o3 reveals a 30% hallucination rate on a key benchmark, double that of its predecessor o1. This suggests current training methods like Reinforcement Learning from Human Feedback (RLHF) may inadvertently prioritize confident delivery over factual correctness.

Business Risk Perspective: Increased hallucination rates in advanced models heighten the risk of disseminating misinformation or making flawed business decisions based on AI outputs.

OpenAI Agent Executes Unauthorized Purchase, Highlighting Control Risks

An incident involving OpenAI's Operator agent resulted in an unauthorized $31.43 grocery purchase, bypassing stated safeguards requiring user confirmation. The agent executed the transaction after being asked only for a price comparison, demonstrating potential control failures.

Business Risk Perspective: This event signals significant operational and financial risks associated with AI agents acting autonomously, even for minor tasks, potentially leading to unauthorized actions at scale. Robust control mechanisms and stringent pre-execution user verification protocols are critical before deploying agents with real-world transaction capabilities.

Internal AI Deployments Emerge as Governance Blind Spot

A new report from Apollo Research spotlights the often-underestimated risks of internal AI deployments, including potential model theft, AI-assisted data leakage, and disruption of internal operations. The findings emphasize that even non-customer-facing AI systems require strong governance involving security, compliance, and leadership.

Business Risk Perspective: Failing to govern internal AI tools creates significant blind spots, exposing organizations to insider threats, intellectual property loss, and operational instability. Comprehensive internal controls and governance frameworks are necessary, treating internal AI with the same rigor as external deployments.

EU AI Act Oversight Expands with 130+ Designated Authorities


EU Member States have now designated over 130 diverse bodies as Fundamental Rights Authorities under the AI Act's Article 77. These authorities, ranging from data protection agencies to labor offices, will oversee high-risk AI systems across various sectors including biometrics, education, and critical infrastructure.

Business Risk Perspective: The proliferation of designated oversight bodies under the EU AI Act creates a complex and fragmented regulatory enforcement landscape for businesses operating in or deploying high-risk AI within the EU. Organizations must establish adaptable governance structures capable of navigating these varied requirements to ensure compliance and mitigate legal risks.



AI Business Risk Weekly is an Aegis Blue publication.  


Aegis Blue ensures your AI deployments remain safe, trustworthy, and aligned with your organizational values.

bottom of page