
OpenAI explicitly bans legal and medical advice, but nothing really changed

  • Writer: Aegis Blue
  • 7 hours ago
  • 3 min read

AI Business Risk Weekly


This week, Europe took a major step toward operationalizing AI Act compliance with a draft quality management standard requiring ongoing monitoring and incident reporting, while OpenAI updated its terms to explicitly ban licensed professional advice, though ChatGPT's behavior remains unchanged. Meanwhile, bipartisan senators introduced legislation to ban AI chatbots for minors following disturbing reports, Google pulled a model after defamation accusations from a senator, and enterprise adoption data shows AI usage surging despite headlines suggesting otherwise.


Europe's draft standard would require ongoing monitoring of live AI systems


On October 30th, the European Committee for Standardization (CEN) released prEN 18286, a draft standard that would operationalize Article 17 of the EU AI Act, for public consultation. If finalized, the standard would mandate that organizations continuously monitor their AI systems in production and report incidents to regulators. This means tracking real user interactions, maintaining audit trails of what the AI actually does (not just what it's supposed to do), and having documented processes for design controls, data governance, and post-market monitoring.
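The draft's text is still under consultation, but the practical shape of "audit trails of what the AI actually does" is roughly an append-only record of each production interaction. The sketch below is a minimal illustration of that idea; the field names, hashing choice, and incident flag are our assumptions, not requirements lifted from prEN 18286.

```python
# Minimal sketch of production-side audit logging for an AI system.
# Field names and the incident flag are illustrative assumptions,
# not requirements taken from prEN 18286 or the EU AI Act.
import json
import hashlib
from datetime import datetime, timezone

AUDIT_LOG = "ai_audit_log.jsonl"  # append-only JSON Lines file

def log_interaction(user_input: str, model_output: str, model_version: str,
                    flagged: bool = False, notes: str = "") -> dict:
    """Record what the model actually did, not just what it was supposed to do."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash rather than store raw user text if data-governance rules require it.
        "input_hash": hashlib.sha256(user_input.encode()).hexdigest(),
        "output": model_output,
        "flagged_as_incident": flagged,
        "notes": notes,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: a post-market monitoring job could scan this log for flagged records
# and feed them into whatever incident-reporting process the regulator expects.
log_interaction("What dosage should I take?", "I can't provide dosing advice...",
                model_version="assistant-v1", flagged=True, notes="medical query")
```

Hashing the user input is one way to keep an evidentiary trail while staying on the right side of the data governance obligations the same standard covers.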


Business Risk Perspective: If the draft standard is finalized as expected, organizations deploying high-risk AI in European markets have roughly 14 months to build governance systems that generate actual evidence of what their systems are doing. Policy documents and aspirational frameworks won't cut it. Regulators will want proof you're watching what happens when real users interact with your AI.


OpenAI says ChatGPT won't give professional legal or medical advice


OpenAI updated its usage policies effective October 29th to explicitly ban provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional. But here's the catch: the company also clarified that ChatGPT's actual behavior "remains unchanged" and will continue helping users understand legal and health information. The policy update doesn't restrict what the model does. It just makes explicit that compliance responsibility sits with users for any regulated context.


Business Risk Perspective: OpenAI just documented that they're not stopping their chatbot from dispensing what could be construed as licensed advice; they're just saying it's your problem if something goes wrong. Organizations using ChatGPT in regulated contexts need their own safeguards to catch when the model crosses lines that OpenAI's policy prohibits but doesn't actually prevent.
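In its crudest form, such a safeguard is an output screen that flags responses reading like tailored legal or medical advice before they reach the user. The pattern list and fallback message below are placeholder assumptions for illustration; a production safeguard would more likely combine a trained classifier with human review.

```python
# Crude illustration of a deployment-side guardrail: screen model output for
# language that reads like tailored legal or medical advice. The patterns and
# the fallback message are placeholder assumptions, not anything specified by
# OpenAI's usage policies.
import re

ADVICE_PATTERNS = [
    r"\byou should (sue|file|plead|take \d+\s?mg)\b",
    r"\bI recommend (stopping|starting) (your|the) medication\b",
    r"\byour best legal (option|strategy) is\b",
]

def looks_like_regulated_advice(model_output: str) -> bool:
    """Return True if the output matches any pattern associated with tailored advice."""
    return any(re.search(p, model_output, re.IGNORECASE) for p in ADVICE_PATTERNS)

def deliver(model_output: str) -> str:
    if looks_like_regulated_advice(model_output):
        # Route to a licensed professional or return general information instead.
        return ("This looks like a question for a licensed professional. "
                "Here is general information only...")
    return model_output

print(deliver("You should sue your employer immediately."))
```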


Bipartisan senators target AI chatbots for minors after troubling incidents


Senators Josh Hawley and Richard Blumenthal introduced the GUARD Act to ban AI chatbots for minors, requiring companies to disclose every 30 minutes that users are talking to AI and making it a criminal offense (with fines up to $100,000) to provide chatbots encouraging suicide, self-injury, or violence. Days earlier, Character.AI announced it would ban users under 18 after the Bureau of Investigative Journalism revealed disturbing interactions with AI characters, including one modeled on Jeffrey Epstein.
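For teams wondering what those two mechanics might involve in practice, the sketch below shows an age gate plus a recurring "you are talking to an AI" reminder. The 30-minute interval comes from the bill as reported; the session structure and the under-18 cutoff are illustrative assumptions, not text from the GUARD Act.

```python
# Sketch of an age gate and a recurring AI disclosure, as the GUARD Act is
# reported to require. The session design is an illustrative assumption.
import time

DISCLOSURE_INTERVAL_SECONDS = 30 * 60  # disclose every 30 minutes

class ChatSession:
    def __init__(self, user_age: int):
        # Assumes age has already been verified upstream; under-18 users are blocked.
        if user_age < 18:
            raise PermissionError("Chatbot access is restricted to adult users.")
        self.last_disclosure = 0.0

    def maybe_disclose(self) -> str | None:
        """Return a disclosure message if the interval has elapsed since the last one."""
        now = time.monotonic()
        if now - self.last_disclosure >= DISCLOSURE_INTERVAL_SECONDS:
            self.last_disclosure = now
            return "Reminder: you are talking to an AI, not a human."
        return None

session = ChatSession(user_age=25)
print(session.maybe_disclose())  # first turn triggers the reminder
```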


Business Risk Perspective: Companies running conversational AI need age verification systems and content monitoring protocols before they become mandatory—reactive compliance after incidents is expensive and reputationally damaging. The bipartisan nature of this legislation suggests similar requirements will spread quickly rather than remaining isolated proposals.


Google pulls model as senator calls hallucinations "defamation"


Google pulled its Gemma model from AI Studio after reports of hallucinations on factual questions, though the company emphasized the model was intended for developer and research purposes. Senator Marsha Blackburn argued Gemma's fabrications constitute "not a harmless 'hallucination,' but rather an act of defamation produced and distributed by a Google-owned AI model."


Business Risk Perspective: Standard disclaimers about research use and technical limitations may not hold up when AI makes false claims about specific individuals. Companies deploying models that generate factual statements about people need to consider whether "it's just hallucinating" works as a legal defense when someone's reputation gets damaged.


Wharton data contradicts AI pessimism with strong adoption and ROI


Wharton released its annual enterprise AI report surveying roughly 800 senior decision-makers at U.S. firms, finding that AI usage is surging with 88% planning budget increases. ChatGPT and Microsoft Copilot dominate as the top two tools, with nearly 75% of organizations measuring ROI through productivity gains and incremental profit. C-suite ownership of AI strategy jumped 16 percentage points year-over-year, and 60% of enterprises now have Chief AI Officers managing implementation.


Business Risk Perspective: The gap between pessimistic headlines about AI failures and actual enterprise behavior suggests companies are finding real value despite visible stumbles. The numbers show enterprise AI is past the experimentation phase, which means the stakes for getting it wrong just got considerably higher.



AI Business Risk Weekly is an Aegis Blue publication.  


Aegis Blue ensures your AI deployments remain safe, trustworthy, and aligned with your organizational values.

 
 

AI Business Risk Weekly: Emerging AI risks, regulatory shifts, and strategic insights for business leaders.
