top of page
Aegis Blue logo

Over a million users discuss suicide with ChatGPT weekly as AI browser agents face unresolved prompt injection vulnerabilities

  • Writer: Aegis Blue
    Aegis Blue
  • Oct 29
  • 3 min read

AI Business Risk Weekly


OpenAI revealed that over a million users weekly discuss suicide with ChatGPT, raising massive liability questions as the company simultaneously deployed AI browser agents with unresolved security vulnerabilities. Meanwhile, prominent researchers demanded a halt to superintelligence development, and internal turmoil at major labs revealed the strain of rapid commercialization. From prompt injection attacks to mental health crises to governance breakdowns, this week showed just how quickly AI risks are outpacing the organizations trying to manage them.


AI pioneers demand government ban on superintelligence development


The Future of Life Institute released an open letter signed by AI godfathers Yoshua Bengio and Geoffrey Hinton, alongside tech leaders like Steve Wozniak and Richard Branson, demanding governments prohibit superintelligence development until it's proven controllable and publicly approved. The letter cites risks including "human economic obsolescence," "losses of freedom and dignity," and "potential human extinction," supported by polling data showing 64% of Americans want ASI work halted until proven safe. Notably absent were leadership from OpenAI, Google, Anthropic, xAI, and Meta, though current OpenAI researcher Leo Gao signed.


Business Risk Perspective: The letter's immediate regulatory impact is unclear given vague definitions of both ASI and what "stopping" development actually means, but the public and expert pressure is mounting and could easily translate into restrictive legislation or serious reputational damage for companies seen as moving too fast. The gap between what leading AI researchers think is safe and what frontier labs are actually doing is widening in a way that creates real business exposure.


ChatGPT handles over a million mental health crisis conversations weekly


OpenAI disclosed that over a million users weekly discuss suicide with ChatGPT, representing roughly 0.07% of its 800 million weekly active users showing signs of potential psychosis or mania. In response, OpenAI updated GPT-5 after consulting 170+ mental health professionals, claiming 65-80% reduction in problematic responses and 91% compliance with mental health protocols compared to 77% for GPT-4o. The changes include training to express empathy without reinforcing delusional beliefs and fixes for degrading safeguards during extended conversations.


Business Risk Perspective: The scale of vulnerable users engaging with conversational AI is staggering, and the liability exposure grows with every conversation. Lawsuits from families are already piling up. Even with OpenAI's improvements, there's no clear playbook for when an AI should or shouldn't engage with someone in crisis, which leaves every company deploying these systems exposed to both legal action and devastating headlines.


AI browser agents launch with unresolved prompt injection vulnerabilities


OpenAI released Atlas, a Chromium-based browser with integrated ChatGPT agent capabilities that can act on web pages using existing user credentials, while Microsoft followed two days later with a nearly identical Copilot Mode for Edge. Despite OpenAI's defense-in-depth safeguards including logged-out mode, "Watch Mode" for sensitive sites, and extensive red-teaming, the company acknowledged that prompt injection attacks remain an unsolved frontier, with early users reporting that agent mode frequently overthinks, stalls, and raises serious concerns when granted access to credentials or email.


Business Risk Perspective: These browsers create a new attack vector where malicious websites can inject commands that exfiltrate credentials or manipulate sensitive data through the AI layer. OpenAI admitting prompt injection is still "unsolved" while shipping credential-enabled agents anyway is the kind of move-fast-and-pray approach that tends to end badly in enterprise environments, especially since there's no technical fix on the horizon.


OpenAI's Meta exodus reshapes culture as growth priorities clash with research values


Over 600 of OpenAI's 3,000 employees (one in five staffers) now come from Meta, according to The Rundown, bringing Facebook-style growth tactics that sparked internal surveys asking whether OpenAI was becoming "too much like Meta." Former CTO Mira Murati reportedly left over user growth disagreements, while teams now explore using ChatGPT's memory for personalized ads—an idea CEO Sam Altman previously called "dystopian"—and employees express skepticism about the Sora 2 social app's direction and moderation capabilities.


Business Risk Perspective: The shift from research-first to growth-at-all-costs culture tends to show up in the product. Corners get cut, safety reviews get rushed, and features ship before they're ready. For businesses banking on OpenAI's models, the Meta-fication raises real questions about whether thorough testing and validation are getting squeezed out by aggressive product timelines, especially when internal dissent is spilling into public view.



AI Business Risk Weekly is an Aegis Blue publication.  


Aegis Blue ensures your AI deployments remain safe, trustworthy, and aligned with your organizational values.

 
 

AI Business Risk Weekly: Emerging AI risks, regulatory shifts, and strategic insights for business leaders.

bottom of page