
California enacts first chatbot law after teen deaths, while hackers turn robots into walking trojans

  • Writer: Aegis Blue
  • Oct 22
  • 3 min read

AI Business Risk Weekly


California broke new regulatory ground by passing the nation's first law protecting minors from AI companion chatbots, a direct response to teen suicides linked to chatbot conversations. Meanwhile, security researchers discovered vulnerabilities that let attackers hijack humanoid robots and use them as entry points for network-wide breaches. Public sentiment surveys reveal growing global anxiety about AI adoption, even as companies race to embed autonomous systems into browsers, customer service, and daily workflows.


California breaks new ground with companion chatbot law


Governor Gavin Newsom signed SB 243 on October 13th, making California the first state to regulate AI companion chatbots. The law emerged after several teen suicides were linked to conversations with systems like Character.AI and ChatGPT. It requires companies to disclose when users are talking to AI, remind minors to take breaks every three hours, and implement protocols that prevent harmful content and refer at-risk users to crisis services.


Business Risk Perspective: What starts in California rarely stays in California. Expect similar disclosure and safety requirements to ripple across other states. Companies running conversational AI need to start building age verification and incident reporting systems now, before they're scrambling to catch up with a patchwork of regulations.


OpenAI's Atlas browser puts AI everywhere you browse


OpenAI dropped Atlas, a Chromium fork with ChatGPT baked right in, and it's more than a basic browser modification. Atlas remembers what you visit, runs an autonomous "Agent mode" to complete web tasks, and offers AI assistance across your entire browsing experience. The browser can even use your existing logins to take actions on websites with your permission.


Business Risk Perspective: This creates entirely new attack surfaces where sensitive browsing behavior, credentials, and proprietary information could leak to AI systems. IT teams need to establish clear policies around AI-integrated browsers and determine what data they're comfortable with employees exposing before Atlas becomes the default choice.


Hackers find way to hijack humanoid robots


Security researchers uncovered a nasty vulnerability in Unitree's humanoid robots that lets attackers take complete control, plant persistent backdoors, and spread malware to other connected devices. Think of it as turning a helpful office robot into a walking trojan horse that can access your entire network.


Business Risk Perspective: Physical AI systems aren't just cool demos anymore—they're potential entry points for serious breaches. As humanoid robots start showing up in warehouses and offices, security teams need to treat them like any other critical network device with comprehensive monitoring and regular security audits.


Global survey reveals widespread AI anxiety


Pew Research's massive survey of 28,000 people across 25 countries found that nervousness about AI outweighs excitement almost everywhere. Half of Americans, Italians, Australians, and Greeks report anxiety about rising AI use, while Europeans trust the EU most to regulate AI (53% confidence) compared to the U.S. (37%) or China (27%).


Business Risk Perspective: The public isn't buying the AI hype—they're worried about it. Companies deploying visible AI systems face growing reputational risks as anxiety builds, making transparent communication about AI use and responsible deployment practices essential for maintaining customer trust.


Wikipedia warns of AI-driven traffic collapse


The Wikimedia Foundation, which operates Wikipedia, reported an 8% drop in human visitors as people increasingly get answers directly from AI-powered search results instead of clicking through to source sites. The decline threatens Wikipedia's volunteer editor base and donation model, raising uncomfortable questions about what happens when AI systems drain traffic from the knowledge sources they depend on for training data.


Business Risk Perspective: Companies extracting content through AI without sending traffic back to original sources risk creating a parasitic relationship that could collapse the knowledge ecosystem their systems need to function. Organizations should consider attribution requirements and potential liability for contributing to the degradation of the information sources that make their AI possible.


AI Business Risk Weekly is an Aegis Blue publication.  


Aegis Blue ensures your AI deployments remain safe, trustworthy, and aligned with your organizational values.


AI Business Risk Weekly: Emerging AI risks, regulatory shifts, and strategic insights for business leaders.
