
California’s SB 53: What Executives Need to Know About the New AI Transparency Law

  • Writer: Katy Kelly
  • Oct 1

Updated: Oct 3

On September 29, 2025, Governor Gavin Newsom signed Senate Bill 53, also known as the Transparency in Frontier Artificial Intelligence Act (TFAIA). The law introduces new transparency, reporting, and accountability obligations for developers of frontier AI models.

If your organization is using AI systems rather than developing frontier models, SB 53 does not automatically impose new compliance duties on you. The statutory obligations target large frontier developers who exceed certain computational thresholds. That said, the law shifts the baseline of what transparency and liability look like in the ecosystem.


What SB 53 Requires of Frontier AI Developers


Here are the seven main provisions of SB 53, in statutory order:


  1. Frontier AI framework (safety & protocol duties) (§ 22757.12(a))


    A large frontier developer must write, implement, and publish a frontier AI framework on its website. This framework must describe how the developer incorporates national and international standards, industry best practices, thresholds for catastrophic risk, internal governance, cybersecurity practices, testing procedures, and mitigation strategies.


  2. Transparency report at deployment (§ 22757.12(c))


    When releasing a frontier model, a large frontier developer must publish a transparency report (or equivalent system/model card) disclosing the model’s capabilities, testing, risk assessment, mitigations, and other required items.


  3. Incident reporting (§ 22757.13)


    A large frontier developer must report “critical safety incidents” involving its frontier models to California’s Office of Emergency Services (OES) within 15 calendar days of discovery. OES will maintain a reporting portal and prepare aggregate, anonymized summaries.


  4. Whistleblower protections (Labor Code, starting § 1107)


    The law expands protections for “covered employees” (those working on AI/risk functions) so they cannot be subject to retaliation or gag clauses when disclosing risk information concerning catastrophic harm or violations of TFAIA. It also requires large frontier developers to provide an internal anonymous reporting channel.


  5. Statutory penalties / civil enforcement (§ 22757.15)


    The Attorney General may assess civil penalties for violations of the law (e.g. failure to publish required disclosures, false or misleading statements about frameworks, failure to report incidents). Penalties may reach up to one million dollars per violation.


  6. Preemption of local AI rules (Sec. 5 (f))


    The law preempts (i.e. supersedes) any city, county, or local regulation adopted on or after January 1, 2025, that would regulate frontier developers on catastrophic risk matters covered by TFAIA.


  7. CalCompute / public compute cluster (§ 11546.8)


    The Government Operations Agency must establish a consortium to design a framework for a public cloud computing cluster, known as CalCompute, to support socially beneficial AI research. That provision becomes operative only if funding is appropriated.


Additionally, the law directs the California Department of Technology to review and update the definitions (e.g. “frontier model,” “large frontier developer”) and thresholds on an annual basis.


How This Affects Businesses That Use Frontier AI


Below are the practical impacts to consider for downstream users (e.g. integrators, deployers) who do not themselves qualify as frontier developers:


  1. State-level precedent with federal ripple effects


    California is home to 32 of the world's top 50 AI companies. Regulatory frameworks established here often influence other jurisdictions through direct replication or by setting de facto industry norms. While SB 53 does not create a comprehensive AI liability regime, it establishes transparency and incident reporting as baseline expectations—expectations that may migrate to procurement standards, insurance requirements, or legislation.


  2. Regulatory persistence signals continued scrutiny


    SB 53 is California's second attempt to regulate frontier AI development, following Governor Newsom's veto of the more expansive SB 1047 in September 2024. While narrower in scope, its passage demonstrates California's commitment to AI governance despite industry resistance, and the next AI bill, SB 243, may follow quickly.


    California is often the frontrunner in technology regulation and influences federal policy and other states. The passage of frontier AI legislation is a signal that regulatory attention will persist and expand. Organizations using AI systems should prepare for an environment where transparency, accountability, and incident reporting become normalized industry expectations.


  3. Transparency requirements establish new information flows and industry standards


    The mandate for large frontier developers to publish safety frameworks and transparency reports, and to report incidents to the Office of Emergency Services, creates structured information flows that affect downstream users in two ways:


  • Information availability


    Foundation model providers covered by SB 53 must disclose how they incorporate standards, assess catastrophic risks, conduct testing, and implement mitigations. This mandatory reporting creates a baseline for procurement and due diligence when evaluating covered frontier models, raising the bar from the inconsistent, voluntary disclosures available until now.


  • Standard-setting effects


    When frontier developers document their risk assessment and mitigation approaches, these practices become reference points for the broader ecosystem. Downstream deployers may increasingly be asked how their own practices incorporate, extend, or compare to the standards articulated by their foundation model providers.


    This information infrastructure does not resolve questions of liability across the AI value chain, but establishes the documentary foundation that could support clearer accountability mechanisms over time.


  4. A healthier competitive environment (speculative)


    If transparency requirements function as intended, they may contribute to a more balanced marketplace. When large frontier developers must publicly articulate safety practices and report critical incidents to state authorities, competition shifts from capability claims alone toward claims backed by evidence of responsible development. This could reduce information asymmetry, allowing downstream users to make more informed, risk-based decisions.


    Whether this materializes depends on how the ecosystem reacts to the new legislation, how vigorously the Attorney General enforces compliance, whether the Office of Emergency Services' annual anonymized summaries (starting in 2027) reveal meaningful patterns, and whether downstream buyers actually leverage transparency artifacts in procurement decisions.


Preparing Your Organization


Though SB 53 does not impose new duties on AI adopters, it will shape the discourse around AI governance and risk. Here is a recommended checklist of actions to take and why each matters:


  • Revise vendor due diligence: Add criteria for evaluating frontier AI frameworks, transparency reports, incident histories, and whistleblower governance.

  • Strengthen internal risk governance: Formalize how your organization assesses vendor risk claims, monitors incident disclosures, and escalates safety issues.

  • Monitor policy developments: Track how AI regulation evolves in other jurisdictions and how SB 53’s concepts migrate.

  • Prepare for reputational and regulatory scrutiny: Be ready to justify your choice of frontier models and your due diligence posture to stakeholders.


Bottom Line


SB 53 is an early experiment in state-level AI governance. While it doesn’t directly regulate AI adopters, it reshapes the information environment they operate in. For now, the prudent move is to treat the law as a signal: watch how disclosures take shape, learn from incident reports, and adjust your governance playbook as transparency norms stabilize.



Aegis Blue ensures your AI deployments remain safe, trustworthy, and aligned with your organizational values.

 
 

