Summary: California is about to enforce the nation’s first frontier AI safety law, and Anthropic just published its compliance playbook. The company’s Frontier Compliance Framework (FCF) outlines exactly how it will assess and manage catastrophic risks from advanced AI systems under SB 53, which takes effect January 1, 2026. This move could set the blueprint for how AI companies demonstrate safety practices nationwide.
Quick Answer: Anthropic’s FCF is a public document mandated by California’s SB 53 that describes how the company evaluates frontier models for catastrophic risks including cyber threats, bioweapons development, nuclear hazards, and AI systems losing human control. It becomes legally required starting January 1, 2026.
What Is California’s SB 53 and Why Does It Matter?
California’s Transparency in Frontier AI Act (SB 53) establishes mandatory safety and transparency requirements for companies developing the most powerful AI systems. Governor Gavin Newsom signed it into law on September 29, 2025, making California the first U.S. state to regulate frontier AI development with binding legal requirements.
The First State-Level Frontier AI Law
SB 53 came after Governor Newsom vetoed the more stringent SB 1047, which would have required annual third-party audits and prohibited releasing models that posed an “unreasonable risk”. Industry critics argued SB 1047 would stifle innovation, leading to a working group that recommended transparency-focused measures instead. SB 53 reflects that middle ground: it requires disclosure without mandating specific technical approaches.
The law applies only to “large frontier developers” earning over $500 million annually and training models using at least 10²⁶ floating-point operations (FLOPs). This computational threshold targets only the most resource-intensive frontier systems from labs such as OpenAI, Anthropic, and Google, exempting startups and smaller companies from compliance burdens.
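To give a feel for what the 10²⁶ FLOP bar means in practice, here is a minimal sketch (not from the statute) that uses the widely cited rule of thumb of roughly 6 FLOPs per parameter per training token to estimate whether a training run would cross the threshold. The function names and the example model size are hypothetical.

```python
# Illustrative sketch only: estimate training compute with the common
# C ~= 6 * N * D heuristic (N = parameters, D = training tokens) and compare
# the result against SB 53's 10^26 FLOP threshold.

SB53_FLOP_THRESHOLD = 1e26  # 10^26 floating-point operations

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Rough dense-transformer training cost: ~6 FLOPs per parameter per token."""
    return 6.0 * n_parameters * n_tokens

def crosses_sb53_compute_threshold(n_parameters: float, n_tokens: float) -> bool:
    return estimated_training_flops(n_parameters, n_tokens) >= SB53_FLOP_THRESHOLD

# Hypothetical example: a 1-trillion-parameter model trained on 20 trillion
# tokens comes out to ~1.2e26 FLOPs, just over the statutory bar.
print(crosses_sb53_compute_threshold(1e12, 20e12))  # True
```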
What “Catastrophic Risk” Actually Means
SB 53 defines catastrophic risk as scenarios where an AI model could cause death or serious injury to more than 50 people, or over $1 billion in property damage in a single incident. Specific threat categories include:
- Assisting in creating or deploying chemical, biological, radiological, or nuclear weapons
- Conducting cyberattacks or serious crimes without meaningful human oversight
- Evading developer control or circumventing safety mechanisms
The law recognizes that risks can emerge from internal-only deployments where only company employees use the model, not just public releases.
Strengths of SB 53’s approach:
- Establishes the first U.S. legal framework for frontier AI safety, creating baseline transparency standards
- Balances flexibility and accountability by not mandating specific technical approaches while requiring documented practices
- Exempts smaller companies from burdensome requirements, focusing only on well-resourced frontier developers
- Includes federal deference mechanism to prevent redundant compliance if federal legislation emerges
- Protects whistleblowers with explicit anti-retaliation provisions and civil penalties
- Prevents quiet weakening of safety commitments through mandatory public updates with justifications
- Creates template for federal legislation that other states and Congress can reference
Potential weaknesses:
- Creates state-by-state fragmentation if other states adopt different requirements
- No enforcement track record yet, making penalty application uncertain
- Definitions may become outdated as AI capabilities evolve faster than legislative updates
- Computational thresholds are arbitrary and may not perfectly correlate with catastrophic risk
- Redaction allowances for proprietary information could limit public oversight effectiveness
- Incident reporting timelines (15 days) may be too slow for rapidly escalating AI safety events
- Limited to California jurisdiction, potentially disadvantaging California-based companies versus international competitors
Inside Anthropic’s Frontier Compliance Framework (FCF)
Anthropic’s FCF describes assessment and mitigation strategies across seven catastrophic risk domains that frontier models might enable. The framework documents practices Anthropic has followed since 2023 but now formalizes them as legally binding commitments.
Seven Risk Categories Covered

FCF Coverage Areas: The framework addresses cyber offense capabilities, chemical weapons development, biological threat creation, radiological hazards, nuclear weapons assistance, AI sabotage scenarios, and loss of control events where models circumvent oversight.
The FCF explains Anthropic’s tiered evaluation system for testing model capabilities against each risk category. For example, biosecurity testing involves checking whether models can guide moderately skilled actors through synthesizing dangerous pathogens, while cyber evaluations test for autonomous exploitation capabilities.
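Anthropic has not published the precise thresholds behind its tiers, so the sketch below is a purely hypothetical illustration of how a tiered capability gate could be structured in code. The seven domains match the FCF’s categories, but the scores, tier names, and gating logic are invented for illustration and are not Anthropic’s actual evaluation system.

```python
# Hypothetical illustration only -- NOT Anthropic's actual evaluation logic.
# General shape of a tiered capability gate: score a model on each risk
# domain, map each score to a tier, and flag domains that would require
# stronger mitigations before deployment.

RISK_DOMAINS = [
    "cyber_offense", "chemical", "biological", "radiological",
    "nuclear", "sabotage", "loss_of_control",
]

# Invented tier boundaries on a 0-1 "capability uplift" score.
TIER_THRESHOLDS = {"baseline": 0.0, "elevated": 0.4, "critical": 0.7}

def tier_for_score(score: float) -> str:
    tier = "baseline"
    for name, threshold in TIER_THRESHOLDS.items():
        if score >= threshold:
            tier = name
    return tier

def gate_deployment(eval_scores: dict) -> dict:
    """Return the tier reached in each risk domain; a 'critical' result would
    trigger additional mitigations before the model ships."""
    return {domain: tier_for_score(eval_scores.get(domain, 0.0))
            for domain in RISK_DOMAINS}

print(gate_deployment({"biological": 0.55, "cyber_offense": 0.2}))
# -> "biological" lands in "elevated"; every other domain stays "baseline"
```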
The framework also covers model weight protection: the security measures that prevent unauthorized access to trained model parameters, which malicious actors could otherwise use to bypass safety guardrails. Anthropic must also detail incident response protocols for safety events, including notification timelines to California’s Office of Emergency Services.
How FCF Differs from Anthropic’s Responsible Scaling Policy
Anthropic has maintained a Responsible Scaling Policy (RSP) since 2023 that outlines voluntary best practices for managing AI risks. The company will continue updating its RSP to reflect what it believes safety standards should be as AI capabilities advance.
The FCF serves as the legal compliance document specifically for SB 53 and other regulatory requirements. Think of the RSP as Anthropic’s aspirational safety vision, while the FCF represents the minimum enforceable standards California law mandates. This two-track approach lets Anthropic exceed legal requirements voluntarily while maintaining a separate compliance framework that might not always align with evolving best practices.
Key Requirements Under SB 53
SB 53 imposes three main obligation categories on frontier developers: framework publication, transparency reports, and incident notification. The law includes whistleblower protections making it illegal for companies to retaliate against employees reporting safety violations.
Who Must Comply and Who’s Exempt
The law distinguishes between “frontier developers” and “large frontier developers”. Large frontier developers must meet both revenue and computational thresholds:
| Requirement | Large Frontier Developer | Frontier Developer |
|---|---|---|
| Annual Revenue | >$500 million | No threshold |
| Training Compute | ≥10²⁶ FLOPs | ≥10²⁶ FLOPs |
| Framework Publication | Required annually | Not required |
| Incident Reporting | Required | Required |
This structure ensures that well-funded companies like Anthropic, OpenAI, Google, and Meta face full transparency requirements, while smaller AI labs that train large models but lack comparable resources carry reduced compliance burdens, as the sketch below illustrates.
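As a rough aid to reading the table, here is a small illustrative function that applies the two tests, compute first and then revenue, to decide which tier a developer falls into. The thresholds come from the article; the function and field names are ours, not statutory terms.

```python
# Sketch under stated assumptions: classify a developer using the two tests
# summarized in the table above. Names are illustrative, not legal terms.

from dataclasses import dataclass

REVENUE_THRESHOLD_USD = 500_000_000      # > $500 million annual revenue
COMPUTE_THRESHOLD_FLOPS = 1e26           # >= 10^26 training FLOPs

@dataclass
class Developer:
    annual_revenue_usd: float
    max_training_flops: float

def sb53_tier(dev: Developer) -> str:
    if dev.max_training_flops < COMPUTE_THRESHOLD_FLOPS:
        return "not covered"                  # below the frontier-model compute bar
    if dev.annual_revenue_usd > REVENUE_THRESHOLD_USD:
        return "large frontier developer"     # framework publication + reporting
    return "frontier developer"               # incident reporting only

print(sb53_tier(Developer(annual_revenue_usd=2e9, max_training_flops=3e26)))
# -> large frontier developer
```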
Mandatory Transparency Reports Before Deployment
Before deploying any new frontier model publicly or using it extensively internally, developers must publish a transparency report containing the following (a minimal template sketch follows the list):
- Release date, intended uses, supported languages, and conditions of use
- Summaries of catastrophic-risk assessments conducted
- Identity and role of any third-party evaluators involved
- Steps taken to fulfill the frontier AI framework obligations
- Justifications for why implemented mitigations are adequate
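SB 53 does not prescribe a file format for these reports, so the sketch below simply expresses the checklist above as a hypothetical data structure; every field name is illustrative rather than mandated.

```python
# Hypothetical transparency-report template -- a checklist in code form, not a
# format required by SB 53.

from dataclasses import dataclass, field

@dataclass
class TransparencyReport:
    model_name: str
    release_date: str                       # e.g. "2026-01-15"
    intended_uses: list
    supported_languages: list
    conditions_of_use: str
    risk_assessment_summaries: list         # one summary per catastrophic-risk domain
    third_party_evaluators: list            # identity and role of external evaluators
    framework_compliance_steps: list        # how the published framework was followed
    mitigation_justifications: list         # why the implemented mitigations are adequate
    redactions: list = field(default_factory=list)  # proprietary details withheld
```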
Companies must update their frameworks at least annually and publish changes with justifications within 30 days. This prevents companies from quietly weakening safety commitments as competition intensifies or capabilities increase.
Incident Reporting Timelines
Developers have 15 days to file reports with California’s Office of Emergency Services after detecting a critical safety incident. However, if an incident poses imminent risk of death or serious bodily injury, notification must happen immediately.
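As a minimal illustration of the timeline described above, the sketch below computes a filing deadline from the detection time. The function name is hypothetical, and the imminent-harm path simply returns the detection time itself to reflect the “immediately” requirement.

```python
# Illustrative deadline calculator for the reporting windows described above.

from datetime import datetime, timedelta

def reporting_deadline(detected_at: datetime, imminent_harm: bool) -> datetime:
    """Deadline for notifying California's Office of Emergency Services."""
    if imminent_harm:
        return detected_at                      # notify immediately
    return detected_at + timedelta(days=15)     # standard 15-day window

print(reporting_deadline(datetime(2026, 2, 1, 9, 0), imminent_harm=False))
# -> 2026-02-16 09:00:00
```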
SB 53 includes a federal deference mechanism: if developers already report incidents under equivalent federal standards, California’s Office of Emergency Services can accept those federal reports as satisfying state requirements. This provision aims to prevent redundant compliance burdens if federal AI legislation emerges.
How Other AI Companies Are Responding
Anthropic was the only major AI lab to publicly endorse SB 53 before it became law. The company’s vocal support contrasts with more cautious reactions from competitors building similarly powerful models.
OpenAI, Meta, and Google’s Stance
OpenAI stated it was “pleased to see that California has created a critical path toward harmonization with the federal government” and emphasized that federal coordination represents “the most effective approach to AI safety”. The company did not endorse SB 53 but expressed willingness to cooperate with both federal and state governments.
Meta spokesperson Christopher Sgro told media outlets that Meta “supports balanced AI regulation” and called SB 53 “a positive step in that direction”. Like OpenAI, Meta emphasized working with lawmakers to balance consumer protection and innovation.
Industry Context: The cautious responses reflect concerns that state-by-state regulation could create compliance fragmentation. Companies affected by SB 53 include OpenAI, Google DeepMind, Meta, Nvidia, and Anthropic.
Industry Concerns About Federal Harmonization
The AI industry broadly prefers unified federal standards over a patchwork of state laws. SB 53’s federal deference mechanism acknowledges this concern by allowing companies to satisfy California requirements through equivalent federal compliance.
However, no comprehensive federal AI safety legislation currently exists. This creates tension between California’s January 1, 2026 enforcement date and the industry’s preference to wait for federal action that may not materialize quickly.
What Comes Next: Federal AI Transparency Framework
Anthropic has long advocated for federal AI legislation that would create consistent nationwide standards. The company published a detailed proposal outlining what federal transparency requirements should include.
Anthropic’s Proposal for National Legislation
Anthropic’s federal framework proposal emphasizes five core principles:
- Public Secure Development Frameworks: Require covered developers to publish frameworks detailing how they assess and mitigate catastrophic risks, including CBRN (chemical, biological, radiological, nuclear) harms and misaligned model autonomy
- System Cards at Deployment: Mandate public documentation summarizing testing procedures, evaluation results, and implemented mitigations when models launch
- Whistleblower Protections: Make it an explicit federal violation for labs to misrepresent compliance or retaliate against employees raising safety concerns
- Flexible Transparency Standards: Establish minimum requirements that adapt as consensus best practices evolve, avoiding locked-in technical approaches that may become outdated
- Limited Application: Apply requirements only to established frontier developers building the most capable models, exempting startups and smaller companies
This proposal closely mirrors SB 53’s structure, suggesting Anthropic views California’s law as a viable federal template.
Timeline for Federal Action
No specific federal AI safety legislation has been scheduled for Congressional votes as of December 2025. Anthropic has stated it looks forward to “working with Congress and the administration to develop a national transparency framework that ensures safety while preserving America’s AI leadership”.
The practical effect is that SB 53 becomes the de facto standard starting January 1, 2026 for any AI company operating in California, which includes virtually every major frontier lab. If federal legislation doesn’t emerge within the next 12-18 months, California’s approach may influence other states to adopt similar requirements, effectively nationalizing the framework through state action.
How to Prepare if You’re Building Frontier AI Models
If your company develops AI models that meet SB 53’s computational thresholds, you should take immediate action before the January 1, 2026 deadline. Here’s a practical compliance roadmap:
Assess your coverage status
Calculate your annual revenue and the total training compute of your largest models. If you exceed $500M in revenue and train models at ≥10²⁶ FLOPs, you’re a “large frontier developer” subject to full requirements.
Document existing safety practices
Compile your current risk assessment methodologies, testing protocols, governance structures, and cybersecurity measures. SB 53 doesn’t mandate specific technical approaches but requires you to document and follow whatever framework you publish.
Align with recognized standards
Reference frameworks like the NIST AI Risk Management Framework or ISO/IEC 42001 in your compliance documentation. Anthropic’s FCF can serve as a structural template since it’s the first publicly available example meeting SB 53 requirements.
Establish incident response procedures
Create clear protocols for detecting, evaluating, and reporting critical safety incidents within the 15-day timeline (or immediately for imminent threats). Designate specific roles responsible for Office of Emergency Services notifications.
Implement whistleblower protections
Review employment policies to ensure employees can raise safety concerns without retaliation. SB 53 makes violations punishable by civil penalties up to $1 million per violation, with potential attorney’s fees awarded to successful whistleblower plaintiffs.
Prepare transparency reports
Draft templates for pre-deployment transparency reports covering release dates, use cases, languages, risk assessments, third-party evaluations, and mitigation justifications.
Track federal developments
Monitor potential federal AI legislation that could provide equivalent compliance pathways through SB 53’s federal deference mechanism. Subscribe to updates from the California Governor’s Office and relevant congressional committees.
SB 53 vs. Vetoed SB 1047
| Aspect | SB 53 (Enacted) | SB 1047 (Vetoed) |
|---|---|---|
| Focus | Transparency and disclosure | Liability and safety mandates |
| Third-Party Audits | Not required | Annual audits mandatory |
| Model Release | Allowed with disclosures | Could be prohibited if “unreasonable risk” |
| Technical Requirements | Flexible, developer-defined | More prescriptive standards |
| Whistleblower Protection | Yes, with civil penalties | Yes, with similar protections |
| Effective Date | January 1, 2026 | Vetoed September 2024 |
| Industry Support | Mixed; Anthropic endorsed | Widespread opposition |
Anthropic FCF vs. RSP
| Feature | Frontier Compliance Framework (FCF) | Responsible Scaling Policy (RSP) |
|---|---|---|
| Legal Status | Legally binding for SB 53 | Voluntary policy |
| Update Frequency | Minimum annually | As needed when practices evolve |
| Scope | SB 53 compliance + other regulations | Aspirational best practices |
| Flexibility | Must follow published framework | Can adapt beyond legal requirements |
| Public Access | Required by law | Voluntarily published |
| First Published | 2025 | 2023 |
Frequently Asked Questions (FAQs)
What is Anthropic’s Frontier Compliance Framework?
Anthropic’s Frontier Compliance Framework (FCF) is a public document mandated by California’s SB 53 that describes how the company evaluates frontier AI models for catastrophic risks. It covers seven risk domains: cyber offense, chemical weapons, biological threats, radiological hazards, nuclear weapons, AI sabotage, and loss of control scenarios. The FCF also explains Anthropic’s tiered evaluation system, model weight protection measures, and incident response protocols.
When does California SB 53 take effect?
California’s Transparency in Frontier AI Act (SB 53) takes effect on January 1, 2026. Governor Gavin Newsom signed the law on September 29, 2025, giving frontier AI developers approximately three months to prepare and publish their compliance frameworks. Companies must have frameworks publicly available on their websites by the effective date.
Which AI companies must comply with SB 53?
SB 53 applies to “large frontier developers” with annual revenue exceeding $500 million and training AI models using at least 10²⁶ floating-point operations (FLOPs). Companies required to comply include Anthropic, OpenAI, Google DeepMind, Meta, and Nvidia. Smaller AI labs and startups training large models face reduced requirements, with no framework publication mandate.
How does Anthropic’s FCF differ from its Responsible Scaling Policy?
Anthropic’s Responsible Scaling Policy (RSP) is a voluntary document outlining aspirational best practices that may exceed legal requirements, while the Frontier Compliance Framework (FCF) is the legally binding document for SB 53 compliance. The RSP reflects what Anthropic believes safety standards should be as AI evolves, whereas the FCF represents mandated regulatory minimums. Anthropic maintains both to distinguish voluntary commitments from legal obligations.
What are catastrophic risks in AI according to SB 53?
SB 53 defines catastrophic risks as scenarios where AI models could cause death or serious injury to more than 50 people, or over $1 billion in property damage in a single incident. Specific examples include assisting in creating or deploying chemical, biological, radiological, or nuclear (CBRN) weapons; conducting cyberattacks or serious crimes without meaningful human oversight; or evading developer control by circumventing safety mechanisms.
What happens to AI companies that don’t comply with SB 53?
The California Attorney General can impose civil penalties up to $1 million per violation of SB 53 requirements. Violations include failing to publish required frameworks, missing incident reporting deadlines, providing false compliance information, or retaliating against whistleblowers. The law also permits courts to award attorney’s fees to successful whistleblower plaintiffs who prove retaliation for reporting safety concerns.
Does SB 53 apply to AI models used only internally?
Yes, SB 53 explicitly covers both public deployments and extensive internal use of frontier models. The law recognizes that catastrophic risks can emerge from models used only by company employees, not just those released to the general public. Companies must assess whether internally used models might circumvent oversight mechanisms or enable employees to cause harm and include these assessments in their compliance frameworks.
Will federal AI legislation replace SB 53 requirements?
SB 53 includes a federal deference mechanism allowing California’s Office of Emergency Services to accept equivalent federal compliance as satisfying state requirements. However, no comprehensive federal AI safety legislation currently exists as of December 2025. If Congress passes AI transparency laws with similar standards to SB 53, companies may be able to file single reports meeting both federal and California obligations, but until then, SB 53 remains the binding standard for California-based operations.