Anthropic backs California’s first AI safety law requiring transparency

Anthropic has become the first major tech company to endorse California’s S.B. 53, a bill that would establish the first broad legal requirements for AI companies in the United States. The legislation would mandate transparency measures and safety protocols for large AI developers, transforming voluntary industry commitments into legally binding requirements that could reshape how AI companies operate nationwide.

What you should know: S.B. 53 would create mandatory transparency and safety requirements specifically targeting the most advanced AI companies.

  • The bill applies only to companies building cutting-edge models requiring massive computing power, with the strictest requirements reserved for those with annual revenues exceeding $500 million.
  • Companies would be required to publicly share safety guidelines, conduct “catastrophic risk” assessments, and establish emergency reporting systems for critical safety incidents.
  • The legislation strengthens whistleblower protections, creating pathways for employees to report severe risks that might otherwise go unreported.

The big picture: This represents a significant shift from voluntary industry commitments to mandatory legal requirements for AI safety and transparency.

  • Major AI companies including Anthropic, OpenAI, Google, and Meta have already made voluntary commitments to assess risks and implement safety measures.
  • Recent research shows AI models can help users execute cyberattacks and lower barriers to acquiring biological weapons, highlighting the need for formal oversight.
  • The bill largely codifies existing voluntary practices while adding enforcement mechanisms and public accountability.

Why this matters: California’s legislation could set a national precedent for AI regulation, given the state’s central role in AI development.

  • As home to dozens of the world’s leading AI companies, California’s regulatory approach will likely influence AI development nationally and globally.
  • The bill appears likely to pass, having received overwhelming support in both the California Assembly and Senate.
  • California’s Legislature must cast its final vote by Friday night.

What they’re saying: Anthropic praised the bill’s balanced approach to safety and competition.

  • “With SB 53, developers can compete while ensuring they remain transparent about AI capabilities that pose risks to public safety,” Anthropic said in a statement.
  • Sen. Scott Wiener, the bill’s sponsor, told NBC News: “Anthropic is a leader on AI safety, and we’re really grateful for the company’s support.”
  • Dan Hendrycks, executive director of the Center for AI Safety, said: “Frontier AI companies have made many voluntary commitments for safety, often without following through. This legislation takes a small but important first step toward making AI safer.”

Industry pushback: Trade groups and some companies oppose the legislation, arguing it could harm U.S. competitiveness.

  • The Consumer Technology Association warned that “California SB 53 and similar bills will weaken California and U.S. leadership in AI by driving investment and jobs to states or countries with less burdensome and conflicting frameworks.”
  • OpenAI’s director of Global Affairs, Chris Lehane, reaffirmed the company’s preference for federal regulation: “America leads best with clear, nationwide rules, not a patchwork of state or local regulations.”

Learning from past failures: S.B. 53 incorporates lessons from last year’s vetoed S.B. 1047, which faced broader industry opposition.

  • The previous bill required annual third-party audits and barred developers from releasing models with “unreasonable risk” of causing critical harm.
  • Gov. Gavin Newsom vetoed S.B. 1047, saying it would “slow the pace of innovation,” though proponents argued industry lobbying influenced the decision.
  • Following the veto, Newsom formed a working group that provided recommendations incorporated into S.B. 53, emphasizing transparency over liability.

Federal vs. state regulation debate: The legislation highlights ongoing tensions between state and federal approaches to AI governance.

  • Anthropic acknowledged this tension but said federal inaction necessitates state-level action: “While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won’t wait for consensus in Washington.”
  • Sen. Wiener said: “Ideally we would have comprehensive, strong pro-safety, pro-innovation federal law in this space. But that has not happened, so California has a responsibility to act.”
  • A recent federal spending package nearly included an amendment prohibiting states from passing AI-related legislation for 10 years, but the provision was removed in a late-night reversal.
