AI legislation and concern are advancing at the state level as DC leans right in

State-level AI regulation is accelerating rapidly in the absence of federal action, with nearly 700 bills introduced in 2024 alone. This legislative surge reflects growing concern about AI risks and consumer protection, with measures ranging from comprehensive regulatory frameworks to targeted bills addressing specific harms like deepfakes. However, a proposed 10-year national moratorium out of DC threatens to halt this state-level innovation in AI governance, potentially creating regulatory gaps during a critical period of AI development and deployment.

The big picture: States are filling the federal regulatory void with diverse approaches to AI oversight, but a proposed moratorium in the budget reconciliation bill could derail these efforts for a decade.

  • Colorado's AI Act stands out as one of the most comprehensive state-level approaches, establishing requirements for AI system providers and deployers.
  • The sheer volume of state bills—nearly 700 in 2024 with more expected in 2025—demonstrates the urgency lawmakers feel about addressing AI risks.
  • This state-level experimentation is occurring against the backdrop of international developments like the European Union's AI Act, which provides a potential model for risk-based regulation.

Key details: State legislatures are focusing on specific AI use cases that present immediate risks to consumers and democratic processes.

  • Deepfake legislation has been particularly prominent, with states working to prevent the spread of deceptive AI-generated content.
  • Election integrity concerns have driven bills aimed at regulating AI’s role in political campaigns and voting processes.
  • Public sector AI use has also received significant legislative attention, with states establishing guidelines for government deployment of automated systems.

Behind the numbers: The proposed 10-year national moratorium would create significant regulatory gaps at a critical moment in AI development.

  • Without state-level protections, consumers could face increased exposure to AI-related harms like bias, discrimination, and privacy violations.
  • Businesses would navigate uncertain compliance requirements across different jurisdictions during the moratorium period.
  • State attorneys general, who have been active in enforcing existing consumer protection laws against AI harms, would see their authority curtailed.

Why this matters: The tension between state innovation and federal preemption represents a crucial governance question that will shape the future of AI in American society.

  • Effective AI regulation requires balancing innovation with protection against potential harms across diverse contexts and communities.
  • The outcome of this regulatory debate will determine who has authority to address emerging AI risks and how quickly safeguards can be implemented.
  • Without clear standards at some level of government, both consumers and businesses face uncertainty about rights, responsibilities, and remedies regarding AI systems.

States are legislating AI, but a moratorium could impact their progress
