California Senate Bill SB 1047, known as the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” has been in the news a lot lately. The bill was intended to ensure that frontier AI technologies do not pose significant risks to society, focusing on preventing unchecked AI proliferation.
Key Provisions:
- Shutdown Control: Developers were required to implement a mechanism to promptly deactivate their AI models, including derivative models, if needed.
- AI Safety Standards: The bill required developers to comply with detailed safety and risk management protocols before releasing AI systems into public use.
- Audit and Reporting: AI companies were required to disclose detailed information about their AI models’ development and deployment, as well as potential risks.
Pros:
- Preemptive Regulation: California took the lead in addressing potential AI risks, moving ahead of federal regulation. The bill was especially praised for prioritizing human safety in the face of increasingly powerful AI technologies.
- Global Influence: Had it passed, the bill could have influenced AI regulations worldwide, potentially setting standards for future AI policies, especially in Europe and North America.
Cons:
- Innovation Constraints: The bill aimed to regulate advanced AI systems, specifically models trained at a cost exceeding $100 million or using more than 10^26 floating-point operations (FLOPs) of compute (discussed below). Critics argued that it would stifle innovation by placing undue regulatory burdens on AI developers. Many feared it would particularly harm small AI startups and open-source projects, which might not have the resources to comply with the bill’s requirements.
- Impact on Open-Source AI: Concerns were raised that the law would disproportionately affect the open-source AI community, as it would be challenging to enforce a “shutdown” feature once models are publicly available and have been modified by third parties.
- Arbitrary Selection Thresholds: SB 1047 targeted AI models exceeding $100 million in training costs or 10^26 FLOPs of training compute. Critics called these benchmarks flawed because neither maps cleanly onto societal risk. Training costs vary significantly with hardware efficiency, energy prices, and developer expertise, making $100 million an arbitrary cutoff that may exclude impactful models while unjustifiably regulating others. Similarly, a compute count does not necessarily reflect a model’s societal risk; some models trained with far fewer FLOPs may still pose significant ethical concerns. This lack of clarity would also do little to stop harmful actors from engineering around the bill’s requirements, as the rough estimate below illustrates.
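To see how fuzzy the compute threshold is in practice, here is a minimal back-of-the-envelope sketch in Python. It uses the widely cited approximation that total training compute is roughly 6 × N × D FLOPs, where N is the parameter count and D is the number of training tokens. The model names, parameter counts, and token counts are purely illustrative assumptions, not figures from any real system.

```python
# Back-of-the-envelope check against SB 1047's 10^26 FLOPs threshold.
# Uses the common approximation: training compute C ~= 6 * N * D,
# where N = parameter count and D = number of training tokens.
# All model figures below are illustrative assumptions, not real disclosures.

SB1047_FLOPS_THRESHOLD = 1e26

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute via the 6 * N * D rule of thumb."""
    return 6 * params * tokens

# Hypothetical models: (name, parameters, training tokens)
models = [
    ("small-open-model", 7e9, 2e12),    # 7B params, 2T tokens
    ("large-lab-model", 1e12, 20e12),   # 1T params, 20T tokens
]

for name, n_params, n_tokens in models:
    compute = training_flops(n_params, n_tokens)
    covered = compute >= SB1047_FLOPS_THRESHOLD
    print(f"{name}: ~{compute:.2e} FLOPs -> covered by threshold? {covered}")
```

Because the 6 × N × D rule ignores architecture, numerical precision, hardware utilization, and repeated data passes, two labs could report quite different FLOP totals for comparable models, which is precisely the ambiguity critics pointed to.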
Influence of SB 1047 on National and Global AI Policy
California’s SB 1047 has significant ramifications beyond the state. Had it passed, it would have set a precedent for AI regulation in the U.S. and globally. While the federal government has been slow to enact AI-specific laws, the bill’s regulatory framework could still influence similar efforts in Europe and Canada, which are actively exploring AI governance. For example, the European Union’s AI Act aims to create a comprehensive regulatory framework for AI systems across member states. SB 1047’s focus on safety and shutdown mechanisms could inspire similar provisions in regulations elsewhere.
At the national level, AI regulation has been slow, though President Biden’s executive order on safe, secure, and trustworthy AI, issued in October 2023, aims to create federal standards for AI safety. California’s SB 1047 could still become a blueprint for federal AI policy, especially regarding high-risk AI models, and the debate is likely to intensify after next month’s elections.
Supporters and Opposition:
Senator Scott Wiener spearheaded the bill, positioning it as an essential step to ensure that AI development does not outpace safety considerations. The bill garnered support from various AI ethics advocates and organizations that fear unregulated AI could pose existential threats. Elon Musk, a longtime advocate of AI regulation, supported the bill, stating that AI safety should be prioritized much as it is for other technologies that pose public risks.
However, major tech companies like Google, Microsoft, and Meta, along with prominent AI scientist Yann LeCun, opposed the bill, arguing that it would dampen innovation, increase costs, and hamper competitiveness in the global AI landscape, with smaller companies and the open-source community hit hardest. Critics claimed that imposing stringent controls might slow AI advancements and push innovation out of California.
Current Status and Next Steps:
Governor Gavin Newsom vetoed the bill in late September 2024, explaining that while the bill addressed real concerns, its stringent provisions might inadvertently hurt California’s position as a global AI innovation hub. He expressed interest in finding a balanced approach that would both safeguard innovation and ensure ethical AI practices.
SB 1047’s journey illustrates the tension between AI innovation and safety. Whether through amendments or alternative bills, California’s efforts will likely shape the broader landscape of AI regulation.