A major piece of Californian regulation aimed at preventing large Artificial Intelligence systems from going rogue has been blocked, leaving the Silicon Valley industry divided.
Governor Gavin Newsom’s recent veto of SB 1047, even as he signed 17 new AI regulation bills into law, raises the question: where does AI regulation in California currently stand?
SB 1047, or the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, aimed to establish stringent safety protocols for developers of advanced AI models to prevent potential catastrophic harms. With California being a global hub for AI innovation, the veto carries implications for the tech industry, policymakers, and the public.
A Crossroads in AI Regulation
Authored by Senator Scott Wiener, SB 1047 sought to create a regulatory framework for developers working on “frontier models”—advanced AI systems exceeding the capabilities of current models. The bill mandated comprehensive safety measures, including pre-deployment safety testing for AI models that cost over $100 million to develop or that rely on significant computing power. It also required the implementation of a “kill switch” to shut down models in the event of dangerous or unintended consequences.
Senator Wiener expressed disappointment over the veto. “This veto is a missed opportunity for California to once again lead on innovative tech regulation—just as we did around data privacy and net neutrality—and we are all less safe as a result,” he stated on social media platform X, formerly Twitter.
He warned that without binding regulations, companies developing powerful AI models are essentially free to operate without oversight, creating a “troubling reality” for both the public and policymakers.
“We cannot afford to wait for a major catastrophe to occur before taking action,” Wiener added. He underscored the potential dangers of leaving AI safety measures in the hands of private companies, arguing that voluntary industry commitments are “not enforceable and rarely work out well for the public.”
Governor Newsom’s Justification for the Veto
In his veto statement dated September 29, 2024, Governor Newsom expressed concerns that SB 1047 was not the optimal approach to regulating AI. He argued that the bill would impose overly stringent regulations even on low-risk AI applications, potentially hindering innovation.
“By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology,” Newsom said. He emphasized the need for adaptability in regulation, given the rapid pace of AI advancement, and advocated for policies informed by empirical evidence and scientific analysis.
“I do not agree… that to keep the public safe, we must settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities,” he wrote. “Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself.”
Announcing New Initiatives for AI Safety
On the same day as the veto, Governor Newsom announced a series of initiatives aimed at advancing safe and responsible AI while protecting Californians. He enlisted leading experts to help shape California’s approach to AI regulation, including:
- Dr. Fei-Fei Li, co-director of the Stanford Institute for Human-Centered Artificial Intelligence, often referred to as the “godmother of AI.”
- Tino Cuéllar, president of the Carnegie Endowment for International Peace and a member of the National Academy of Sciences Committee on Social and Ethical Implications of Computing Research.
- Jennifer Tour Chayes, Dean of the College of Computing, Data Science, and Society at UC Berkeley.
“We have a responsibility to protect Californians from potentially catastrophic risks of GenAI deployment. We will thoughtfully—and swiftly—work toward a solution that is adaptable to this fast-moving technology and harnesses its potential to advance the public good,” Newsom stated.
The Governor directed state agencies to expand their assessment of risks from potential catastrophic AI-related events, focusing on critical infrastructure sectors like energy, water, and communications. He also signed 17 other bills related to AI regulation in recent weeks, making California’s legislative package on AI one of the most comprehensive in the country.
Reactions to the Veto
Support from Tech Industry Leaders
The veto was welcomed by some in the tech industry who were concerned about the potential stifling of innovation. Garry Tan, CEO of startup accelerator Y Combinator, took to X to express his gratitude: “Grateful to @GavinNewsom for vetoing SB 1047 & supporting innovation in California. Huge thanks to the @ycombinator community for rallying since June to advocate for responsible AI development without stifling startups. Together, we’ll keep pushing tech forward!”
“Since June, we’ve worked hard to ensure founders’ voices were heard. We organized a town hall-style event in SF, bringing pro-open-source leaders like @linakhanFTC (Lina Khan, chair of the Federal Trade Commission) together w/ @scott_wiener. 100s of AI founders from his district joined us to discuss keeping AI open & competitive,” added Tan.
Yann LeCun, professor at NYU and chief AI scientist at Meta, also welcomed the veto: “Thank you Governor @GavinNewsom for vetoing SB-1047. The open source AI community is grateful for your sensible decision.”
Y Combinator and other startup advocates argued that SB 1047 could impose burdensome regulations on emerging companies, potentially hindering California’s status as a global leader in technology and innovation.
Hollywood and AI Safety Advocates Disappointed
By contrast, a coalition of more than 125 Hollywood actors, directors, and producers had previously signed an open letter urging Governor Newsom to sign the bill. Actor Joseph Gordon-Levitt criticized the veto, stating, “AI can and will be used for so much good, but just like we’ve seen with social media, there could also be seriously damaging side effects if governments don’t lay down some rules.”
Mark Ruffalo echoed these sentiments, comparing Newsom’s veto to failed attempts to regulate industries like fossil fuels and chemicals. “This bill was unique in addressing catastrophic risks to all of us from AI models, and it sought to regulate the entire industry,” Ruffalo said.
Employees From Anthropic, Meta and More Advocated for the Bill
Adding significant weight to the call for regulation, over 100 employees from leading AI labs, including OpenAI, Google DeepMind, Anthropic, Meta, and xAI, had signed an open letter urging Governor Newsom to sign SB 1047. The signatories included prominent figures like Geoffrey Hinton, often referred to as the “Godfather of AI,” and Scott Aaronson, a renowned computer science professor.
“We believe that the most powerful AI models may soon pose severe risks, such as expanded access to biological weapons and cyberattacks on critical infrastructure,” the letter stated. “It is feasible and appropriate for frontier AI companies to test whether the most powerful AI models can cause severe harms, and for these companies to implement reasonable safeguards against such risks.”
The employees emphasized that despite inherent uncertainties in regulating advanced technology, SB 1047 represented a meaningful step forward. “We recommend that you sign SB 1047 into law,” they concluded.
Whistleblowers from OpenAI Expressed Concerns
William Saunders and Daniel Kokotajlo, former employees at ChatGPT developer OpenAI, released a separate letter in August 2024 highlighting internal concerns within one of California’s leading AI companies. They warned that companies like OpenAI are racing to build Artificial General Intelligence (AGI) without adequate safety precautions.
“We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing,” they wrote. “But we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly develop its AI systems.”
They detailed incidents that eroded their confidence, including premature deployment of AI models, security breaches, and suppression of internal dissent. They argued that SB 1047 would create necessary public involvement in decisions around high-risk AI systems, protect whistleblowers, and hold companies accountable.
“OpenAI’s complaints about SB 1047 are not constructive and don’t seem in good faith,” they asserted. “We hope that the California Legislature and Governor Newsom will do the right thing and pass SB 1047 into law.”
Tech Industry Associations Were in Support of a Veto
The AI Alliance, whose members include major tech companies such as IBM, Meta, Intel, and Oracle, was in favor of a veto. The group argued that SB 1047 would slow innovation, thwart advancements in safety and security, and undermine California’s economic growth.
“The bill’s technically infeasible requirements will chill innovation in the field of AI and lower access to the field’s cutting edge, thereby directly contradicting the bill’s stated purpose,” the Alliance stated.
They also raised concerns about the bill’s impact on open-source AI development, arguing that it could penalize open-source developers by imposing obligations that are impractical for openly shared models.
California’s Path Forward in AI Regulation
Governor Newsom’s administration appears committed to developing other AI regulations through collaboration with experts and stakeholders. The involvement of AI experts like Dr. Fei-Fei Li signals a concerted effort to create a science-based, adaptable regulatory framework.
“Safe and responsible AI is essential for California’s vibrant innovation ecosystem. To effectively govern this powerful technology, we need to depend upon scientific evidence to determine how to best foster innovation and mitigate risk,” Dr. Li stated.
Jennifer Tour Chayes of UC Berkeley emphasized the importance of nurturing a robust innovation economy while fostering academic research. “This is how we’ll ensure AI benefits the most people, in the most ways, while protecting from bad actors and grave harms,” she said.
In addition to the new initiatives, Governor Newsom signed 17 other bills related to AI regulation, addressing issues such as deepfakes, AI watermarking, protecting digital likenesses, and combating AI-generated misinformation.
Experts weighed in on why SB 1047 didn’t make it through. Amy Matsuo, regulatory insights leader at KPMG, posted on LinkedIn: “I believe the veto of SB 1047 is indicative of the broader challenges with AI regulation and the complexity of balancing regulatory guardrails without unduly thwarting corporate efficiency and innovation.”
Meanwhile, VC investor and innovation strategist William Kilmer took to LinkedIn to call SB 1047 “a minimalist approach to AI regulation,” stating that “its limited scope is likely why Governor Gavin Newsom vetoed the bill, citing concerns that it didn’t go far enough.”
“The bill only covered AI models costing $100 million or more to build and faced opposition from tech giants like Meta, OpenAI, Alphabet Inc., and Microsoft. In comparison, it falls short of more comprehensive measures like the EU AI Act or China’s AI regulations,” he added.
Newsweek reached out to the office of Senator Scott Wiener for comment. The office of Governor Gavin Newsom declined to comment further than directing us to the official veto message released on September 29.
The post What Gavin Newsom’s AI Safety Bill Veto Means for California appeared first on Newsweek.