Newsom Vetoes Artificial Intelligence ‘Safety’ Bill

California governor says the bill was “well-intentioned” but too restrictive

California Gov. Gavin Newsom (Photo by Justin Sullivan/Getty Images)

California Governor Gavin Newsom vetoed an artificial intelligence “safety” bill on Sunday that aimed to curtail the rapidly growing AI industry.

The bill, proposed by Democratic state Sen. Scott Wiener, mentioned “safety” 42 times and outlined several guardrails it looked to put in place.

Those included “implementing the capability to promptly enact a full shutdown” of any AI model, as well as requiring developers to “implement a written and separate safety and security protocol” that would be publicly available.

SB 1047 would also require that AI developers advance “the development and deployment of artificial intelligence that is safe, ethical, equitable, and sustainable.” This would be accomplished by, among other measures, “expanding access to computational resources” — although the bill doesn’t say to whom in particular.

Newsom had already hinted he would veto SB 1047 before Sunday. The Democratic governor, while speaking at Salesforce’s Dreamforce conference earlier this month, said California needs to be at the forefront of regulating AI. But SB 1047, he said, wasn’t the right way to go about it, because it would “have a chilling effect on the industry.”

Following his veto on Sunday, Newsom said the bill was “well-intentioned” but otherwise insufficient.

“SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom said in a statement. “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”

Newsom is not alone: OpenAI and Meta’s chief AI scientist Yann LeCun also recently came out against the bill.

The bill said that if AI is not “properly subject to human controls,” there could be significant public safety risks. Those would include rogue AI models “enabling the creation and the proliferation of weapons of mass destruction,” as well as carrying out cyberattacks.

On Sunday, Wiener said the veto was “a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and the welfare of the public and the future of the planet.”
