On Saturday, California’s state senate gave its final approval to an AI safety bill, SB 53, which focuses on requiring large AI companies to disclose details of their safety practices.
According to State Senator Scott Wiener, “SB 53 requires large AI labs to be transparent about their safety protocols, creates whistleblower protections for [employees] at AI labs [and] creates a public cloud to expand compute access (CalCompute).”
The bill now sits on the desk of California Governor Gavin Newsom, who has full discretion to sign or veto it. Newsom has not said where he stands on SB 53, but last year he vetoed safety legislation that Wiener authored while signing narrower legislation focused on issues such as deepfakes.
At the very least, Newsom acknowledged that the public needs protection from the real dangers such technology can pose. His objection to Wiener’s previous bill was that it applied stringent requirements to any large model, even models that are not deployed in high-risk settings where “critical decisions are made or sensitive data is used.”
According to Wiener, SB 53 was shaped by recommendations from the AI policy expert panel that Newsom convened after that veto.
Politico also reports that SB 53 was amended so that companies developing “frontier” AI models that bring in less than $500 million a year will only have to disclose basic safety information about their systems, while companies above that threshold will need to provide more extensive reports.
Many Silicon Valley companies, VC firms, and lobbying groups have opposed the bill. For example, a letter OpenAI sent to Newsom did not mention SB 53 by name, but it argued that, to avoid duplication and inconsistencies, companies should be treated as compliant with state safety regulations as long as they meet federal or European requirements.
Andreessen Horowitz’s head of AI policy and chief legal officer said that “many of today’s state AI bills…like the ones being developed in California and New York…risk” going too far and crossing the constitutional limits on how far a state can regulate commerce between states.
a16z’s co-founders have said that tech regulation was one of the reasons they endorsed Donald Trump for a second term. The Trump administration and its allies have since called for a 10-year moratorium on state AI regulation. Anthropic, however, has come out in support of SB 53.
In his post, Anthropic co-founder Jack Clark said, “We have long said we would prefer a federal standard, but in the absence of that, this creates a solid blueprint for AI governance that cannot be ignored.”