My impression is that SB 53 is almost good, but there is one bit that makes me worry it could end up being very harmful. Specifically this part:
Every year starting in 2030, a large AI developer must get an independent auditor to verify that (1) they are following their own safety policy, and (2) the safety policy is clear enough that it’s possible to determine whether the developer is following it (§ 22757.14).
By 2030, we may already be dead, or we may have turned over control to AI to an extent that puts us on an inevitable track toward death, or perhaps death is not yet inevitable but AI has nonetheless changed the political landscape to such an extent that this rule is meaningless.
My concern is that a law requiring audits by 2030 may prevent us from getting a law that requires audits sooner than that. I would much rather require audits by 2026, or 2027 at the latest.
How likely is it that this law would prevent us from getting safety regulations that come into effect sooner? That seems like an important question to answer to determine whether SB 53 is net positive.
Two more minor concerns:

California will explore building a public AI compute cluster to support socially beneficial AI research and innovation (§ 11546.8).
I expect this part is weakly net harmful because it shortens timelines a bit by increasing demand for AI. But it’s not a big enough deal for me to care too much about.
My second concern is that auditors will end up being toothless because companies will look for auditors who will give them a passing grade even though they don’t deserve to pass. I don’t know how to fix this*, and I still think mandating audits is better than not mandating audits.
*Well I have some unrealistic ideas about how to fix it, like “any group conducting audits must be approved by the Machine Intelligence Research Institute.”
Thanks for your comments, Michael.
The section of SB 53 that talks about external auditing was added to the bill by the Assembly Committee on Privacy and Consumer Protection. They wrote that the purpose of the four-year grace period before auditing becomes mandatory is “to give time for the nascent industry of AI auditors to grow.” Now, I don’t think the auditing industry needs that much time. Deloitte has already helped Anthropic to audit Claude 4, and I suspect the other Big Four firms will get involved soon. They can pull in AI experts from RAND, METR, or AISI if they need to.
It’s worth noting that even if the relevant parts of SB 53 pass unamended, some other state or the federal government could still pass an external auditing requirement that kicks in before 2030. I don’t see an obvious reason why passing SB 53 makes it less likely that such a law passes in a jurisdiction other than CA.
The solution to the problem of AI developers choosing lax auditors is § 22757.16(b). The bill says that if an auditor is negligent or “knowingly include[s] a material misrepresentation or omit[s] a material fact” in their report to the AG, they’re civilly liable for up to $10k in fines. Now, I think that penalty figure is probably too low, but if you raise it enough, it will solve the incentive problem. Auditors won’t go easy on AI developers because they know they can be fined if they do.
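To put toy numbers on why the current cap looks too low (every figure here is hypothetical, not taken from the bill or from any real engagement):

```python
# Toy back-of-the-envelope: compare the fee an auditor might earn from a large
# AI developer with the expected fine for signing off on a lax report.
# All three numbers below are hypothetical, chosen only for illustration.
audit_fee = 500_000      # hypothetical annual engagement fee
p_penalized = 0.5        # hypothetical chance a lax report actually draws the fine
fine_cap = 10_000        # the per-violation cap discussed above

expected_fine = p_penalized * fine_cap
print(f"expected fine ${expected_fine:,.0f} vs. fee at stake ${audit_fee:,.0f}")
# With these made-up numbers the expected fine is about 1% of the fee, so it
# barely moves the auditor's calculus; a much larger cap is what changes that.
```

The real numbers could of course look very different; the point is just that the fine has to be comparable to the business at stake before it bites.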
CalCompute’s effect might indeed be somewhat accelerationist. FWIW, all that SB 53 does is appoint a board to explore setting up CalCompute. The bill does not appropriate funds for a new cluster. Given how many hurdles CalCompute would still have to clear even if SB 53 passed, I don’t think it should drive our net assessment of whether SB 53 is good or bad.
I don’t see an obvious reason why passing SB 53 makes it less likely that such a law passes in a jurisdiction other than CA.

I was thinking policy-makers might see that there’s already an auditing requirement and decide not to impose another one because it doesn’t seem important anymore. (Even though on my view it would still be important to get a requirement that comes into effect sooner.) I don’t know whether policy-makers are likely to think that way; it just seems like a possibility that’s worthy of concern.