A Major Flaw in SB1047 re APTs and Sophisticated Threat Actors
There are many flaws in the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB1047) that is headed to the California Governor’s office for his signature. These flaws will no doubt hurt AI companies, through lost productivity and unnecessary expense, while handing a competitive edge to Chinese AI companies that operate without such onerous controls.
However, the single biggest flaw is the one having to do with “advanced persistent threats or other sophisticated actors.”
Section 22603.
(a) Before a developer initially trains a covered model, the developer shall do all of the following:
(1) Implement administrative, technical, and physical cybersecurity protections to prevent unauthorized access to, misuse of, or unsafe post-training modifications of, the covered model and all covered model derivatives controlled by the developer that are appropriate in light of the risks associated with the covered model, including from advanced persistent threats or other sophisticated actors.
Failure to comply with the above, or with any of the bill’s many other requirements (and there are far too many), would be an “unlawful act” subject to monetary penalties and other remedies, quoted below; a rough worked example of the penalty math follows the quoted provisions.
(1) A civil penalty for a violation that occurs on or after January 1, 2026, in an amount not exceeding 10 percent of the cost of the quantity of computing power used to train the covered model to be calculated using average market prices of cloud compute at the time of training for a first violation and in an amount not exceeding 30 percent of that value for any subsequent violation.
(2) (A) Injunctive or declaratory relief, including, but not limited to, orders to modify, implement a full shutdown, or delete the covered model and any covered model derivatives controlled by the developer.
(B) The court may only order relief under this paragraph for a covered model that has caused death or bodily harm to another human, harm to property, theft or misappropriation of property, or constitutes an imminent risk or threat to public safety.
(3) (A) Monetary damages.
(B) Punitive damages pursuant to subdivision (a) of Section 3294 of the Civil Code.
(4) Attorney’s fees and costs.
(5) Any other relief that the court deems appropriate.
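To put the penalty provision in perspective, here is a back-of-the-envelope sketch of the math in paragraph (1). The GPU count, training duration, and hourly cloud rate below are made-up assumptions for illustration only; they are not figures from the bill or from any real training run.

```python
# Hypothetical illustration of SB1047's civil penalty cap.
# All inputs below are assumptions for illustration only, not real figures.

gpu_count = 10_000                # assumed number of accelerators used for training
training_hours = 90 * 24          # assumed 90-day training run
cloud_price_per_gpu_hour = 4.00   # assumed average market cloud price (USD)

# "Cost of the quantity of computing power used to train the covered model,
#  calculated using average market prices of cloud compute at the time of training"
training_compute_cost = gpu_count * training_hours * cloud_price_per_gpu_hour

first_violation_cap = 0.10 * training_compute_cost       # not exceeding 10 percent
subsequent_violation_cap = 0.30 * training_compute_cost  # not exceeding 30 percent

print(f"Assumed training compute cost: ${training_compute_cost:,.0f}")
print(f"First-violation penalty cap:   ${first_violation_cap:,.0f}")
print(f"Subsequent-violation cap:      ${subsequent_violation_cap:,.0f}")
```

With those made-up inputs, the compute used to train the model is worth $86.4 million at cloud prices, so a first violation could draw a penalty of up to $8.64 million and each subsequent violation up to $25.9 million. The cap scales with the size of the training run, and, under the provision quoted above, a single successful intrusion by a “sophisticated actor” could be what triggers it.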
What’s The Problem?
In spite of what the cybersecurity industry wants you to believe, there is no way to keep a patient and dedicated adversary out of a network, and there are two reasons why:
software
human beings
Both can be exploited, and for the most secure systems, a successful breach almost always involves a combination of the two.
The Russia-Ukraine war has shown time and again that even the most heavily fortified networks can be cracked with nothing more than patience and ingenuity, and maybe a little money.
And it’s not just Russia.
It can and does happen to the best-protected agencies, financial institutions, and defense organizations in the world. It’s the price we pay for all of the benefits that software and cloud computing have brought us. We are more productive, and more vulnerable, than ever.
This bill puts the onus for keeping sophisticated bad actors out of an AI computing cluster on the AI company, without regard for the myriad vendors and suppliers that make up the AI company’s supply chain.
For example, starting in 2028 the bill requires that the AI company hire a third-party auditor to perform an annual audit of compliance with all of the provisions of SB1047. Once hired, the auditor becomes part of the company’s supply chain.
The auditing firm would be a perfect candidate for an adversary to infiltrate as the first stage of an attack against the AI company. Who at the AI company isn’t going to open an attachment in an email sent by its own auditor? One click and you’ve been compromised, and now you’re subject to heavy penalties.
Even worse, while the AI company will be held responsible, the cybersecurity company whose product didn’t stop the intruders will continue to evade any responsibility, just as it always has, thanks to the EULA that the customer signs.
For example, this section comes from CrowdStrike’s Terms and Conditions:
“NEITHER PARTY SHALL BE LIABLE TO THE OTHER PARTY IN CONNECTION WITH THIS AGREEMENT OR THE SUBJECT MATTER HEREOF (UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STATUTE, TORT OR OTHERWISE) FOR ANY LOST PROFITS, REVENUE, OR SAVINGS, LOST BUSINESS OPPORTUNITIES, LOST DATA, OR SPECIAL, INCIDENTAL, CONSEQUENTIAL, OR PUNITIVE DAMAGES.”
And this one is from Microsoft’s Defender product:
“DISCLAIMER OF WARRANTY. THE SOFTWARE IS LICENSED “AS IS.” YOU BEAR THE RISK OF USING IT.”
Summary
This approach is “bass-ackwards.”
The first step in regulating AI is regulating software vendors. This is 40 years overdue and it must happen at the federal level.
Once that’s in place, an AI bill need only address common abuse issues as well as low-probability, high-risk events.