I think you are misunderstanding the bill. The key component is this phrase: “for a violation of this chapter”
I.e., this section is about what kinds of damages can be recovered if someone violates any of the procedural requirements outlined in the bill.
The bill basically has two components:
Developers have a responsibility to avoid critical harms (basically incidents that cause more than $500M in damages) and need to (at least) follow these rules to avoid them (e.g. they have to report large training runs)
If you don’t follow these rules, the attorney general can sue you into compliance (which is what the section you quoted above is about). I.e. if you fake your safety testing results, you can be sued for that.
(I am reasonably confident this is true and have read the bill 3-4 times, but I might be getting something wrong. The bill has been changing a lot.)
Thanks for your reply! I’m a bit confused—I think my understanding of the bill matches yours. The Vox article states “Otherwise, they would be liable if their AI system leads to a ‘mass casualty event’ or more than $500 million in damages in a single incident or set of closely linked incidents.” (See also eg here and here). But my reading of the bill is that there is no mass casualty/$500 million threshold for liability like Vox seems to be claiming here.
No, there is; that’s the definition of critical harm I mention above:
(g)(1) “Critical harm” means any of the following harms caused or materially enabled by a covered model or covered model derivative:
(A) The creation or use of a chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties.
(B) Mass casualties or at least five hundred million dollars ($500,000,000) of damage resulting from cyberattacks on critical infrastructure by a model conducting, or providing precise instructions for conducting, a cyberattack or series of cyberattacks on critical infrastructure.
(C) Mass casualties or at least five hundred million dollars ($500,000,000) of damage resulting from an artificial intelligence model engaging in conduct that does both of the following:
(i) Acts with limited human oversight, intervention, or supervision.
(ii) Results in death, great bodily injury, property damage, or property loss, and would, if committed by a human, constitute a crime specified in the Penal Code that requires intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime.
(D) Other grave harms to public safety and security that are of comparable severity to the harms described in subparagraphs (A) to (C), inclusive.
I still don’t think you have posted anything from the bill which clearly shows that you only get sued if A) [you fail to follow precautions and cause critical harms], but not if B) [you fail to follow precautions the bill says are designed to prevent critical harms, and some loss of life occurs]. In both cases you could reasonably characterise it as “you fail to follow precautions the bill says are designed to prevent critical harms” and hence “violate” the “chapter”.
I mean, what you are saying is literally what I said. There are two ways the bill says the Attorney General can sue you. One, if you developed a covered model that caused more than $500M in harm; two, if you violated any of the prescribed transparency/accountability mechanisms in the bill.
Of course you need to have some penalty if you don’t follow the transparency/accountability requirements of the bill; how else would you expect people to do any of the things the bill requires of them?
To clarify, I agree that the ways you can be liable mostly fall into the two categories you delineate, but I think that your characterization of the categories might be incorrect.
You say that a developer would be liable:
if you developed a covered model that caused more than $500M harm
if you violated any of the prescribed transparency/accountability mechanisms in the bill
But I think a better characterization would be that you can be liable:
if you developed a covered model that caused more than $500M harm → if you fail to take reasonable care to prevent critical harms
if you violated any of the prescribed transparency/accountability mechanisms in the bill
It’s possible “to fail to take reasonable care to prevent critical harms” even if you do not cause critical harms. The bill doesn’t specify any new category of liability specifically for developers who have developed models that cause critical harm.
To use Casado’s example, if a self-driving car was involved in an accident that resulted in a person’s death, and if that self-driving car company did not “take reasonable care to prevent critical harms” because it had a safety and security protocol much worse than that of other companies, it seems plausible that the company could be fined 10% of its training compute costs/have to pay other damages. (I don’t know if self-driving cars actually would be affected by this bill.)
I think the best reason this might be wrong is that courts might not be willing to entertain this argument or that in tort law “failing to take reasonable care to avoid something” requires that you “fail to avoid that thing”—but I don’t have enough legal background/knowledge to know.
I think that’s inaccurate (though I will admit the bill text here is confusing).
“Critical harm” is defined as doing more than $500M of damage, so at the very least you have to be negligent specifically on the issue of whether your systems can cause $500M of harm.
But I think more concretely the conditions under which the AG can sue for damages if no critical harm has yet occurred are pretty well-defined (and are not as broad as “fail to take reasonable care”).