I mean, what you are saying is literally what I said. There are two ways the bill says the Attorney General can sue you: one, if you developed a covered model that caused more than $500M harm; two, if you violated any of the prescribed transparency/accountability mechanisms in the bill.
Of course you need to have some penalty if you don’t follow the transparency/accountability requirements of the bill; how else would you expect people to do any of the things the bill requires of them?
To clarify, I agree that the ways you can be liable mostly fall into the two categories you delineate, but I think your characterization of the categories might be incorrect.
You say that a developer would be liable:
if you developed a covered model that caused more than $500M harm
if you violated any of the prescribed transparency/accountability mechanisms in the bill
But I think a better characterization would be that you can be liable:
if you developed a covered model that caused more than $500M harm → if you fail to take reasonable care to prevent critical harms
if you violated any of the prescribed transparency/accountability mechanisms in the bill
It’s possible “to fail to take reasonable care to prevent critical harms” even if you do not cause critical harms. The bill doesn’t specify any new category of liability specifically for developers who have developed models that cause critical harm.
To use Casado’s example: if a self-driving car was involved in an accident that resulted in a person’s death, and if that self-driving car company did not “take reasonable care to prevent critical harms”—say, by having a safety and security protocol much worse than that of other companies—it seems plausible that the company could be fined 10% of its compute costs or be made to pay other damages. (I don’t know if self-driving cars actually would be affected by this bill.)
I think the best reason this might be wrong is that courts might not be willing to entertain this argument, or that in tort law “failing to take reasonable care to avoid something” requires that you actually “fail to avoid that thing”—but I don’t have enough legal background to know.
I think that’s inaccurate (though I will admit the bill text here is confusing).
“Critical harm” is defined as causing more than $500M of damage, so at the very least you have to be negligent specifically on the issue of whether your systems can cause $500M of harm.
But I think more concretely the conditions under which the AG can sue for damages if no critical harm has yet occurred are pretty well-defined (and are not as broad as “fail to take reasonable care”).