Thanks for posting this. I think this is the kind of practical, actionable analysis that we need.
Regarding this:
Given that there is still no way for model developers to deterministically guarantee a model’s expected behavior to downstream actors, and given the benefits that advanced AI could have in society, we think it is unfair for an actor to be forced to pay damages regardless of any steps they’ve taken to ensure the advanced AI in question is safe.
It seems to me that this is begging the question. If we don’t know how to make AIs safe, that is a reason not to make AIs at all, not a reason to make unsafe AIs. This is not really any different from how the nuclear power industry has been regulated out of existence in some countries[1].
Responding to the same quotation, I don’t think there is any way to structure a system for dealing with tort-like behavior that isn’t “unfair” to someone. If activity X imposes costs on third parties, our options are:
Impose those costs on the person performing activity X (strict liability);
Impose those costs on the third parties (immunity);
Impose those costs on the person performing activity X only if at fault, and on the third parties otherwise (fault-based liability);
Require third parties to carry insurance to cover the harms (an indirect way of imposing costs on them);
Have the government pay for the harms (which makes everyone in society pay).
Each of these options is likely to be “unfair” in some applications, as someone is going to have to bear harms out of proportion to their responsibility in creating the harm. To credit the argument, I think we have to conclude that it is worse to be “unfair” to the AI companies than to the third parties.
Yes, exactly. I think the point we wanted to put across is captured by your phrase “to credit the argument”. Strict liability here would be “unreasonably unfair” insofar as it imposes liability without first considering the circumstances. I think it’s fine for a legal regime to be “unfair” to a party (for the reasons you’ve outlined) where there’s a good-enough rationale for it; fault-based liability would at least require that the circumstances be considered first.
As I noted in another comment, I think there is a set of cases—those with low-to-moderate harm amounts—for which the realistic options are strict liability and de facto immunity. At least for that class, I find a no-liability regime to be more “unreasonably unfair” than strict liability.
Whether a fault-based liability system is viable (or instead veers close to a “no liability” approach) for other sets of cases remains an open question for me, although I remain skeptical at this time. At least the US legal system has a poor track record of managing harms from rapidly-emerging technologies in the short to medium run, so I’d need solid evidence that it will be different with this rapidly-emerging technology.
Yeah, this is sensible. But I’m still hopeful that work like DeepMind’s recent research or Clymer et al.’s recent work can help us define duties for a fault-based system that doesn’t collapse into a de facto zero-liability regime. It’s worth remembering that the standard of proof will not be perfection: so long as a judge is more convinced than not, liability would be established.
Thanks for the references. The liability system needs to cover AI harms that are not catastrophes, including the stuff that goes by “AI ethics” more than “AI safety.” Indeed, those are the kinds of harms that are likely more legible to the public and will drive public support for liability rules.
(In the US system, that will often be a jury of laypersons deciding any proof issues, by the way. In the federal system at least, that rule has a constitutional basis and isn’t changeable by ordinary legislation.)
Thanks, Ian. Yes, fair point. If the suggestion is that a comparison with nuclear power makes sense, I would say: partially. I think there’s a need to justify why that’s the comparative feature that matters most, given that there are other features (for example, potential benefits to humanity at large) that might lead us to conclude that the two aren’t necessarily comparable.
[1] I think this analogy holds regardless of your opinions about the actual dangerousness of nuclear power.