Responding to the same quotation, I don’t think there is any way to structure a system for dealing with tort-like behavior that isn’t “unfair” to someone. If activity X imposes costs on third parties, our options are:
Impose those costs on the person performing activity X (strict liability);
Impose those costs on the third parties (immunity);
Impose those costs on the person performing activity X only if at fault, and on the third parties otherwise (fault-based liability);
Require third parties to carry insurance to cover the harms (an indirect way of imposing costs on them);
Have the government pay for the harms (which makes everyone in society pay).
Each of these options is likely to be “unfair” in some applications, as someone is going to have to bear harms out of proportion to their responsibility in creating the harm. To credit the argument, I think we have to conclude that it is worse to be “unfair” to the AI companies than to the third parties.
Yes, yes: I think the point we wanted to put across is what you capture with “to credit the argument.” Strict liability here would be “unreasonably unfair” insofar as it imposes liability without first considering the circumstances. I think it’s fine for a legal regime to be “unfair” to a party (for the reasons you’ve outlined) where there’s some kind of good-enough rationale. Fault-based liability, by contrast, requires the circumstances to be considered first.
As I noted in another comment, I think there is a set of cases—those with low-to-moderate harm amounts—for which the realistic options are strict liability and de facto immunity. At least for that class, I find no liability to be more “unreasonably unfair” than strict liability.
Whether a fault-based liability system is viable (or instead veers close to a “no liability” approach) for other sets of cases is an open question for me, though I remain skeptical at this time. The US legal system, at least, has a poor track record of managing harms from rapidly emerging technologies in the short-to-medium run, so I’d need solid evidence that things will be different with this one.
Yeah, this is sensible. But I’m still hopeful that work like DeepMind’s recent research or Clymer et al.’s recent work can help us craft duties for a fault-based system that doesn’t collapse into a de facto zero-liability regime. Worth remembering that the standard of proof will not be perfection: so long as a judge is more convinced than not, liability would be established.
Thanks for the references. The liability system needs to cover AI harms that are not catastrophes, including the stuff that goes by “AI ethics” more than “AI safety.” Indeed, those are the kinds of harms that are likely more legible to the public and will drive public support for liability rules.
(In the US system, that will often be a jury of laypersons deciding any proof issues, by the way. In the federal system at least, that rule has a constitutional basis and isn’t changeable by ordinary legislation.)