As I noted in another comment, I think there is a set of cases—those with low-to-moderate harm amounts—for which the realistic options are strict liability and de facto immunity. At least for that class, I find no liability to be more “unreasonably unfair” than strict liability.
Whether a fault-based liability system is viable for other sets of cases (or instead veers close to a "no liability" approach) remains an open question for me, though I'm skeptical at this time. The US legal system, at least, has a poor track record of managing harms from rapidly emerging technologies in the short to medium run, so I'd need solid evidence that things will be different with this rapidly emerging technology.
Yeah, this is sensible. But I'm still hopeful that work like DeepMind's recent research or Clymer et al.'s recent work can help us define duties for a fault-based system that doesn't collapse into a de facto zero-liability regime. Worth remembering that the standard of proof won't be perfection: so long as a judge is more convinced than not, liability would be established.
Thanks for the references. The liability system needs to cover AI harms that fall short of catastrophe, including the kinds of harms that go under "AI ethics" more than "AI safety." Indeed, those harms are likely more legible to the public and will drive public support for liability rules.
(In the US system, that will often be a jury of laypersons deciding any proof issues, by the way. In the federal system at least, that rule has a constitutional basis and isn’t changeable by ordinary legislation.)