Thanks for the references. The liability system needs to cover AI harms that are not catastrophes, including the stuff that goes by “AI ethics” more than “AI safety.” Indeed, those are the kinds of harms that are likely more legible to the public and will drive public support for liability rules.
(In the US system, by the way, it will often be a jury of laypersons deciding any proof issues. In federal court at least, that rule has a constitutional basis in the Seventh Amendment and isn't changeable by ordinary legislation.)