There seem to be some pretty large things I disagree with in each of your arguments:
> The second is a situation in which some highly capable AI that developers and users honestly think is safe or fine turns out not to be as safe as they imagined.
This seems like exactly the sort of situation I want AI developers to think long and hard about. Frankly, your counterexample looks like an example to me.
Autonomy seems like a primary feature of the highly capable advanced AI currently being built. None of the comparators typically used shares this feature. Surely that should matter in any analysis.
To me, where I cannot impose costs directly on the autonomous entity, autonomy again makes strict liability better, not worse. You seem to be arguing that if nuclear explosions or trains were autonomous, we shouldn't place strict liability on their creators. That is the opposite of what I'd expect.
> Given the interests at play, strict liability will struggle to gain traction
I do not trust almost anyone's ability to predict this stuff. If it's good on its merits, let's push for it. Notably, Robin Hanson and some other "risk is low" people support strict liability (because they don't think disasters will happen), so I think there is a case for a coalition around this. I don't buy that you can predict that it will struggle.
I am interested in what bad things you think might happen with strict liability or how you think it’s gone in the past?