Thanks, really interesting.
Good point re complying everywhere, but I think the UK example shows that countries are keen to have the AI companies’ offices in their jurisdictions, and are clearly worried that certain liability regimes would discourage that.
I don’t think we connect that part and the previous one well enough. In any case, it’s very hard to convince anyone that strict liability ought to be the regime in place unless you can demonstrate that the risk is both very likely and potentially very consequential. I can’t see how your alternative works because, well, I haven’t seen any other scenario so far where strict liability has been applied on that rationale. Companies can pass the risk on to customers via higher charges, but wouldn’t those charges have to be unusually high to cover the possibility of bankruptcy?
Yeah, this is sensible. But I’m still hopeful that work like DeepMind’s recent research or Clymer et al.’s recent work can help us define duties for a fault-based system that doesn’t collapse into a de facto zero-liability regime. Worth remembering that the standard of proof won’t be perfection: so long as a judge is more convinced than not, liability would be established.