Good point re complying everywhere, but I think the UK example shows that countries are keen to have the AI companies’ offices in their jurisdictions, and are clearly worried that having some liability regimes would discourage that.
I don’t think we connect that part and the previous one well enough. But anyway, it’s very hard to convince anyone that strict liability ought to be the regime in place unless you can demonstrate that the risk is super high and potentially very consequential. I can’t see how your alternative works because, well, I haven’t seen any other scenario so far where strict liability has been applied on that rationale. They can pass risk to customers via higher charges, but won’t the charges have to be unusually high to guard against possible bankruptcy?
having some liability regimes would discourage that.
Logically, that should only be the case if the firm is exposed to more liability by locating in that jurisdiction than in an alternative one. If the jurisdiction’s choice-of-law rule is “apply the liability rules of the jurisdiction where the harm occurred,” I don’t see how that is appreciably worse for the AI company. If they have assets and/or business in the country where the harm occurred (or in any country that will enforce that country’s court judgments), they are going to be vulnerable to judgments issued by that country’s courts. I’m not up to date on who will enforce whose judgments, but exiting the US or EU would be a massive cost for any AI company, and there are other countries for which exiting would be a major commercial disadvantage.
I can’t see how your alternative works because, well, I haven’t seen any other scenario so far where strict liability has been applied on that rationale.
The U.S., at least, has imposed no-fault compensation regimes where political and/or other realities were seen to warrant it, although those regimes’ setups are admittedly different. The two that come immediately to mind are the so-called “vaccine court” and the workers’ compensation system.
So it can be done; the question is whether the political will to do it exists. (I do agree that it won’t happen through expansion of common-law doctrine.) My own view is that there’s a decent chance the political will comes into existence once people realize that the practical alternative in many cases is de facto immunity for the AI companies. And I think that’s largely where the crux is.
Thanks, really interesting.