We all know by now that countries like the US and China are in a rush to become global leaders in AI development and use, because of the economic and military edge that advanced AI could give them. That ambition is bound to shape their decisions about AI regulation.
I hear that, but AI liability regulation is presumably going to be governed by the country in which the harm occurs. If you’re a Chinese AI company and want to make a ton of money, you’re going to have to operate in the US, EU, and other highly developed countries. To quote Willie Sutton (allegedly) when asked why he robbed banks: “Because that’s where the money is.” That means that you’re going to have to comply with those countries’ standards when operating in them. It’s not clear why a rule that governs liabilities for harms committed in the US would have a significantly greater impact on US firms than on Chinese ones.
I would agree insofar as (e.g.) the US should not allow people harmed by AI in China to sue in the US under more favorable standards than China would apply. That would disadvantage U.S. companies—who would be held to U.S. standards of liability for their harms in China—while Chinese companies would be held to lower standards (because they would generally not be amenable to personal jurisdiction in the U.S. for non-U.S. harms).
The other reason strict liability is unlikely to gain traction is that there’s still no expert consensus on how high the risk posed by highly capable AI actually is.
I don’t see a clear connection here. The liability system is not meant to directly prevent (or compensate for) global catastrophic risks. In the event of a catastrophic event, the offending AI company is toast (hopefully the rest of us aren’t). It will be shut down irrespective of the outcome of any litigation about its fault.
It is true that EAs’ motivation for imposing heavier liability on AI companies is related to GCRs, but there’s no reason that has to be the chosen policy rationale. Something as mundane as “They are in the best position to shoulder the harms that will inevitably come along, and pass that risk onto their customers through higher prices” could do the trick.
Yes, I see a strong argument for the claim that the companies are in the best position to shoulder the harms that will inevitably come along, and pass that risk onto their customers through higher prices—but the other critical part is that this also changes incentives because liability insurers will demand the firms mitigate the risks. (And this is approaching the GCR argument, from a different side.)
The biggest fish—which I assume are the ones you are going to be most worried about from a GCR perspective—are very likely to self-insure.
I’m also less confident in insurers’ abilities to insist on and monitor risk from AI development than risk exposure from application of AI. For instance, it seems a lot easier for a third party (who knows much less about AI systems than the insured) to figure out “You shouldn’t let AI determine the results of that CT scan without a human overread” than “You shouldn’t use technique X to grow your AI technology.”
Thanks, really interesting.
Good point re complying everywhere, but I think the UK example shows that countries are keen to have the AI companies’ offices in their jurisdictions, and are clearly worried that adopting certain liability regimes would discourage that.
I don’t think we connect that part and the previous one well enough. But anyway, it’s very hard to convince anyone that strict liability ought to be the regime in place unless you can demonstrate that the risk is super high and the potential harm very consequential. I can’t see how your alternative works because, well, I haven’t seen any other scenario so far where strict liability has been applied on that rationale. They can pass risk to customers via higher charges, but won’t the charges have to be unusually high to guard against possible bankruptcy?
“adopting certain liability regimes would discourage that.”
Logically, that should only be the case if the firm is exposed to more liability by locating in that jurisdiction than in an alternative one. If the jurisdiction’s choice-of-law rule is “apply the liability rules of the jurisdiction where the harm occurred,” I don’t see how that is appreciably worse for the AI company. If they have assets and/or business in the country where the harm occurred—or in any country that will enforce that country’s court judgments—they are going to be vulnerable to judgments issued by that country’s courts. I’m not up to date on who will enforce whose judgments, but exiting the US or EU would be a massive cost for any AI company. There are other countries for which exiting would be a major commercial disadvantage.
“I can’t see how your alternative works because, well, I haven’t seen any other scenario so far where strict liability has been applied on that rationale.”
The U.S. at least has imposed no-fault compensation regimes where political and/or other realities were seen to warrant it, although their setup is admittedly different. The two that come immediately to mind are the so-called “vaccine court” and the workers’ compensation system.
So it can be done; the question is whether the political will to do it exists. (I do agree that it won’t happen through expansion of common-law doctrine.) My own view is that there’s a decent chance that the political will comes into existence once people realize that the practical alternative in many cases is de facto immunity for the AI companies. And I think that’s where the crux largely is.