As an initial note, I don’t think my proposed model is that different from strict liability for small-to-midsize harms. But framing it in insurance terms is more politically palatable, and it also let me riff on the contrast between aviation law and automobile law to explain why I didn’t think the aviation-law analogy worked in a lot of cases. I didn’t intend to suggest a solution for the entire scope of AI liability issues. My focus probably comes as much from my professional interest in thinking about how modest-dollar disputes can be effectively litigated as from anything else.
That being said, I think that small/mid-size disputes would collectively be an important source of liability that would encourage more careful development and frankly would slow down development a bit. Their litigation would also bring more public attention to the harms, and provide grist for development of the common law as to AI harms. (If few people sue because proving fault is too expensive, there will be little development in caselaw.)
I think that the use of insurance for moderate harms is often a commercial boondoggle for insurers, a la health insurance, which breaks incentives in many ways and leads to cost disease.
Health insurance is to a large extent sui generis because it lacks many of the classic features of insurance. The insured = the beneficiary, and often has substantial control over whether and how much “loss” to incur. For instance, I could decide to ignore a ~mild skin condition, use cheap last-generation remedies, or seek coverage of newfangled stuff at $900/bottle (all paid by insurance and manufacturer coupon).
Furthermore, for public policy reasons, we won’t let the insurer penalize the insured for its claims history. In contrast, consider condo insurance—after my condo association had a water-leak claim, our deductible doubled and our water-damage deductible went up five-fold. I told the other unit owners that we could expect to ~lose water-damage coverage if we filed another such claim in the next few years.
Likewise, you don’t see these pathologies as much in, e.g., automobile insurance (the party bearing the loss often isn’t the insured, the insured does not often “choose” the loss in the same way, and insurance companies can and will raise rates or dump risky clients altogether).
And typical insurance regimes shift burden of proof about injury in damaging ways because insurers have deep pockets to deny claims in court and fight cases that establish precedents.
I’m not confident of this as an empirical matter—do you have a citation?
First off, under my (limited) model, fault would not be an issue so there would be no litigating that. Causation and injury could be litigated, but the potential causal and injurious mechanics for AI harm are manifold. So you probably wouldn’t get the same degree of deep-pockets motivation as in (say) asbestos or tobacco cases, where the causation questions could be common to tens of thousands of suits.
Next, as a practicing lawyer, my experience is that repeat litigants want to bury/settle cases that would risk establishing bad precedent. They are also less likely to appeal than a one-off litigant. A prime example is the U.S. Government. Let’s say a district court decides you can sue the Government for X. That decision only controls the outcome of that specific case and maybe that specific litigant; it isn’t even binding on the same judge in a different case. But if the Government takes that case to the court of appeals and loses, now everyone in a multi-state region knows they can sue the Government for X. They don’t have to invest in litigating whether the Government is immune from suit for X before getting to the merits of their case. This is why appeals by the Government require the approval of one of the most senior officials in the whole Department of Justice (the Solicitor General).
Finally, I think what you describe is unavoidable in any scenario where a repeat litigant has a large financial stake in certain issues. If you push a ton of liability on OpenAI (and it doesn’t pass that on to an insurer), it will have ~the same incentives that an insurer has in your model.
I also don’t think that it matters for tail risks—unless explicitly mandating unlimited coverage, firms will have caps in the millions of dollars, and will ignore tail risks that will bankrupt them.
Right. I agree with you that regulation—or pre-emptive, injunctive litigation—is the viable path for controlling risks that would bankrupt the company. You’re right that injunctive relief is generally not available when money damages would be seen as adequately compensating for the harm, and the financial ability of the risk-creating party to pay those damages may often not get much weight. See, e.g., Brown v. Sec’y, HHS, No. 20-14210 (11th Cir. July 14, 2021). You could fix that by statute, though.
One way to address the tail, in place of strict liability, would be legislation allowing anticipated harms to be stopped via legal action [ . . . .]
From a theoretical perspective, one reason that preliminary injunctions are an extraordinary remedy is that there is often no practical way to compensate the enjoined party if it is later determined that they were wrongly enjoined. To do much good, your proposal would have to operate on an expedited basis. But if the standard is not demanding, a lot of injunctions will issue that are ultimately judged erroneous on further review. My guess is that you will not be too troubled by that, but a lot of people will.
Other issues with injunctions to resolve:
There is a well-known problem where a lot of people across the nation have standing to bring a lawsuit. They can all bring their own lawsuits; the scope of relief is necessarily nationwide; and they only have to convince one district judge (out of several hundred) that an injunction is warranted. There’s appellate review, but it is not quick, and certain aspects are deferential (factual findings reviewed for clear error, weighing reviewed for abuse of discretion). This is why you often see major U.S. government policies enjoined up front: conservatives know which districts and divisions to file in (often in Texas), and so do progressives (often D.C.). This could be fixed by centralizing such petitions in one court and consolidating cases.
There’s also an international version of the same problem. If the AI development activity is happening in the U.S., should any country be able to enjoin it? In other words, should any country get a legal veto on AI activity anywhere in the world? This would create a foreign-relations nightmare. It is well known that the judiciary in many countries (including several major world powers) is not meaningfully independent, and even where it largely is, there could be political influence. The temptation to enjoin other countries’ AI companies would be strong. This seems very hard to fix.
My guess is that the U.S. would pass a federal statute preventing enforcement of foreign injunctions in the U.S., and maybe allowing the President to take retaliatory action against foreign AI companies if the President deemed the action against the U.S. company to be motivated by strategic concerns.
If your answer is some sort of world court, there is still a real enforcement problem: a home-country government may simply refuse to enforce the injunction against its own interests.