To the extent your proposed approach analogizes to aviation law, the nature of harms in that field strikes me as different in a way that is practically relevant. In aviation safety, the harms tend to be very large—when planes crash, people die or are seriously injured, usually multiple people. That means there is usually enough money at stake to incentivize litigants and lawyers to put up the resources to fight it out over fault/liability in a complex field.
In contrast, while the AI harms people on this Forum worry about are even more catastrophic than plane crashes, most AI harms will be of a rather more mundane sort. Imagine that an AI somehow causes me to break my leg, incur medical bills, and lose a few weeks of income, with damages in the range of $50K-$100K. Even with the benefit of a reverse burden of proof, it’s not reasonable to expect me to litigate the fault of a complex AI system for those stakes. Few people would sue the AI companies for non-catastrophic harms, and so those companies would not bear the full costs of their negligent acts.
If you limit an airplane to the ground, you ~have an automobile—and I think that may be a better metaphor for most AI-harm litigation. In my view, the correct system for managing most auto-related harms is no-fault insurance (with a fault-based supplemental system for extremely high damages). There’s no expensive litigation over fault or negligence, which would consume a great portion of any potential recovery for most harms. You can see no-fault as a form of strict liability (albeit handled on a first-party insurance basis).
I think other rationales support that system here as well. You seem to have much more faith than I do in the ability of lawmakers and regulators to adapt to rapidly changing technologies. Coming up with regulations that fairly cover all uses of AI until the next time the regulations are updated will prove practically impossible. The common-law method of making law through precedent is very slow and not up to the task. Even worse, from a U.S. perspective, many of these issues will be fact questions for a jury of laypersons to decide (and this is constitutionally required to some extent!).
To be sure, no-fault auto insurance requires all drivers to share the cost of compensating injured parties. That’s not feasible in the AI world, where most potential injured parties are more like pedestrians than fellow drivers. But it doesn’t seem unreasonable to me to expect AI users to carry insurance that compensates harmed persons on a no-fault basis—at least for harms too small to expect the injured party to litigate potential fault. I’d probably prefer it be held primarily by the end user of the AI (e.g., a company using AI to make decisions or take actions). Let the market decide which uses of AI are net positive for society and should move forward. If it’s not insurable, you probably shouldn’t be doing it.
An alternative way to deal with the disincentive effect would be awarding at least partial attorney’s fees and full litigation costs for plaintiffs who have a plausible basis for suing the AI company, win or lose (and full fees for those who prevail). That would be a bonanza for people in my profession, but might be worse for the AI companies than strict liability!
P.S. If you’re going to go with a fault-based system, you absolutely have to repeal arbitration laws to the extent that they restrict the availability of class actions. If 10,000 people have a $1,000 harm under similar circumstances, fault could potentially be addressed in a class action but certainly not in individual arbitrations.
I think that the use of insurance for moderate harms is often a commercial boondoggle for insurers, a la health insurance, which breaks incentives in many ways and leads to cost disease. And typical insurance regimes shift the burden of proof about injury in damaging ways because insurers have deep pockets to deny claims in court and fight cases that establish precedents. I also don’t think that it matters for tail risks—unless unlimited coverage is explicitly mandated, firms will have caps in the millions of dollars and will ignore tail risks that would bankrupt them.
One way to address the tail, in place of strict liability, would be legislation allowing anticipated harms to be stopped via legal action—whereas my understanding is that this type of prior restraint for uncertain harms isn’t available in most domains today.
I’d be interested in your thoughts on these points, as well as Cecil and Marie’s.
As an initial note, I don’t think my proposed model is that different from strict liability for small-to-midsize harms. But framing it in insurance terms is more politically palatable, and also allowed me to riff off the analogy between aviation law and automobile law to explain why I didn’t think the aviation-law analogy worked in a lot of cases. I didn’t intend to suggest a solution for the entire scope of AI liability issues. My focus probably comes as much from my professional interest in thinking about how modest-dollar disputes can be effectively litigated as from anything else.
That being said, I think that small/mid-size disputes would collectively be an important source of liability that would encourage more careful development and frankly would slow down development a bit. Their litigation would also bring more public attention to the harms, and provide grist for development of the common law as to AI harms. (If few people sue because proving fault is too expensive, there will be little development in caselaw.)
I think that the use of insurance for moderate harms is often a commercial boondoggle for insurers, a la health insurance, which breaks incentives in many ways and leads to cost disease.
Health insurance is to a large extent sui generis because it lacks many of the classic features of insurance. The insured = the beneficiary, and often has substantial control over whether and how much “loss” to incur. For instance, I could decide to ignore a ~mild skin condition, use cheap last-generation remedies, or seek coverage of newfangled stuff at $900/bottle (all paid by insurance and manufacturer coupon).
Furthermore, for public policy reasons, we won’t let the insurer penalize the insured based on claims history. In contrast, consider condo insurance—after my condo association had a water-leak claim, our deductible doubled and our water-damage deductible went up five-fold. I told the other unit owners that we could expect to ~lose water-damage coverage if we filed another such claim in the next few years.
You don’t see these pathologies as much in, e.g., automobile insurance (the party bearing the loss often isn’t the insured, the insured does not often “choose” the loss in the same way, and insurance companies can and will raise rates and dump risky clients altogether).
And typical insurance regimes shift the burden of proof about injury in damaging ways because insurers have deep pockets to deny claims in court and fight cases that establish precedents.
I’m not confident of this as an empirical matter—do you have a citation?
First off, under my (limited) model, fault would not be an issue so there would be no litigating that. Causation and injury could be litigated, but the potential causal and injurious mechanics for AI harm are manifold. So you probably wouldn’t get the same degree of deep-pockets motivation as in (say) asbestos or tobacco cases, where the causation questions could be common to tens of thousands of suits.
Next, in my experience as a practicing lawyer, repeat litigants want to bury/settle cases that would risk establishing bad precedent. They are also less likely to appeal than a one-off litigant. A prime example is the U.S. Government. Let’s say a district court decides you can sue the Government for X. That decision only controls the outcome of that specific case (and maybe binds that specific litigant); it isn’t even binding on the same judge in a different case. If the Government takes that case to the court of appeals and loses, now everyone in a multi-state region knows they can sue the Government for X. They don’t have to invest in litigating whether the Government is immune from suit for X before getting to the merits of their case. This is why appeals by the Government require the approval of one of the most senior officials in the entire Department of Justice (the Solicitor General).
Finally, I think what you describe is unavoidable in any scenario where a repeat litigant has a large financial stake in certain issues. If you push a ton of liability onto OpenAI (and it doesn’t pass that on to an insurer), it will have ~the same incentives that an insurer has in your model.
I also don’t think that it matters for tail risks—unless unlimited coverage is explicitly mandated, firms will have caps in the millions of dollars and will ignore tail risks that would bankrupt them.
Right. I agree with you that regulation and pre-emptive, injunctive litigation are the viable paths for controlling risks that would bankrupt the company. You’re right that injunctive relief is generally not available when money damages would be seen as adequate compensation for the harm, and the financial ability of the risk-creating party to pay those damages may often not get much weight. See, e.g., Brown et al v. Sec’y, HHS, No. 20-14210 (11th Cir. July 14, 2021). You could fix that by statute, though.
One way to address the tail, in place of strict liability, would be legislation allowing anticipated harms to be stopped via legal action [ . . . .]
From a theoretical perspective, one reason that preliminary injunctions are an extraordinary remedy is that there is often little practical way to compensate the enjoined party if it is later determined that they were wrongly enjoined. To do much good, your proposal would have to operate on an expedited basis. But if the standard is not demanding, a lot of injunctions will be issued that are ultimately judged erroneous on further review. My guess is that you will not be too troubled by that, but a lot of people will be.
Other issues with injunctions to resolve:
There is a well-known problem where a lot of people across the nation have standing to bring a lawsuit. They can all bring their own lawsuits, the scope of relief is necessarily nationwide, and they only have to convince one district judge (out of several hundred) that an injunction is warranted. There’s appellate review, but it is not quick, and certain aspects are deferential (factual findings are reviewed for clear error; the weighing of factors is reviewed for abuse of discretion). This is why you often see major U.S. government policies enjoined up front: conservatives know which districts and divisions to file in (often in Texas), and so do progressives (often D.C.). This could be fixed by centralizing such petitions in one court and consolidating cases.
There’s also an international version of the same problem. If the AI development activity is happening in the U.S., should any country be able to enjoin it? In other words, should any country get a legal veto on AI activity anywhere in the world? This would create a foreign-relations nightmare. It is well known that the judiciary in many countries (including several major world powers) is not meaningfully independent, and even where it largely is, there could be political influence. The temptation to enjoin other countries’ AI companies would be strong. This seems very hard to fix.
My guess is that the U.S. would pass a federal statute preventing enforcement of foreign injunctions in the U.S., and maybe allowing the President to take retaliatory action against foreign AI companies if the action against the U.S. company were deemed to be motivated by strategic concerns.
If your answer is some sort of world court, there is still a real problem of enforcement by a home-country government that doesn’t want to enforce the injunction against its own interests.