I upvoted this comment because it is a very valuable contribution to the debate. However, I also gave it an “x” vote (what is that called? disagree?) because I strongly disagree with the conclusion and recommendation.
Very briefly, everything you write here is factually (as best I know) true. There are serious obstacles to creating and to enforcing strict liability. And to do so would probably be unfair to some AI researchers who do not intend harm.
However, we need to think in a slightly more utilitarian manner. Maybe being unfair to some AI developers is the lesser of two evils in an imperfect world.
I come from the world of chemical engineering, and I’ve spent some time working in pharma. In these fields, there is no “strict liability” as such, in the sense that you typically do not go to jail if you can demonstrate that you have done everything by the book.
BUT—the “book” for chemical engineering or pharma is a much, much longer book, based on many decades of harsh lessons. Whatever project I might want to do, I would have to follow very strict, detailed guidelines every step of the way. If I develop a new drug, it might require more than a decade of testing before I can put it on the market, and if I make a single mistake in that decade, I can be held criminally liable. If I build a factory and there is an accident, they can check every detail of every pump and pipe and reactor, they can check every calculation and every assumption I’ve made, and if just one of them is mistaken, or if just one time (even with a very valid reason) I have chosen not to follow the recommended standards, I can be criminally and civilly liable.
We have far more knowledge about how to create and test drugs than we have on how to create and test AI models. And in our wisdom, we believe it takes a decade to prove that a drug is safe to be released on the market.
We don’t have anything analogous to this for AI. So nobody (credible) is arguing that strict liability is an ideal solution or a fair solution. The argument is that, until we have a much better AI Governance system in place, with standards and protocols and monitoring systems and so on, then strict liability is one of the best ways we can ensure that people act responsibly in developing, testing and releasing models.
The AI developers like to argue that we’re stifling innovation if we don’t give them totally free rein to do whatever they find interesting or promising. But this is not how the world works. There are thousands of frustrated pharmacologists with ideas for drugs that might do wonders for some patients, but which are 3 years into a 10-year testing cycle instead of already saving lives. They understand, though, that this is necessary to create a world in which patients know that any drug prescribed by their doctor is safe for them (or that its potential risks are understood).
Strict liability is, in a way, telling AI model developers: “You say that your model is safe. OK, put your money where your mouth is. If you’re so sure that it’s safe, then you shouldn’t have any worries about strict liability. If you’re not sure that it’s safe, then you shouldn’t be releasing it.”
This feels to me like a reasonable starting point. If AI labs have a model which they believe is valuable but flawed (e.g., a risk of bias), they do have the option to release it with that warning, for example by declining to accept liability for certain identified risks. Lawmakers can then decide whether that’s acceptable. It may take time, but eventually we’ll move forward.
Right now, it’s the Wild West. I can understand the frustration of people with brilliant models which could do much good in the world, but we need to apply the same safety standards that we apply to everyone else.
Strict liability is neither ideal nor fair. It’s just, right now, the best option until we find a better one.
Even with decades of development of pharma knowledge, and a complex regulatory system, things still blow up badly (e.g., Vioxx). And the pharma companies usually end up paying through the nose in liability, too. Here, we have a much weaker body of built-up knowledge and much weaker ex ante regulation.