Thanks. Might be more useful if you explain why the arguments weren’t persuasive to you. Our interest is in a system of liability that can meet AI safety goals and at the same time have a good chance of success in the real world. Anyway, even if we start from your premise, it doesn’t mean strict liability would work better than a fault-based liability system (as we demonstrated in Argument 1).
Might be more useful if you explain why the arguments weren’t persuasive to you
So my position is that most of your arguments are worth some “debate points,” but mitigating potential x-risks outweighs them.
Our interest is in a system of liability that can meet AI safety goals and at the same time have a good chance of success in the real world
I’ve personally made the mistake of thinking the Overton Window was narrower than it actually was. So even though such laws may not seem politically viable now, my strong expectation is that this will change quickly. At the same time, my intuition is that if we’re going to pursue the liability route, strict liability at least has the advantage of keeping developers focused on preventing harm rather than on maneuvering to avoid legal responsibility: under strict liability such maneuvering won’t help, so prevention is the only option.
I know that I wrote above:
In any case my main worry about strong liability laws is that we may create a situation where AI developers end up thinking primarily about dodging liability more than actually making the AI safe.
and that this is in tension with what I’m writing now. On reflection, my concerns about strong liability laws apply only to strong fault-based liability laws, not to strict liability laws, so in retrospect I wouldn’t have included that sentence.
Regarding your discussion in point 1 (apologies for not addressing this in my initial reply): I just don’t buy that courts being able to handle chainsaw cases or medical and actuarial evidence means they’re equipped to handle transformative AI, given how fast the situation is changing and how disputed many of the key questions are. The stakes also make me reluctant to place an unnecessary bet on the capabilities of the courts: even if there were a 90% chance that the courts would be fine, I’d prefer to avoid the 10% probability that they aren’t.