Yes, I see a strong argument for the claim that the companies are in the best position to shoulder the harms that will inevitably come along, and pass that risk onto their customers through higher prices—but the other critical part is that this also changes incentives because liability insurers will demand the firms mitigate the risks. (And this is approaching the GCR argument, from a different side.)
The biggest fish—which I assume are the ones you are going to be most worried about from a GCR perspective—are very likely to self-insure.
I’m also less confident in insurers’ ability to insist on and monitor mitigation of risk from AI development than of risk exposure from the application of AI. For instance, it seems a lot easier for a third party (who knows much less about AI systems than the insured) to figure out “You shouldn’t let AI determine the results of that CT scan without a human overread” than “You shouldn’t use technique X to grow your AI technology.”