Interesting idea, but I foresee several challenges in implementation:
First, few organizational outcomes are truly binary. The organization may achieve some, but not all, of its objectives, in which case there is likely to be litigation over whether the actual outcome counts as an insured loss.
Second, it would be expensive for an insurance company to develop an accurate sense of the odds of success, especially because many of the relevant pieces of information are under the organization’s control and may be very difficult to measure free of organizational influence. If I were the insurer, I’d require a significant application fee just to provide a quote, and I would quote very conservatively.
Third, incentives can change. Even if the insurer believes your assertion about preferences, those preferences could shift over time, and you could then have an incentive to “throw” the first project. Detecting a failure to provide “best efforts” is challenging and uncertain. I think the workaround is for the insurance company to require significant co-insurance (e.g., only half of the loss from the failed initiative is covered). That gives the organization much more concrete skin in the game than its mere assertion that it would prefer success to collecting the insurance payout.
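To make that moral-hazard point concrete, here is a minimal sketch with made-up numbers (the $2B figure from the hypothetical, an assumed 60% chance of success with genuine effort, and a project that fails for certain if “thrown”). Under full coverage the organization recovers the same expected amount whether or not it really tries; with 50% co-insurance, throwing the project is strictly worse:

```python
# Toy comparison of an insured organization's expected recovery under full
# coverage vs. 50% co-insurance. All numbers are illustrative assumptions.

LOSS_IF_FAIL = 2.0e9          # value at stake (the $2B from the hypothetical)
P_SUCCESS_BEST_EFFORT = 0.6   # assumed odds of success with genuine effort
P_SUCCESS_IF_THROWN = 0.0     # a "thrown" project fails by construction

def expected_recovery(p_success: float, coverage_share: float) -> float:
    """Expected value to the organization: the full amount if the project
    succeeds, plus the insured share of the loss if it fails."""
    success = p_success * LOSS_IF_FAIL
    insured_loss = (1 - p_success) * coverage_share * LOSS_IF_FAIL
    return success + insured_loss

for coverage in (1.0, 0.5):   # full coverage vs. 50% co-insurance
    honest = expected_recovery(P_SUCCESS_BEST_EFFORT, coverage)
    thrown = expected_recovery(P_SUCCESS_IF_THROWN, coverage)
    print(f"coverage {coverage:.0%}: best effort ${honest/1e9:.2f}B, "
          f"thrown ${thrown/1e9:.2f}B")
```

On these toy numbers, full coverage leaves best effort and throwing tied at $2B, so any drift in preferences makes throwing costless; with 50% co-insurance there is a $0.6B gap the organization has to care about.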
Finally, the hypothetical scenario (in which you don’t seem to have any good alternative use for $2B) is fairly unlikely. That doesn’t mean that insurance would have no use cases, only that they may be limited.
One interesting possible application would be having different EA cause areas “insure” each other. E.g., if animal-welfare people want to try a high-risk, mega-high-reward intervention but have a hard time tolerating the idea of losing some high-value and fairly safe options if the intervention fails, groups from another cause area might be willing to “insure” them. Unlike an insurance company, other EAs will be better placed to develop an accurate sense of the odds of success and to assess whether the insured’s interests are likely to change.
Moreover, the insurance “payout” would likely still have real value for the “insuring” EAs: even if I would not have donated to animal-welfare causes in the first instance, the fulfillment of high-value options in that area still brings me utilons. Likewise, if you’re an animal-welfare person, the payment of an insurance “premium” to global health/development still generates utilons in your book, even if not as many as the same dollars applied to animal welfare.
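As a rough sketch of that accounting, with numbers invented purely for illustration (a $10M premium, a $100M payout, a 5% failure probability, and each side valuing a dollar spent on the other’s cause at 0.4 of a dollar spent on its own), comparing a commercial insurer (cross-cause value of zero to both sides) with a fellow EA group shows how much of the usual friction disappears:

```python
# Toy utilon accounting for cross-cause "insurance" between EA groups.
# All figures are invented assumptions for illustration only.

PREMIUM = 10e6    # hypothetical premium paid by the animal-welfare group
PAYOUT = 100e6    # hypothetical payout if the risky intervention fails
P_FAIL = 0.05     # assumed failure probability
OWN = 1.0         # utilons per dollar spent on your own cause

for cross in (0.0, 0.4):  # utilons per dollar spent on the *other* party's cause
    # Insured (animal welfare): the premium is only a partial sacrifice, since a
    # dollar sent to global health still buys `cross` utilons; the payout, if
    # triggered, comes back into animal welfare at full value.
    insured = -PREMIUM * (OWN - cross) + P_FAIL * PAYOUT * OWN
    # Insurer (global health): keeps the premium at full value; the payout, when
    # triggered, is only a partial sacrifice for the same reason.
    insurer = PREMIUM * OWN - P_FAIL * PAYOUT * (OWN - cross)
    print(f"cross-cause value {cross:.1f}: insured {insured/1e6:+.1f}M utilons, "
          f"insurer {insurer/1e6:+.1f}M utilons")
```

On these made-up numbers, the insured’s expected utilon cost of buying risk protection shrinks from 5M to 1M, and the insurer comes out further ahead, because neither side’s payments are a dead loss when they land in a cause the payer still values.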