I would say a 10^-100 chance of 10^100 QALY is as good as 1 QALY. However, even if I thought the risk of human extinction over the next 10 years was 10% (I guess it is 10^-7), I would not conclude that decreasing it would be astronomically cost-effective. One should be scope-sensitive not only to large potential benefits, but also to their small probabilities. Longtermists typically come up with a huge amount of benefits (e.g. 10^50 QALY), and then independently guess a probability which is only moderately small (e.g. 10^-10), which results in huge expected benefits (e.g. 10^40 QALY). Yet the amount of benefits is not independent of its probability. For reasonable distributions describing the benefits, I think the expected benefits coming from very large benefits will be negligible. For example, if the benefits are described by a power law distribution with tail index alpha > 0, their probability will be proportional to "benefits"^-(1 + alpha), so the expected benefits linked to a given amount of benefits will be proportional to "benefits"*"benefits"^-(1 + alpha) = "benefits"^-alpha. This decreases with the benefits, so the expected benefits coming from astronomical benefits will be negligible.
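Here is a minimal numerical sketch of that point (all parameters are illustrative assumptions, not estimates): benefits are modelled as a Pareto (power law) distribution with tail index alpha and minimum x_min, where I pick alpha > 1 so that the mean is finite. The share of the expected benefits coming from outcomes above a given threshold then falls off as the threshold grows.

```python
# Illustrative sketch with assumed parameters: benefits X follow a Pareto
# (power law) distribution with density f(x) = alpha * x_min**alpha / x**(1 + alpha)
# for x >= x_min. The contribution of outcomes near x to the expected value is
# x * f(x), which is proportional to x**-alpha, and the expected benefits from
# outcomes above a threshold T are alpha * x_min**alpha * T**(1 - alpha) / (alpha - 1)
# (finite only when alpha > 1).

alpha = 1.5   # assumed tail index, chosen > 1 so the mean is finite
x_min = 1.0   # assumed minimum benefit, in QALY

def expected_benefits_above(threshold: float) -> float:
    """Expected benefits (QALY) contributed by outcomes larger than `threshold`."""
    return alpha * x_min**alpha * threshold**(1 - alpha) / (alpha - 1)

total = expected_benefits_above(x_min)
for t in (1e3, 1e6, 1e9, 1e50):
    share = expected_benefits_above(t) / total
    print(f"Share of expected benefits from outcomes above {t:.0e} QALY: {share:.2e}")
```

With these assumed numbers, the share of the expected benefits coming from outcomes above 10^50 QALY is about 10^-25, i.e. negligible, in line with the argument above.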
Thanks, Michael. For readers' reference, CLR stands for Center on Long-Term Risk.