Thanks for the post, Matthew! You argue that the expected welfare of the future is astronomically large, but this does not necessarily imply strong longtermism, which Greaves and MacAskill (2021) define as follows.
Axiological strong longtermism (ASL): In the most important decision situations facing agents today,
(i) Every option that is near-best overall is near-best for the far future.
(ii) Every option that is near-best overall delivers much larger benefits in the far future than in the near future.
I would say a 10^-100 chance of 10^100 QALY is as good as 1 QALY. However, even if I thought the risk of human extinction over the next 10 years was 10% (I guess it is 10^-7), I would not conclude that decreasing it would be astronomically cost-effective. One should be scope-sensitive not only to large potential benefits, but also to their small probabilities. Longtermists typically come up with a huge amount of benefits (e.g. 10^50 QALY), and then independently guess a probability which is only moderately small (e.g. 10^-10), which results in huge expected benefits (e.g. 10^40 QALY). Yet, the amount of benefits is not independent of their probability. For reasonable distributions describing the benefits, I think the expected benefits coming from very large benefits will be negligible. For example, if the benefits follow a power law with tail index alpha > 0, their probability density will be proportional to benefits^-(1 + alpha), so the expected benefits linked to a given amount of benefits will be proportional to benefits*benefits^-(1 + alpha) = benefits^-alpha. This decreases with the amount of benefits, and, as long as alpha > 1 (which is needed for the expected benefits to be finite), the expected benefits coming from astronomical benefits will be negligible.
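To make this concrete, here is a minimal sketch, assuming benefits follow a Pareto distribution with a tail index of 1.5 and a minimum benefit of 1 QALY (both hypothetical values I picked for illustration), computing the share of the expected benefits coming from outcomes above a given size.

```python
# Minimal sketch: share of expected benefits from astronomical outcomes,
# assuming benefits follow a Pareto distribution (hypothetical parameters).
alpha = 1.5  # assumed tail index; the mean is only finite for alpha > 1
b_min = 1.0  # assumed minimum benefit (QALY)

# For a Pareto distribution with density proportional to benefits^-(1 + alpha),
# the expected benefits from outcomes above a threshold B are
# E[b; b > B] = alpha*b_min^alpha/(alpha - 1)*B^(1 - alpha),
# which tends to 0 as B grows whenever alpha > 1.
def tail_expected_benefits(B):
    return alpha * b_min**alpha / (alpha - 1) * B**(1 - alpha)

total = tail_expected_benefits(b_min)  # total expected benefits, alpha*b_min/(alpha - 1)
for B in (1e10, 1e30, 1e50):
    share = tail_expected_benefits(B) / total
    print(f"Share of expected benefits from outcomes above {B:.0e} QALY: {share:.1e}")
```

Under these assumed parameters, outcomes above 10^50 QALY account for only around 10^-25 of the total expected benefits.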