I believe many in the effective altruism community, including my past self, have at some point concluded that reducing the nearterm risk of human extinction is astronomically cost-effective. For this to hold, reducing that risk has to make astronomically valuable futures more likely, since those futures are what drive the expected value of the future.
However, reducing the nearterm risk of human extinction only obviously makes worlds with close to 0 value less likely. It does not have to make worlds with astronomical value significantly more likely. A priori, I would say the freed probability mass moves to nearby worlds which are just slightly better than those where humans go extinct soon. Consequently, interventions reducing nearterm extinction risk need not be astronomically cost-effective. The toy model below illustrates how much turns on this assumption.
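Here is that toy model in Python (a sketch of mine; apart from the 1.40*10^52 lives I estimate later in this post, every number is an assumption picked purely for illustration):

```python
# Toy model (a sketch of mine; all numbers except 1.40e52 are illustrative
# assumptions) contrasting two views of where the probability mass freed by
# reducing nearterm extinction risk ends up.

delta_p = 0.01      # assumed absolute reduction in nearterm extinction risk
v_nearby = 8e9      # assumed value (in lives) of a world only slightly better than extinction
v_astro = 1.40e52   # my estimate of the expected value of the future, in lives
p_astro = 1e-10     # assumed share of the freed mass reaching astronomical worlds under even spreading

# View A (mine): the freed mass lands on nearby, slightly better worlds.
gain_a = delta_p * v_nearby

# View B: the freed mass is spread over all surviving worlds, so a share
# p_astro of it reaches the astronomically valuable ones.
gain_b = delta_p * ((1 - p_astro) * v_nearby + p_astro * v_astro)

print(f"gain in expected value under A: {gain_a:.2e} lives")  # 8.00e+07
print(f"gain in expected value under B: {gain_b:.2e} lives")  # ~1.40e+40, dominated by v_astro
```

The conclusion of astronomical cost-effectiveness requires something like view B, whereas my claim is that view A is the better prior.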
I wonder whether the conclusion that reducing the nearterm risk of human extinction is astronomically cost-effective may be explained by:
[...]
Thanks, interesting ideas. I overall wasn’t very persuaded—I think if we prevent an extinction event in the 21st century, the natural assumption is that probability mass is evenly distributed over all other futures, and we need to make arguments in specific cases as to why this isn’t the case. I didn’t read the whole dialogue but I think I mostly agree with Owen.
I think if we prevent an extinction event in the 21st century, the natural assumption is that probability mass is evenly distributed over all other futures, and we need to make arguments in specific cases as to why this isn’t the case.
I make some specific arguments:
As far as I can tell, the (posterior) counterfactual impact of interventions whose effects can be accurately measured, like those in global health and development, decays to 0 as time goes by. It can be modelled as increasing the value of the world for a few years or decades, which is far from astronomical (a simple decay model is sketched below).
[...]
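To picture the decay pattern I have in mind, here is a minimal sketch (the exponential form and the 10-year half-life are assumptions of mine, chosen purely for illustration):

```python
# Minimal sketch of a decaying counterfactual impact (assumptions mine:
# exponential decay of the annual benefit, with a 10-year half-life).
import math

half_life = 10                        # years for the annual benefit to halve (assumed)
decay_rate = math.log(2) / half_life  # continuous decay rate

def annual_benefit(t, initial_benefit=1.0):
    """Counterfactual benefit t years after the intervention."""
    return initial_benefit * math.exp(-decay_rate * t)

# The total impact is the integral of the decaying benefit,
# initial_benefit / decay_rate, i.e. a few decades' worth of benefit.
total_impact = 1.0 / decay_rate
print(f"total impact: {total_impact:.1f} benefit-years")  # ~14.4
```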
Here are some intuition pumps for why reducing the nearterm risk of human extinction says practically nothing about changes to the expected value of the future. In terms of:
Human life expectancy:
I have around 1 life of value left, whereas I calculated an expected value of the future of 1.40*10^52 lives.
Ensuring the future survives for 1 more year secures 8*10^7 lives (= 8*10^(9 - 2), for a population of 8 billion and a lifespan of 100 years). This is analogous to ensuring I survive for 5.71*10^-45 lives (= 8*10^7/(1.40*10^52)), i.e. 1.80*10^-35 seconds (= 5.71*10^-45*10^2*365.25*86400).
Decreasing my risk of death over such an infinitesimal period of time says basically nothing about whether I have significantly extended my life expectancy. In addition, I should be a priori very sceptical about claims that the expected value of my life will be significantly determined over that period (e.g. because my risk of death is concentrated there).
Similarly, I am guessing decreasing the nearterm risk of human extinction says practically nothing about changes to the expected value of the future. Additionally, I should be a priori very sceptical about claims that the expected value of the future will be significantly determined over the next few decades (e.g. because we are in a time of perils).
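For concreteness, the arithmetic of this analogy can be checked directly (all figures are from the post):

```python
# Checking the arithmetic of the analogy above (all figures from the post).

ev_future = 1.40e52     # expected value of the future, in lives
population = 8e9        # current world population
lifespan = 100          # lifespan in years

lives_per_year = population / lifespan            # lives secured per year of survival
fraction = lives_per_year / ev_future             # fraction of the future's expected value
seconds_per_life = lifespan * 365.25 * 86400      # seconds in a 100-year life
equivalent = fraction * seconds_per_life          # equivalent survival time for one life

print(f"{lives_per_year:.2e} lives/year")   # 8.00e+07
print(f"{fraction:.2e} lives")              # 5.71e-45
print(f"{equivalent:.2e} seconds")          # 1.80e-35
```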
A missing pen:
If I leave my desk for 10 min, and a pen is missing when I come back, I should not assume the pen is equally likely to be at any two points inside a sphere of radius 180 M km (= 10*60*3*10^8 m, the distance light travels in 10 min) centred on my desk. Assuming the pen is around 180 M km away would be even less valid.
The probability of the pen being in my home will be much higher than the probability of it being outside. The probability of it being outside Portugal will be negligible, that of it being outside Europe even lower, and that of it being on Mars lower still[5].
Similarly, if an intervention makes the least valuable future worlds less likely, I should not assume the freed probability mass is as likely to land on slightly more valuable worlds as on astronomically valuable ones. Assuming the probability mass is all moved to the astronomically valuable worlds would be even less valid.
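A sketch of the kind of non-uniform prior I have in mind for the pen (the exponential falloff and the 10 m scale are assumptions of mine; only the ordering of the probabilities matters, not the exact numbers):

```python
# Sketch of a distance-based prior over the pen's location (assumptions mine:
# exponential falloff with a 10 m characteristic scale).
import math

def p_beyond(distance_m, scale_m=10.0):
    """Probability the pen is further than distance_m from my desk."""
    return math.exp(-distance_m / scale_m)

print(f"P(outside my home, ~10 m):    {p_beyond(10):.1e}")    # 3.7e-01
print(f"P(outside Portugal, ~10^5 m): {p_beyond(1e5):.1e}")   # underflows to 0.0, negligible
print(f"P(as far as Mars, ~10^11 m):  {p_beyond(1e11):.1e}")  # 0.0
```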
Moving mass:
For a given cost/​effort, the amount of physical mass one can transfer from one point to another decreases with the distance between them. If the distance is sufficiently large, basically no mass can be transferred.
Similarly, for a given cost/effort, the probability mass which can be transferred from the least valuable worlds to more valuable ones decreases with the distance in value between them. If a world is sufficiently far away in value (i.e. sufficiently valuable), basically no mass can be transferred to it.
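The analogy can be made quantitative with another toy model (the exponential falloff and both parameters are assumptions of mine, chosen purely for illustration):

```python
# Toy model of the transfer analogy (assumptions mine: for a fixed cost/effort,
# the probability mass moved to a world at value-distance d falls off as
# exp(-d / scale)).
import math

freed_mass = 0.01   # assumed probability mass freed from the least valuable worlds
scale = 1e10        # assumed characteristic value-distance (in lives) the effort can reach

def mass_transferred(value_distance):
    """Probability mass reaching a world at the given distance in value."""
    return freed_mass * math.exp(-value_distance / scale)

print(f"to slightly better worlds (d ~ 10^9):    {mass_transferred(1e9):.1e}")      # 9.0e-03
print(f"to astronomical worlds (d ~ 1.40*10^52): {mass_transferred(1.40e52):.1e}")  # 0.0
```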