Thanks for the detailed reply on that! You've clearly thought about this a lot, and I'm very happy to believe you're right on the impact of nuclear war, but it sounds like you are more or less opting for what I called option 1? In which case, just substitute nuclear war for a threat that would literally cause extinction with high probability (say, release of a carefully engineered pathogen with a high fatality rate, long incubation period, and high infectiousness). Wouldn't that meaningfully affect utility for more than a few centuries? Because there would be literally no one left, and that effect is guaranteed to be persistent! Even if it "just" reduced the population by 99%, that seems like it would very plausibly have effects for thousands of years into the future.
It seems to me that to avoid this, you have to either say that causing extinction (or a near-extinction-level catastrophe) is virtually impossible through any means (what I was describing as option 1), or go to the other extreme and say that it is virtually guaranteed in the short term anyway, so that the counterfactual impact disappears quickly (what I was describing as option 2). Just so I understand what you're saying, are you claiming one of these two things? Or is there another way out that I'm missing?
Thanks for the kind words. I was actually unsure whether I should have followed up, given that my comments in this thread had been downvoted (all else equal, I do not want to annoy readers!), so it is good to get some information.
Thanks for the detailed reply on that! You've clearly thought about this a lot, and I'm very happy to believe you're right on the impact of nuclear war, but it sounds like you are more or less opting for what I called option 1? In which case, just substitute nuclear war for a threat that would literally cause extinction with high probability (say, release of a carefully engineered pathogen with a high fatality rate, long incubation period, and high infectiousness). Wouldn't that meaningfully affect utility for more than a few centuries? Because there would be literally no one left, and that effect is guaranteed to be persistent! Even if it "just" reduced the population by 99%, that seems like it would very plausibly have effects for thousands of years into the future.
I think the effect of the intervention will still decrease to practically 0 in at most a few centuries in that case, such that reducing the near-term risk of human extinction is not astronomically cost-effective. I guess you are imagining that humans either go extinct or have a long future where they go on to realise lots of value. However, this is overly binary in my view. I elaborate on this in the post I linked to at the start of this paragraph, and in its comments.
It seems to me that to avoid this, you have to either say that causing extinction (or a near-extinction-level catastrophe) is virtually impossible through any means (what I was describing as option 1), or go to the other extreme and say that it is virtually guaranteed in the short term anyway, so that the counterfactual impact disappears quickly (what I was describing as option 2). Just so I understand what you're saying, are you claiming one of these two things? Or is there another way out that I'm missing?
I guess the probability of human extinction in the next 10 years is around 10^-7, i.e. very unlikely, but far from impossible.
I'm at least finding it useful to figure out exactly where we disagree. Please stop replying if it's taking too much of your time, but not because of the downvotes!
I guess you are imagining that humans either go extinct or have a long future where they go on to realise lots of value.
This isn't quite what I'm saying, depending on what you mean by "lots" and "long". For your "impossible for an intervention to have counterfactual effects for more than a few centuries" claim to be false, we only need the future of humanity to have a non-tiny chance of being longer than a few centuries (not that long), and for there to be conceivable interventions which have a non-tiny chance of very quickly causing extinction. These interventions would then meaningfully affect counterfactual utility for more than a few centuries.
To be more concrete and less binary, suppose we are considering an intervention that has a risk p of almost immediately leading to extinction, and otherwise does nothing. Let U be the expected utility generated in a year, in 500 years' time, absent any intervention. If you decide to make this intervention, it has the effect of changing U to (1 - p)U, and so the utility generated in that far-future year has been changed by pU.
For this to be tiny/non-meaningful, we either need p to be tiny, or U to be tiny (or both).
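To make the arithmetic concrete, here is a minimal sketch in Python; the values of p and U are purely illustrative placeholders, not estimates either of us endorses:

```python
# Minimal sketch of the counterfactual calculation above.
# p and U are purely illustrative placeholder values.

p = 0.01  # risk that the intervention almost immediately causes extinction
U = 1.0   # expected utility generated in a year, in 500 years' time, absent any intervention

utility_without_intervention = U
utility_with_intervention = (1 - p) * U  # extinction forgoes that year's utility with probability p

counterfactual_change = utility_without_intervention - utility_with_intervention
print(counterfactual_change)  # equals p * U = 0.01
```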
Are you saying:
1. There are no conceivable interventions someone could make with p non-tiny.
2. U, the expected utility in a year in 500 years' time, is approximately 0.
3. Something else… my setup of the situation is wrong, or unrealistic?
1. There are no conceivable interventions someone could make with p non-tiny.
2. U, the expected utility in a year in 500 years' time, is approximately 0.
3. Something else… my setup of the situation is wrong, or unrealistic?
1, in the sense that I think the change in the immediate risk of human extinction per cost is astronomically low for any conceivable intervention. Relatedly, you may want to check my discussion with Larks in the post I linked to.