I'm at least finding it useful figuring out exactly where we disagree. Please stop replying if it's taking too much of your time, but not because of the downvotes!
I guess you are imagining that humans either go extinct or have a long future where they go on to realise lots of value.
This isn't quite what I'm saying, depending on what you mean by "lots" and "long". For your "impossible for an intervention to have counterfactual effects for more than a few centuries" claim to be false, we only need the future of humanity to have a non-tiny chance of being longer than a few centuries (not that long), and for there to be conceivable interventions which have a non-tiny chance of very quickly causing extinction. These interventions would then meaningfully affect counterfactual utility for more than a few centuries.
To be more concrete and less binary, suppose we are considering an intervention that has a risk p of almost immediately leading to extinction, and otherwise does nothing. Let U be the expected utility generated in a year, in 500 years' time, absent any intervention. Making this intervention has the effect of changing U to (1-p)U, so the utility generated in that far future year has been changed by pU.
For this to be tiny/non-meaningful, we either need p to be tiny, or U to be tiny (or both).
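The arithmetic above is simple enough to sketch in a few lines of Python (the numbers are purely illustrative, not estimates of any real p or U):

```python
# Toy model of the argument above: an intervention with immediate
# extinction risk p changes the expected utility U of a far-future
# year to (1 - p) * U, so the counterfactual change is p * U.

def counterfactual_change(p: float, U: float) -> float:
    """Change in expected utility in a far-future year caused by an
    intervention with immediate extinction risk p."""
    return U - (1 - p) * U  # algebraically equal to p * U

# Non-tiny p and non-tiny U give a non-tiny change:
print(counterfactual_change(0.01, 1000.0))   # roughly p * U = 10

# The change is only negligible if p (or U) is tiny:
print(counterfactual_change(1e-15, 1000.0))
```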
Are you saying:
1. There are no conceivable interventions someone could make with p non-tiny.
2. U, the expected utility in a year in 500 years' time, is approximately 0.
3. Something else… my setup of the situation is wrong, or unrealistic?
1, in the sense that I think the change in the immediate risk of human extinction per cost is astronomically low for any conceivable intervention. Relatedly, you may want to check my discussion with Larks in the post I linked to.