My general intuition is that if there's a strong case that some action today is going to make a huge difference for humanity dozens or hundreds of generations into the future, that case is still going to be pretty strong if we limit our horizon to the next 100 years or so.
I might be misunderstanding you here, so apologies if the rest of this comment is talking past you. But I think the really key point for me is simply that the "larger" and "better" the future would be if we get things right,[1] the more important it is to get things right. (This also requires a few moral assumptions, e.g. that wellbeing matters equally whenever it happens.)
To take it to the extreme, if we knew with certainty that extinction was absolutely guaranteed in 100 years, then that massively reduces the value of reducing extinction risk before that time. At the other extreme, if we knew with certainty that, conditional on reducing AI risk in the next 100 years, the future will last 1 trillion years, contain 1 trillion sentient creatures per year, and that they will all be very happy, free, aesthetically stimulated, having interesting experiences, etc., then that makes reducing AI risk extremely important.
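Just to make the scale of that second extreme concrete, here's a rough back-of-envelope expected-value sketch (the 0.1 percentage-point risk reduction is purely an illustrative assumption of mine, not part of the argument above):

\[
\underbrace{10^{12}\,\text{years}}_{\text{duration}} \times \underbrace{10^{12}\,\text{beings per year}}_{\text{population}} = 10^{24}\,\text{being-years}, \qquad 10^{-3} \times 10^{24} = 10^{21}\,\text{being-years gained in expectation}.
\]

Of course the specific numbers are doing all the work here; the point is just that the expected value of reducing the risk scales directly with how "large" and "good" the future would be if we succeed.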
A similar point applies to negative futures. If there's a non-trivial chance that some risk would result in a net-negative future, then how long that future would last, how many beings would be in it, and how negative it would be for those beings are all relevant to how bad that outcome would be.
Most of the benefits of avoiding extinction or other negative lock-ins accrue more than 100 years from now, whereas (I'd argue) most of the predictable benefits of things like bednet distribution accrue within the next 100 years. So the relative priority of the two broad intervention categories could depend on how "large" and "good" the future would be if we avoid negative lock-ins. And that depends on having at least some guesses about the world more than 100 years from now (though they could be low-confidence and big-picture, rather than anything very confident or precise).[2]
So I guess I'm wondering whether you're uncomfortable with, or inclined to dismiss, even those sorts of low-confidence, big-picture guesses, or just the more confident and precise guesses?
(Btw, I think the paper The Case for Strong Longtermism is very good, and it makes the sort of argument I'm making much more rigorously than I'm making it here, so that could be worth checking out.)
[1] If we're total utilitarians, we could perhaps interpret "larger" and "better" as a matter of how long civilization (or whatever) lasts, how many beings there are per unit of time during that period, and how high their average wellbeing is. But I think the same basic point stands given other precise views and operationalisations.
[2] Put another way, I think I do expect that most things that are top priorities for their impact >100 years from now will also be much better, in terms of their impact in the next 100 years, than random selfish uses of resources would be. (And this will tend to be because the risks might occur in the next 100 years, or because things that help us deal with the risks also help us deal with other things.) But I don't necessarily expect them to be better than things like bednet distribution, which have been selected specifically for their high near-term impact.