Far-future effects are the most important determinant of what we ought to do
I agree with a lot of the skepticism in the comments, but (to me) it seems like there are not enough people on the strong longtermist side.
A couple of points responding to some of the comments:
You should have some non-trivial probability in the time of perils hypothesis / lock-in (perhaps in large part because AI might be a big deal) -- the idea that we're living in a period where the chance of existential risk is particularly high, but that if we get past it the rate of x-risk will go down indefinitely (or at least for a very long while). This is plausible because, as Thorstad points out, increasing uncertainty as time goes on makes the x-risk rate regress to the mean, and the mean is quite plausibly low. If this is true, you don't need to make many claims about the far future in order to have a massive amount of impact on it.
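To make the expected-value logic behind this concrete, here is a minimal sketch with purely illustrative numbers (the per-century risk rates are assumptions I'm inventing for the example, not estimates from this thread or the literature). It compares how much a one-percentage-point reduction in this century's extinction risk is worth under a constant-risk profile versus a "time of perils" profile where risk drops sharply after the current century.

```python
# Minimal sketch, not a model: every number here is an illustrative assumption.

def expected_centuries(risk_per_century, horizon=10_000):
    """Expected number of future centuries survived, where
    risk_per_century(t) is the extinction risk in century t."""
    survival = 1.0
    total = 0.0
    for t in range(horizon):
        survival *= 1.0 - risk_per_century(t)
        total += survival
    return total

# Constant risk: 10% per century forever (assumed).
constant = expected_centuries(lambda t: 0.10)
constant_safer = expected_centuries(lambda t: 0.09 if t == 0 else 0.10)

# Time of perils: 10% this century, then 0.01% per century afterwards (assumed).
perils = expected_centuries(lambda t: 0.10 if t == 0 else 0.0001)
perils_safer = expected_centuries(lambda t: 0.09 if t == 0 else 0.0001)

print(f"constant risk:  gain from 1pp less risk now ≈ {constant_safer - constant:.1f} expected centuries")
print(f"time of perils: gain from 1pp less risk now ≈ {perils_safer - perils:.1f} expected centuries")
```

The exact numbers don't matter; the point is that under the time-of-perils profile almost all of the expected future is downstream of getting through the current high-risk period, so near-term risk reduction inherits most of that value.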
A lot of people refer to Pascal's mugging or fanaticism here, which I don't usually think is correct. (Unless we reject Pascal's mugging for ambiguity-aversion reasons, which I am uncertain about but probably don't.) The probabilities that people usually put on longtermism are nowhere near the kind of bets we shouldn't take if we're against fanaticism, because we accept similarly low probabilities all the time -- for instance, keeping fire extinguishers, wearing seatbelts, maybe most clinical trials. Unless you assign a significantly lower probability than that, invoking Pascal's mugging feels a bit overly pessimistic about our ability to affect things like this. Also (and this is a cheeky move), if you just have some non-mugging-level probability that this claim is correct, you probably still get the far future being most important without a mugging.
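As a purely illustrative comparison (every probability and payoff below is an assumption made up for the sketch, not a figure from this thread), the point is that everyday precautions already involve accepting probabilities in the same rough range that people typically assign to longtermist interventions mattering, whereas a genuine Pascal's mugging sits many orders of magnitude lower:

```python
# Illustrative only: every probability and payoff here is an assumption.
bets = {
    # name: (probability the action matters, value if it does, cost)
    "fire extinguisher (per year)":     (3e-3, 300_000, 50),
    "seatbelt on one long drive":       (1e-5, 5_000_000, 0.01),
    "longtermist project (one guess)":  (1e-4, 1e12, 1e6),
    "classic Pascal's mugging":         (1e-50, 1e30, 10),
}

for name, (p, value, cost) in bets.items():
    ev = p * value - cost
    print(f"{name:35s} p={p:.0e}  EV ≈ {ev:,.0f}")
```

The contrast the comment is drawing is between the last row and the rest: rejecting the last row as a mugging doesn't commit you to rejecting the others.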
On the other hand, one point against that I don't think was brought up:
In the XPT, the superforecaster median prediction was that there will only ever exist 500 billion humans -- nowhere near as many as, say, the Bostrom or Newberry numbers -- which may make the cost and tractability concerns such that the far future is not as important in expectation as, say, affecting very large numbers of shrimp or insects now (to be fair, the 95th-percentile superforecaster was at 100 trillion, so maybe the uncertainty becomes fairly asymmetrical quickly, though).
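To see why the headline population estimate does so much work here, a back-of-the-envelope sketch: the probability of influencing the far future and the per-life value below are placeholder assumptions, and the "astronomical" figure is a stand-in for Bostrom/Newberry-style estimates rather than their actual published numbers; only the 500 billion and 100 trillion figures come from the XPT point above.

```python
# Back-of-the-envelope sketch; probabilities and values are assumptions.
p_influence = 1e-6      # assumed chance a given effort improves the far future
value_per_life = 1.0    # arbitrary units per future life made better

estimates = {
    "XPT superforecaster median":     5e11,  # 500 billion (from the thread)
    "XPT 95th percentile":            1e14,  # 100 trillion (from the thread)
    "astronomical (stand-in figure)": 1e24,  # placeholder, not a published number
}

for name, n_future_people in estimates.items():
    ev = p_influence * n_future_people * value_per_life
    print(f"{name:32s} EV ≈ {ev:.2e}")
```

With the median estimate the expected value is large but conceivably within range of near-term interventions aimed at very large numbers of animals; with the astronomical stand-in it dominates by many orders of magnitude, which is why the choice of estimate matters for the tractability comparison above.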
Point 1 in favour reads very much like "focus on near-future benefits because this will (most likely) bring far-future benefits", which is in practice indistinguishable from just "focus on near-future benefits". Plus, the assumption -- that improving the near future will most likely improve the far future, which I also tend to believe -- is far from certain (you acknowledge this). With this reasoning, the underlying reason to do X is, technically, improving the far future. But the actual effect of X is improving the near future with much higher certainty than improving the far future; everything is aligned. This is not, as far as I understand, the point of the question. Who wouldn't do X? Answering under this scenario doesn't give any information.
Consider a different scenario: doing X will make the near future worse with much higher certainty than it makes the far future better (assume the same value-generation magnitudes as before, just make the value for the near future negative). Would you then advocate doing X? I think this gives the information the question is asking for.
I would overwhelmingly most likely not do X in that scenario, because I know how dumb we are (I am) and how complex reality is, so the longer the time frame, the less I trust any reasoning (cluelessness). [It is difficult enough to find out whether an action taken has actually been net positive for the immediate future!] Would you? If your answer tends to "no", then far-future effects are not the most important determinant of what we ought to do, for you.
It depends on the case, but there are definitely cases where I would.
Also, while you make a good point that these can sometimes converge, I think the priority of concerns is extremely different under short-termism vs longtermism, which I see as the important part of "most important determinant of what we ought to do". Setting aside mugging and risk aversion/robustness, some very small or even merely directional shift could make something hold the vast majority of your moral weight, as opposed to before, where its impact might not have been that big or would have been outweighed by a lack of neglectedness or tractability.
P.S. If one (myself included) failed to do X -- given that longtermism would shift one's priorities but wouldn't change what one does in light of the short-term damage -- I think that would say less about one's actual beliefs and more about one's intuitive disgust towards means-ends reasoning. But this is just a hunch and is somewhat based on my own introspection (to be fair, sometimes this reluctance comes from moral uncertainty or reputational concerns, which should be part of the reasoning, and that is to your point).
"It depends on the case, but there are definitely cases where I would."
Then it definitely fits with your vote. I just meant that the fact that you (and I) tend to think that making the near future better will also make the far future better shouldn't influence the answer to this question.
We just disagree on how confident we are in our assessments of how our actions will affect the far future. And probably this is because of our age ;-)