No, we probably don’t. All of our actions plausibly affect the long-term future in some way, and it is difficult to justifiably achieve very high levels of confidence about the expected long-term impacts of specific actions. We would require an exceptional degree of confidence to claim that the long-term effects of our specific longtermist intervention are astronomically larger (i.e. by many orders of magnitude) than the long-term effects of some random neartermist intervention (or even of doing nothing at all). Of course, this is perfectly compatible with longtermist interventions being a few orders of magnitude more impactful in expectation than neartermist ones; the difference is just most likely not astronomical.
Brian Tomasik eloquently discusses this specific question in the above-linked essay. Note that while his essay focuses on charities, the same points likely apply to interventions and causes:
Occasionally there are even claims [among effective altruists] to the effect that “shaping the far future is 10^30 times more important than working on present-day issues,” based on a naive comparison of the number of lives that exist now to the number that might exist in the future.
I think charities do differ a lot in expected effectiveness. Some might be 5, 10, maybe even 100 times more valuable than others. Some are negative in value by similar amounts. But when we start getting into claimed differences of thousands of times, especially within a given charitable cause area, I become more skeptical. And differences of 10^30 are almost impossible, because everything we do now may affect the whole far future and therefore has nontrivial expected impact on vast numbers of lives.
It would require razor-thin exactness to keep the expected impact on the future of one set of actions 10^30 times lower than the expected impact of some other set of actions. (…) Note that these are arguments about ex ante expected value, not necessarily actual impact. (…) Suggesting that one charity is astronomically more important than another assumes a model in which cross-pollination effects are negligible.
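To see why this “razor-thin exactness” point bites, here is a toy calculation; every number in it is an invented assumption for illustration, not an estimate:

```python
# Toy calculation: why a 10^30 ratio in expected impact would require
# near-perfect cancellation. All numbers are invented assumptions.
future_lives = 1e40       # assumed expected number of future lives at stake
targeted_shift = 1e-10    # assumed net probability shift from a targeted intervention

ev_targeted = future_lives * targeted_shift   # 1e30 expected lives

# For some other action to matter 10^30 times less in expectation, its net
# long-run probability shift would have to cancel to within ~1e-40 of zero:
max_other_shift = targeted_shift / 1e30       # 1e-40
ev_other = future_lives * max_other_shift     # 1.0 expected life

print(f"{ev_targeted:.0e} vs. at most {ev_other:.0e}")
```

On these made-up numbers, defending the 10^30 ratio means being confident that the random action’s positive and negative long-run effects cancel to within about 10^-40 in net probability shift, which is exactly the exactness Tomasik describes.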
Brian Tomasik further elaborates on similar points in a second essay, Charity Cost-Effectiveness in an Uncertain World. A relevant quote:

When we consider flow-through effects of our actions, the seemingly vast gaps in cost-effectiveness among charities are humbled to more modest differences, and we begin to find more worth in the diversity of activities that different people are pursuing.
Phil Trammell’s point in Which World Gets Saved is also relevant:

It seems to me that there is another important consideration which complicates the case for x-risk reduction efforts, which people currently neglect. The consideration is that, even if we think the value of the future is positive and large, the value of the future conditional on the fact that we marginally averted a given x-risk may not be.
...
Once we start thinking along these lines, we open various cans of worms. If our x-risk reduction effort starts far “upstream”, e.g. with an effort to make people more cooperative and peace-loving in general, to what extent should we take the success of the intermediate steps (which must succeed for the x-risk reduction effort to succeed) as evidence that the saved world would go on to a great future? Should we incorporate the fact of our own choice to pursue x-risk reduction itself into our estimate of the expected value of the future, as recommended by evidential decision theory, or should we exclude it, as recommended by causal? How should we generate all these conditional expected values, anyway?
Some of these questions may be worth the time to answer carefully, and some may not. My goal here is just to raise the broad conditional-value consideration which, though obvious once stated, so far seems to have received too little attention. (For reference: on discussing this consideration with Will MacAskill and Toby Ord, both said that they had not thought of it, and thought that it was a good point.) In short, “The utilitarian imperative ‘Maximize expected aggregate utility!’” might not really, as Bostrom (2002) puts it, “be simplified to the maxim ‘Minimize existential risk’”.
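A minimal numeric sketch of this conditional-value point, with all quantities invented purely for illustration:

```python
# Toy model of the "which world gets saved" point. All numbers are invented.
# Worlds are either "cooperative" (long-run value +100) or "uncooperative"
# (long-run value -50), and we assume our x-risk effort is only pivotal --
# i.e. actually averts the catastrophe -- in uncooperative worlds.
p_cooperative = 0.9
v_cooperative, v_uncooperative = 100.0, -50.0

# Unconditionally, the future looks positive and large:
ev_future = p_cooperative * v_cooperative + (1 - p_cooperative) * v_uncooperative  # 85.0

# But conditional on our effort having been pivotal, the expectation is negative:
ev_given_pivotal = v_uncooperative  # -50.0

print(ev_future, ev_given_pivotal)
```

The unconditional expectation is positive, yet the expectation conditional on our marginal effort mattering is negative, because in this toy setup pivotality correlates with the worlds that have the worst underlying dynamics.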
For the record I’m not really sure about 10^30 times, but I’m open to 1000s of times.
And differences of 10^30 are almost impossible, because everything we do now may affect the whole far future and therefore has nontrivial expected impact on vast numbers of lives.
Pretty much every action has an expected impact on the future, in that we know it will radically alter the future, e.g. by altering the timing of conceptions and therefore who lives in the future. But that doesn’t necessarily mean we have any idea of the magnitude or sign of this expected impact. When it comes to giving to the Against Malaria Foundation, for example, I have virtually no idea what the expected long-run impacts are, or whether they would even be positive or negative—I’m just clueless. I also have no idea what the flow-through effects of giving to AMF are on existential risks.
If I’m utterly clueless about giving to AMF but I think giving to an AI research org has an expected value of 10^30 then in a sense my expected value of giving to the AI org is astronomically greater than giving to AMF (although it’s sort of like comparing 10^30 to undefined so it does get a bit weird...).

Does that make any sense?
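One toy way to make the “comparing 10^30 to undefined” weirdness explicit is to model cluelessness as a sign-uncertain interval rather than a point estimate; the bounds below are made up purely for illustration:

```python
# Toy model: cluelessness as a sign-uncertain interval (made-up bounds).
amf_longrun_ev = (-1e35, 1e35)   # no handle on magnitude or sign of AMF's long-run EV
ai_org_ev = 1e30                 # the comment's assumed point estimate for the AI org

# The interval straddles zero, so the ratio of the two has no defined sign or
# magnitude: the astronomical-ratio claim cannot even be stated, rather than
# being settled in either direction.
lo, hi = amf_longrun_ev
print(lo < 0 < hi)   # True: the comparison is undefined, not merely uncertain
```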
Really? This surprises me. Combine (i) with the belief that we can tractably influence the far future, and don’t we pretty much get to (ii)?