By “scope of longtermism” I took Thorstad’s reference to “class of decision situations” to mean the permutations of objectives to be evaluated (maximising welfare, maximising human proliferation, minimising suffering etc) rather than categories of basic actions (spending, voting, selecting clothing).[1] I’m not actually sure it makes a difference to my interpretation of the thrust of his argument (diminution, washing out and unawareness mean that solutions whose far-future impact swamps short-term benefits are vanishingly rare and generally unknowable) either way.
Sure, Thorstad absolutely starts off by conceding that, under certain assumptions about the long-term future,[2] a low-probability but robustly positive action like preparing to stop asteroids from hitting the earth, which indirectly enables benefits to accrue over the very long term, can be a valid priority.[3] But it doesn’t follow that one should prioritise the long-term future in every decision-making situation in which money is given away. The funding needs of asteroid monitoring sufficient to alert us to impending catastrophe are plausibly already met,[4] and his core argument is that we’re otherwise almost always clueless about what the [near] best solution for the long-term future is. It’s not a particularly good heuristic to focus spending on outcomes you are most likely to be clueless about, and a standard approach to the accumulation of uncertainty is to discount for it, which of course privileges the short term.
I mean, I agree that Thorstad makes no dent in arguments to the effect that if there is an action which leads to positive utility sustained over a very long period of time for a very large number of people, it will result in very high utility relative to actions which don’t have that impact: I’m not sure that argument is even falsifiable within a total utilitarian framework.[5] But I don’t think his intention is to argue with [near] tautologies, so much as to insist that the set of decisions which credibly result in robustly positive long-term impact is small enough to usually be irrelevant.
Some of the implicit assumptions behind the salience of asteroid x-risk aren’t robust: if AI doomers are right, then that massive positive future we’re trying to protect looks a lot smaller. On the other hand, compared with almost any other x-risk scenario, asteroids are straightforward: we don’t have to factor in the possibility of asteroids becoming sneaky in response to our monitoring them, or attach much weight to the idea that informing people about asteroids will motivate them to try harder to make one hit the earth.
You correctly point out that his choice of asteroid monitoring service is different from Greaves and MacAskill’s. I assume he does so partly to steelman the original, as the counterfactual impact of a government agency incubating the first large-scale asteroid monitoring programme is more robust than that of a marginal donation to NGOs providing additional analysis. And he doesn’t make this point, but I doubt the arguments that decided its funding actually depended on the very long term anyway...
Likewise, pretty much anyone familiar with total utilitarianism can conceive of a credible scenario in which the highest-total-utility outcome would be to murder a particular individual (baby Hitler etc), and I don’t think it would be credible to insist such a situation could never occur or never be known. This would not, however, fatally weaken arguments against the principle of “murderism” that focused on doubting there were many decision situations where murder should be considered a priority.
Thanks for saying a bit more about how you’re interpreting “scope of longtermism”. To be as concrete as possible, what I’m assuming is that we both read Thorstad as saying “a philanthropist giving money away so as to maximize the good from a classical utilitarian perspective” is typically outside the scope of decision-situations that are longtermist, but let me know if you read him differently on that. (I think it’s helpful to focus on this case because it’s simple, and the one G&M most clearly argue is longtermist on the basis of those two premises.)
It’s a tautology that the G&M conclusion that the above decision-situation is longtermist follows from the premises, and no, I wouldn’t expect a paper disputing the conclusion to argue against this tautology. I would expect it to argue, directly or indirectly, against the premises. And you’ve done just that: you’ve offered two perfectly reasonable arguments for why the G&M premise (ii) might be false, i.e. giving to PS/B612F might not actually do 2x as much good in the long term as the GiveWell charity in the short term. (1) In footnote 2, you point out that the chance of near-term x-risk from AI may be very high. (2) You say that the funding needs of asteroid monitoring sufficient to alert us to impending catastrophe are plausibly already met. You also suggest in footnote 3 that maybe NGOs will do a worse job of it than the government.
I won’t argue against any of these possibilities, since the topic of this particular comment thread is not how strong the case for longtermism is all things considered, but whether Thorstad’s “Scope of LTism” successfully responds to G&M’s argument. I really don’t think there’s much more to say. If there’s a place in “Scope of LTism” where Thorstad offers an argument against (i) or (ii), as you’ve done, I’m still not seeing it.
all of which can be reframed in terms of “making money available to spend on priorities” in classic “hardcore EA” style anyway...
this is possibly another reason for his choice of asteroid monitoring service...