Thanks for saying a bit more about how you’re interpreting “scope of longtermism”. To be as concrete as possible, what I’m assuming is that we both read Thorstad as saying “a philanthropist giving money away so as to maximize the good from a classical utilitarian perspective” is typically outside the scope of decision-situations that are longtermist, but let me know if you read him differently on that. (I think it’s helpful to focus on this case because it’s simple, and it’s the one G&M most clearly argue is longtermist on the basis of those two premises.)
It’s a tautology that G&M’s conclusion (that the above decision-situation is longtermist) follows from their premises, and no, I wouldn’t expect a paper disputing that conclusion to argue against the tautology. I would expect it to argue, directly or indirectly, against the premises. And you’ve done just that: you’ve offered two perfectly reasonable arguments for why G&M’s premise (ii) might be false, i.e. that giving to PS/B612F might not actually do 2x as much good in the long term as the GiveWell charity does in the short term. (1) In footnote 2, you point out that the chance of near-term x-risk from AI may be very high. (2) You say that the funding needs of asteroid monitoring sufficient to alert us to an impending catastrophe are plausibly already met. You also suggest in footnote 3 that NGOs might do a worse job of it than the government.
I won’t argue against any of these possibilities, since the topic of this particular comment thread is not how strong the case for longtermism is all things considered, but whether Thorstad’s “Scope of LTism” successfully responds to G&M’s argument. I really don’t think there’s much more to say. If there’s a place in “Scope of LTism” where Thorstad argues against premise (i) or (ii), as you’ve done here, I’m still not seeing it.