I would like to separate out two issues:
1. Is longtermism a crux for our decisions?
2. Should we spend a lot of time talking about longtermist philosophy?
On 1, I probably think it is more crux-y than you do (and especially that it will be in the future). I think there are currently some big "market" inefficiencies where even short-termists don't care as much as idealised versions of their utility functions would. If short-termist institutions start acting more instrumentally rationally, lots of the low-hanging fruit of x-risk reduction interventions will be taken, and longtermists will need to focus on the weirder things that are more specific to our views, e.g. ensuring the future is large and that we don't spread wild animal suffering to the stars. So actually maybe I agree that for now lots of longtermists should focus on x-risks while there are still lots of relatively cheap wins, but I expect this to be a pretty short-lived thing (maybe a few decades?) and that after that longtermism will have a more distinct set of recommendations.
On 2, I also don't want to spend much more time on longtermist philosophy, since I am already so convinced of longtermism that I expect another critique like all the ones we have already had won't move me much. And I agree that better-futures-style work (especially empirically grounded work) seems more promising.
Thanks for commenting!
> So actually maybe I agree that for now lots of longtermists should focus on x-risks while there are still lots of relatively cheap wins, but I expect this to be a pretty short-lived thing (maybe a few decades?) and that after that longtermism will have a more distinct set of recommendations.
Yeah, this seems reasonable to me. Max Nadeau also pointed out something similar to me (longtermism is clearly not a crux for supporting GCR work, but also clearly important for how e.g. OP relatively prioritises x-risk reduction work vs mere GCR reduction work). I should have been clearer that I agree "not necessary for x-risk" doesn't mean "not relevant", and I'm more intending to answer "no" to your (2) than "no" to your (1).
(We might still relatively disagree over your (1) and what your (2) should entail. For example, I'd guess I'm a bit more worried about predicting the effects of our actions than you, and more pessimistic about "general abstract thinking from a longtermist POV" than you are.)