tl;dr: I basically agree with everything except "This seems to agree with his criticism", because I think (from memory) that Masrani was making a stronger and less valid claim. (Though I'm not totally sure; it may have just been slightly sloppy writing + the other misconception that longtermism is necessarily solely focused on existential risk reduction.)
---
I think there's a valid claim similar to what Masrani said, and that it could reasonably be seen as a criticism of longtermism given some reasonable moral and/or empirical assumptions. Specifically, I think it's true that:
- The very core of strong longtermism is the idea that the intrinsic importance of the effects of our actions on the long-term future is far greater than the intrinsic importance of their effects on the near-term, and thus that we should focus on how our actions affect the long-term (or, in other words, the near-term effects we should aim for are whichever ones are best for the long-term)
- It seems very likely that what's best for the long-term isn't what's the very best for the near-term
- It seems plausible that what's best for the long-term is actually net-negative for the near-term
- This means acting according to strong longtermism will likely be worse for the near-term than acting according to (EA-style) neartermism, and might be net-negative for the near-term
- Various historical cases suggest that "ends justify the means" reasoning and attempts to enact grand, long-term visions often have net negative effects
  - (Though I'm not actually sure how often they had net negative effects vs net positive effects, how this differs from other types of reasoning and planning, and how analogous those cases are to longtermist efforts in relevant ways)
  - But this might suggest that, in practice, strong longtermism is more likely to be bad for the near-term than it would be in theory
I would mostly "bite the bullet" on this critique: say that we can't prioritise everything at once, and that, if the case for strong longtermism holds up, it's appropriate that we prioritise the long-term at the expense of the short-term. And then I do think we should remain vigilant of ways our thinking, priorities, actions, etc. could mirror bad instances of "ends justify the means" reasoning and the like.
But I could understand someone else being more worried about this objection.
Also, FWIW, I think the Greaves and MacAskill paper maybe fails to acknowledge that actions recommended by strong longtermism might be very strange or net-negative from a near-term perspective, rather than just not top priorities. (Though maybe I just forgot where they said this.) I made a related comment here.
---
We could steelman Masrani into making the above sorts of claims and then have a productive discussion. But I think it's also useful to sometimes just talk about what someone actually said and correct things that are actually misleading or common misconceptions. And I think Masrani was making a stronger claim (though I'm now unsure, as mentioned at the top), which I also think some other people actually believe and which seems like a misconception worth correcting (see also). (To be fair, I think Greaves & MacAskill could maybe have been more careful with some phrasings to avoid people forming this misconception.)
E.g. Masrani writes:
> The recent working paper by Hilary Greaves and William MacAskill puts forth the case for strong longtermism, a philosophy which says one should simply ignore the consequences of one's actions if they take place over the "short term" timescale of 100 to 1000 years
And:
> To reiterate, longtermism gives us permission to completely ignore the consequences of our actions over the next one thousand years, provided we don't personally believe these actions will rise to the level of existential threats.
And:
> longtermism encourages us to treat our fellow brothers and sisters with careless disregard for the next one thousand years, forever.
(But again, I now realise that this might have just been slightly sloppy writing + the x-risk misconception, and also that Greaves & MacAskill may have been slightly sloppy with some phrases in a way that contributed to this. So I think this point isn't especially important as a critique of the post.
Though I guess my original statement still seems appropriately hedged: "At least in some places, Masrani seems to think or imply that longtermism doesn't aim to influence any events that occur in the next (say) 1000 years." [emphasis added])
---
I think we basically agree.
And while I agree that it's sometimes useful to respond to what was actually said, rather than to the best possible claims, that type of post is useful as a public response rather than for discussion of the ideas. Given that the forum is for discussion about EA and EA ideas, I'd prefer to use steelman arguments where possible, to better understand the questions at hand.