This seems to agree with his criticism—that we care about the near term only insofar as it affects the long term, and can therefore justify ignoring even negative short-term consequences of our actions if doing so leads to future benefits. It argues even more strongly for abandoning interventions that are beneficial in the short term but have only small longer-term impacts.
Obvious examples of how this goes wrong include many economic planning projects of the 20th century, where the short-term damage to communities, cities, and livelihoods was justified by incorrect claims about long-term growth.
tl;dr: I basically agree with everything except “This seems to agree with his criticism”, because I think (from memory) that Masrani was making a stronger and less valid claim. (Though I’m not totally sure; it may have just been slightly sloppy writing + the other misconception that longtermism is necessarily solely focused on existential risk reduction.)
---
I think there’s a valid claim similar to what Masrani said, and that it could reasonably be seen as a criticism of longtermism given some reasonable moral and/or empirical assumptions. Specifically, I think it’s true that:
The very core of strong longtermism is the idea that the intrinsic importance of the effects of our actions on the long-term future is far greater than the intrinsic importance of the effects of our actions on the near term, and thus that we should focus on how our actions affect the long term (or, in other words, the near-term effects we should aim for are whichever ones are best for the long term)
It seems very likely that what’s best for the long term isn’t also the very best thing for the near term
It seems plausible that what’s best for the long-term is actually net-negative for the near-term
This means acting according to strong longtermism will likely be worse for the near-term than acting according to (EA-style) neartermism, and might be net-negative for the near-term
Various historical cases suggest that “ends justify the means” reasoning and attempts to enact grand, long-term visions often have net negative effects
(Though I’m not actually sure how often they had net negative effects vs net positive effects, how this differs from other types of reasoning and planning, or how analogous those cases are to longtermist efforts in relevant ways)
But this might suggest that, in practice, strong longtermism is more likely to be bad for the near-term than it should be in theory
I would mostly “bite the bullet” of this critique—i.e., say that we can’t prioritise everything at once, and if the case for strong longtermism holds up then it’s appropriate that we prioritise the long-term at the expense of the short-term. And then I do think we should remain vigilant of ways our thinking, priorities, actions, etc. could mirror bad instances of “ends justify the means” etc.
But I could understand someone else being more worried about this objection.
Also, FWIW, I think the Greaves and MacAskill paper maybe fails to acknowledge that the actions strong longtermism recommends might be very strange or net-negative from a near-term perspective, rather than just not top priorities. (Though maybe I just forgot where they said this.) I made a related comment here.
---
We could steelman Masrani into making the above sorts of claims and then have a productive discussion. But I think it’s also useful to sometimes just talk about what someone actually said and correct things that are actually misleading or common misconceptions. And I think Masrani was making a stronger claim (though I’m now unsure, as mentioned at the top), which I also think some other people actually believe and which seems like a misconception worth correcting (see also). (To be fair, I think Greaves & MacAskill could maybe have been more careful with some phrasings to avoid people forming this misconception.)
E.g. Masrani writes:
The recent working paper by Hilary Greaves and William MacAskill puts forth the case for strong longtermism, a philosophy which says one should simply ignore the consequences of one’s actions if they take place over the “short term” timescale of 100 to 1000 years
And:
To reiterate, longtermism gives us permission to completely ignore the consequences of our actions over the next one thousand years, provided we don’t personally believe these actions will rise to the level of existential threats.
And:
longtermism encourages us to treat our fellow brothers and sisters with careless disregard for the next one thousand years, forever.
(But again, I now realise that this might have just been slightly sloppy writing + the x-risk misconception, and also that Greaves & MacAskill may have been slightly sloppy with some phrases as well in a way that contributed to this. So I think this point isn’t especially important as a critique of the post.
Though I guess my original statement still seems appropriately hedged: “At least in some places, Masrani seems to think or imply that longtermism doesn’t aim to influence any events that occur in the next (say) 1000 years.” [emphasis added])
I think we basically agree.

And while I agree that it’s sometimes useful to respond to what was actually said, rather than the best possible claims, that type of post is useful as a public response, rather than for discussing the ideas. Given that the forum is for discussion about EA and EA ideas, I’d prefer to use steelman arguments where possible to better understand the questions at hand.