Phil Torres’s tendency to misrepresent things aside, I think we need to take his article as an example of the severe criticism that longtermism, as currently framed, is liable to attract, and reflect on how we can present it differently. It’s not hard to read this sentence on the first page of (EDIT: the original version of) “The Case for Strong Longtermism”:
The idea, then, is that for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years, focussing primarily on the further-future effects. Short-run effects act as little more than tie-breakers.
and conclude, as Phil Torres does, that longtermism means we can justify causing present-day atrocities for a slight (say, 0.1%) increase in the subjective probability of a valuable long-term future. Thinking rationally, atrocities do not improve the long-term future, and longtermists care a lot about stability. But with the framing given by “The Case for Strong Longtermism”, there is a small but avoidable risk that future longtermists could be persuaded that atrocities are justified, especially when subjective probabilities are so subjective. How can we reframe or redefine longtermism so that we: firstly, reduce the risk of longtermism being used to justify atrocities, and secondly (and I think more pressingly), reduce the risk that longtermism is widely seen as something that justifies atrocities?
It seems like this framing of longtermism is a far greater reputational risk to EA than, say, how 80,000 Hours over-emphasized earning to give, which is something that 80,000 Hours apparently seriously regrets. I think “The Case for Strong Longtermism” should be revised so that it does not say things like “we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years” without detailing significant caveats. It’s just a working paper, so it shouldn’t be too hard for Greaves and MacAskill to revise. (EDIT: this has already happened, as Aleks_K has pointed out below.) If many more articles like Phil Torres’s are written in other media in the near future, I would be very hesitant about using the term “longtermism”. Phil Torres is someone who is sympathetic to effective altruism and to existential risk reduction, someone who believes “you ought to care equally about people no matter when they exist”; now imagine if the article were written by someone who isn’t as sympathetic to EA.
(This really shouldn’t affect my argument, but I do generally agree with longtermism.)
I think “The Case for Strong Longtermism” should be revised to not say things like “we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years”, without detailing significant caveats.
FYI, this has already happened. The version you are linking to is outdated, and the updated version here no longer contains this statement.