By “longtermism” I mean the view that the most important determinant of the value of our actions today is how those actions affect the very long-run future.
I want to clarify my thoughts around longtermism as an idea, and to better understand why some aspects of how it is used within EA make me uncomfortable despite my general support for it.
I’m doing a literature search, but because this is primarily a concept I know from within EA, I’m mostly familiar with work by advocates of the position (e.g. Nick Beckstead). I’d also like to understand the leading challenges and critiques of the position (if any). I know of some from within the EA community (Kaufmann), but not what the state of the discussion is in academic work or outside the EA community.
Thanks!
“The Epistemic Challenge to Longtermism” by Christian Tarsney is perhaps my favorite paper on the topic.
“How the Simulation Argument Dampens Future Fanaticism” by Brian Tomasik has also influenced my thinking, but it has a narrower focus.
Finally, you can also conceive of yourself as one instantiation of a decision algorithm that probably has close analogs at different points throughout time, which makes Caspar Oesterheld’s work relevant to the topic. There are a few summaries linked from that page. I think it’s an extremely important contribution but a bit tangential to your question.
My essay on consequentialist cluelessness is also about this: What consequences?
Thanks! The top paper seems very relevant in particular.
This is not exactly what you’re looking for, but the best summary of objections I’m aware of is from the Strong Longtermism paper by Greaves and MacAskill.
Thanks! I’ve read summaries of this but hadn’t twigged that it had been developed into a full paper.
Most people don’t value not-yet-existing people as much as people already alive. I think it is the EA community holding the fringe position here, not the other way around. Nor is total utilitarianism a majority view among philosophers. (You might want to look into critiques of utilitarianism.)
If you pair this value judgement with the belief that existential risk is less effective to work on than other issues in terms of affecting people this century, you will probably want to work on “non-longtermist” problems.
I don’t think longtermism depends on either (i) valuing future people equally to presently alive people or (ii) total utilitarianism (or utilitarianism in general), so I don’t think these are strong counterarguments unless further fleshed out. Instead, longtermism depends on something much more general, like ‘whatever is of value, there could be a lot more of it in the future’.
[Not primarily a criticism of your comment, I think you probably agree with a lot of what I say here.]
Yes, but in addition your view in normative ethics needs to have suitable features, such as:
A sufficiently aggregative axiology. Else the belief that there will be much more of all kinds of stuff in the future won’t imply that the overall goodness of the world mostly hinges on its long-term future. For example, if you think total value is a bounded function of whatever the sources of value are (e.g. more happy people are good up to a total of 10 people, but additional people add nothing), longtermism may not go through.
[Only for ‘deontic longtermism’:] A sufficiently prominent role of beneficence, i.e. ‘doing what has the best axiological consequences’, in the normative principles that determine what you ought to do. For example, if you think that keeping some implicit social contract with people in your country trumps beneficence, longtermism may not go through.
(Examples are to illustrate the point, not to suggest they are plausible views.)
I’m concerned that some presentations of “non-consequentialist” reasons for longtermism sweep under the rug an important difference: the actual longtermist claim is that improving the long-term future is of particular concern relative to other goals, whereas the weaker claim is only that improving or preserving the long-term future is one ethical consideration among many, with how they trade off against each other left underdetermined.
So for example, sure, if we don’t prevent extinction we are uncooperative toward previous generations because we frustrate their ‘grand project of humanity’. That might be a good, non-consequentialist reason to prevent extinction. But without specifying the full normative view, it is really unclear how much to focus on this relative to other responsibilities.
Note that I actually do think that something like longtermist practical priorities follow from many plausible normative views, including non-consequentialist ones. Especially if one believes in a significant risk of human extinction this century. But the space of such views is vast, and which views are and aren’t plausible is contentious. So I think it’s important to not present longtermism as an obvious slam dunk, or to only consider (arguably implausible) objections that completely deny the ethical relevance of the long-term future.
That’s very fair, I should have been a lot more specific in my original comment. I have been a bit disappointed that within EA longtermism is so often framed in utilitarian terms. I have found the collection of moral arguments for protecting the long-term future put forward in The Precipice a lot more compelling, and I wish they would come up more frequently.
I agree!
I also like the arguments in The Precipice. But per my above comment, I’m not sure if they are arguments for longtermism, strictly speaking. As far as I recall, The Precipice argues for something like “preventing existential risk is among our most important moral concerns”. This is consistent with, but neither implied nor required by longtermism: if you e.g. thought that there are 10 other moral concerns of similar weight, and you choose to mostly focus on those, I don’t think your view is ‘longtermist’ even in the weak sense. This is similar to how someone who thinks that protecting the environment is somewhat important but doesn’t focus on this concern would not be called an environmentalist.
Yes, I agree with that too—see my comments later in the thread. I think it would be great to be clearer that the arguments for xrisk and longtermism are separate (and neither depends on utilitarianism).