[Not primarily a criticism of your comment, I think you probably agree with a lot of what I say here.]
Instead it depends on something much more general like ‘whatever is of value, there could be a lot more of it in the future’.
Yes, but in addition your view in normative ethics needs to have suitable features, such as:
A sufficiently aggregative axiology. Otherwise, the belief that there will be much more of all kinds of stuff in the future won’t imply that the overall goodness of the world mostly hinges on its long-term future. For example, if you think total value is a bounded function of whatever the sources of value are (e.g. more happy people are good up to a total of 10 people, but additional people add nothing), longtermism may not go through.
[Only for ‘deontic longtermism’:] A sufficiently prominent role of beneficence, i.e. ‘doing what has the best axiological consequences’, in the normative principles that determine what you ought to do. For example, if you think that keeping some implicit social contract with people in your country trumps beneficence, longtermism may not go through.
(Examples are to illustrate the point, not to suggest they are plausible views.)
I’m concerned that some presentations of “non-consequentialist” reasons for longtermism sweep under the rug an important difference: between the actual longtermist claim, that improving the long-term future is of particular concern relative to other goals, and the weaker claim, that improving or preserving the long-term future is merely one ethical consideration among many, with how these considerations trade off against each other left underdetermined.
So for example, sure, if we don’t prevent extinction we are uncooperative toward previous generations because we frustrate their ‘grand project of humanity’. That might be a good, non-consequentialist reason to prevent extinction. But without specifying the full normative view, it is really unclear how much to focus on this relative to other responsibilities.
Note that I actually do think that something like longtermist practical priorities follow from many plausible normative views, including non-consequentialist ones, especially if one believes in a significant risk of human extinction this century. But the space of such views is vast, and which views are and aren’t plausible is contentious. So I think it’s important not to present longtermism as an obvious slam dunk, and not to consider only (arguably implausible) objections that deny the ethical relevance of the long-term future altogether.