If I understand it, the project is something like “how do your priorities differ if you focus on reducing bad things over promoting good things?”
This sounds accurate, but I was thinking of it with empirical cause prioritization already factored in. For instance, while a view like classical utilitarianism can be called “symmetrical” when it comes to normatively prioritizing good things and bad things (always with some element of arbitrariness because there are no “proper units” of happiness and suffering), in practice the view turns out to be upside-focused because, given our empirical situation, there is more room for creating happiness/good things than there is future expected suffering left to prevent. (Cf. the astronomical waste argument.)
This would go the other way if we had good reason to believe that the future will be very bad, but I think the classical utilitarians who are optimistic about the future (given their values) are right to be optimistic: If you count the creation of extreme happiness as not-a-lot-less important than the prevention of extreme suffering, then the future will in expectation be very valuable according to your values (see footnote [3]).
but I don’t see how you can go on to draw any conclusions about that, because downside (as well as upside) morality covers so many different things.
My thinking is that when it comes to interventions that affect the long-term future, different normative views tend to converge roughly into two large clusters in terms of the object-level interventions they recommend. If the future will be good for your value system, reducing extinction risks and existential risk related to “not realizing full potential” will be most important. If your value system makes it harder to attain vast amounts of positive value through bringing about large (in terms of time and/or space) utopian futures, then you want to focus specifically on (cooperative ways of) reducing suffering risks or downside risks generally. The cut-off point is determined by what the epistemically proper degree of optimism or pessimism is with regard to the quality of the long-term future, and by the extent to which we can have an impact on that. Meaning, if we had reason to believe that the future will be very negative and that efforts to make the future contain vast amounts of happiness are extremely unlikely to ever work, then even classical utilitarianism would count as “downside-focused” according to my classification.
Some normative views simply don’t place much importance on creating new happy people, in which case they kind of come out as downside-focused by default (except for the consideration I mention in footnote 2). (If these views give a lot of weight to currently existing people, then they can be both downside-focused and give high priority to averting extinction risks, which is something I pointed out in the third-to-last paragraph of the section on extinction risks.)
Out of the five examples you mentioned, I’d say they fall into the two clusters as follows:
Downside-focused: absolute NU, lexical NU, lexical threshold NU, and a “negative-leaning” utilitarianism that is sufficiently negative-leaning to counteract our empirical assessment of how much easier it will be to create happiness than to prevent suffering. The rest are upside-focused (maybe with some stuck at “could go either way”). How much is “sufficiently negative-leaning”? It becomes tricky because there are not really any “proper units” of happiness and suffering, so we first have to specify what we are comparing. See footnote 3: My own view is that the cut-off is maybe very roughly at around 100, but I mentioned “100 or maybe 1,000” to be on the conservative side. These numbers refer to comparing extreme happiness to extreme suffering. Needless to say, it is hard to predict the future and we should take such numbers with a lot of caution, and it seems legitimate for people to disagree. Though I should qualify that a bit: Say, if someone thinks that classical utilitarians should not work on extinction risk reduction because the future is too negative, or if someone thinks even strongly negative-leaning consequentialists should have the same ranking of priorities as classical utilitarians because the future is so very positive, then both positions have to explain away strong expert disagreement (at least within EA; I think outside of EA, people’s predictions are all over the place, with economists generally being more optimistic).
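To spell out the arithmetic behind “sufficiently negative-leaning” a bit more explicitly (this is just a rough sketch of my own, and the symbols are illustrative placeholders rather than anything precise): let $H$ be the amount of extreme happiness we can in expectation still create, $S$ the amount of extreme suffering we can in expectation still prevent, and $c$ the factor by which a view weighs preventing a unit of extreme suffering over creating a unit of extreme happiness. Then, very roughly, the view comes out downside-focused when

$$c \cdot S > H, \quad \text{i.e.} \quad c > \frac{H}{S}.$$

If our empirical situation makes creating extreme happiness something like 100 to 1,000 times “easier” than preventing extreme suffering, i.e. $H/S \approx 100\text{–}1{,}000$, then a view flips from upside-focused to downside-focused only once $c$ exceeds roughly 100 (or 1,000 on the conservative estimate).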
Lastly, I don’t think proponents of any value system should start to sabotage other people’s efforts, especially since there are other ways to create value according to your own value system that are altogether much more positive-sum. Note that this – the dangers of naive/Machiavellian consequentialism – is a very general problem that reaches far deeper than just value differences. Say you have two EAs who both think creating happiness is 1/10th as important as reducing suffering. One is optimistic about the future, the other has become more pessimistic after reading about some new arguments. They try to talk out the disagreement, but do not reach agreement. Should the second EA now start to sabotage the efforts of the first one, or vice versa? That seems ill-advised; no good can come from going down that path.