Hello Lukas,

I’m struggling to wrap my head around the difference between upside- and downside-focused morality. I tried to read the rest of the document, but I kept thinking “hold on, I don’t understand the original motivation” and going back to the start.
I’m using the term downside-focused to refer to value systems that in practice (given what we know about the world) primarily recommend working on interventions that make bad things less likely.
If I understand it, the project is something like “how do your priorities differ if you focus on reducing bad things over promoting good things?” but I don’t see how you can go on to draw any conclusions from that, because downside-focused (as well as upside-focused) morality covers so many different things.
Here are 4 different ways you might come to the conclusion you should work on making bad things less likely. Quoting Ord:
“Absolute Negative Utilitarianism (NU). Only suffering counts.
Lexical NU. Suffering and happiness both count, but no amount of happiness (regardless of how great) can outweigh any amount of suffering (no matter how small).
Lexical Threshold NU. Suffering and happiness both count, but there is some amount of suffering that no amount of happiness can outweigh.
Weak NU. Suffering and happiness both count, but suffering counts more. There is an exchange rate between suffering and happiness or perhaps some nonlinear function which shows how much happiness would be required to outweigh any given amount of suffering.”
This would lead you to give more weight to suffering at the theoretical level. Or, fifth, you could be a classical utilitarian—happiness and suffering count equally—and decide, for practical reasons, to focus on reducing suffering.
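To make the contrast concrete, here is a toy sketch of how these five views might be formalized as value functions over a world’s total happiness and total suffering. This is my own illustration, not Ord’s formalism or anything from your document; the threshold of 10, the exchange rate k=3, and the two example worlds are all made-up numbers.

```python
# Toy formalization of the five views (illustrative assumptions only:
# the threshold of 10 and the exchange rate k=3 are made-up numbers).

def absolute_nu(h, s):
    return -s  # only suffering counts

def lexical_nu(h, s):
    # Any suffering outweighs any happiness; happiness only breaks ties.
    return (-s, h)

def lexical_threshold_nu(h, s, threshold=10):
    if s > threshold:
        return (-1, -s)  # suffering beyond the threshold can't be outweighed
    return (0, h - s)    # below the threshold, assume a 1:1 trade-off

def weak_nu(h, s, k=3):
    return h - k * s  # suffering counts k times as much as happiness

def classical_u(h, s):
    return h - s  # happiness and suffering count equally

big_future = (1000, 50)  # lots of happiness, some suffering
empty_world = (0, 0)     # nothing good, nothing bad

views = {
    "absolute NU": absolute_nu,
    "lexical NU": lexical_nu,
    "lexical threshold NU": lexical_threshold_nu,
    "weak NU (k=3)": weak_nu,
    "classical U": classical_u,
}

for name, value in views.items():
    better = "big future" if value(*big_future) > value(*empty_world) else "empty world"
    print(f"{name:21s} prefers the {better}")
```

With these made-up numbers, the first three views prefer the empty world and the latter two prefer the big future.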
As I see it, the problem is that all of them will and do recommend different priorities. A lexical or absolute NU should, perhaps, really be trying to blow up the world. Weak NU and classical U will be interested in promoting happiness too and might want humanity to survive and conquer the stars. It doesn’t seem useful or possible to conduct analysis along the lines of “this is what you should do if you’re more interested in reducing bad things” because the views within downside-focused morality won’t agree on what you should do or why you should do it.
More broadly, this division seems unhelpful. Suppose we have four people in a room: a lexical NU, a very weak NU, a classical U, and a lexical positive utilitarian (any happiness outweighs all suffering). It seems like, on your view, the first two should be downside-focused and the latter two upside-focused. However, it could be that both the classical U and the very weak NU agree that the best way to do good is to focus on suffering reduction, so they’re downside-focused. Or they could agree the best way is happiness promotion, so they’re upside-focused. In fact, the weak NU and the classical U have much more in common with each other—they will nearly always agree on the value of states of affairs—than either of them does with the lexical NU or the lexical PU. Hence they should really stick together, and it doesn’t seem that sorting views by whether, practically speaking, they focus on producing good or reducing bad gives us a category that helps our analysis.
It might be useful to hear you say why you think this is a useful distinction.
“If I understand it, the project is something like ‘how do your priorities differ if you focus on reducing bad things over promoting good things?’”
This sounds accurate, but I was thinking of it with empirical cause prioritization already factored in. For instance, while a view like classical utilitarianism can be called “symmetrical” when it comes to normatively prioritizing good things and bad things (always with some element of arbitrariness because there are no “proper units” of happiness and suffering), in practice the view turns out to be upside-focused because, given our empirical situation, there is more room for creating happiness/good things than there is future expected suffering left to prevent. (Cf. the astronomical waste argument.)
This would go the other way if we had good reason to believe that the future will be very bad, but I think the classical utilitarians who are optimistic about the future (given their values) are right to be optimistic: If you count the creation of extreme happiness as not-a-lot-less important than the prevention of extreme suffering, then the future will in expectation be very valuable according to your values (see footnote [3]).
“but I don’t see how you can go on to draw any conclusions from that, because downside-focused (as well as upside-focused) morality covers so many different things.”
My thinking is that when it comes to interventions that affect the long-term future, different normative views tend to converge roughly into two large clusters for the object-level interventions they recommend. If the future will be good for your value system, reducing extinction risks and existential risk related to “not realizing full potential” will be most important. If your value system makes it harder to attain vast amounts of positive value through bringing about large (in terms of time and/or space) utopian futures, then you want to focus specifically on (cooperative ways of) reducing suffering risks or downside risks generally. The cut-off point is determined by what the epistemically proper degree of optimism or pessimism is with regard to the quality of the long-term future, and by the extent to which we can have an impact on it. Meaning: if we had reason to believe that the future will be very negative and that efforts to make the future contain vast amounts of happiness are very, very unlikely to ever work, then even classical utilitarianism would count as “downside-focused” according to my classification.
Some normative views simply don’t place much importance on creating new happy people, in which case they kind of come out as downside-focused by default (except for the consideration I mention in footnote 2). (If these views give a lot of weight to currently existing people, then they can be both downside-focused and give high priority to averting extinction risks, which is something I pointed out in the third-last paragraph in the section on extinction risks.)
Out of the five examples you mentioned, I’d say they fall into the two clusters as follows:
Downside-focused: absolute NU, lexical NU, lexical threshold NU, and a “negative-leaning” utilitarianism that is sufficiently negative-leaning to counteract our empirical assessment of how much easier it will be to create happiness than to prevent suffering. The rest is upside-focused (maybe with some stuck at “could go either way”).

How much is “sufficiently negative-leaning”? It becomes tricky because there are not really any “proper units” of happiness and suffering, so we have to first specify what we are comparing. See footnote 3: my own view is that the cut-off is maybe very roughly at around 100, but I mentioned “100 or maybe 1,000” to be on the conservative side. These numbers refer to comparing extreme happiness to extreme suffering. Needless to say, it is hard to predict the future, we should take such numbers with a lot of caution, and it seems legitimate for people to disagree.

Though I should qualify that a bit: if someone thinks that classical utilitarians should not work on extinction risk reduction because the future is too negative, or if someone thinks even strongly negative-leaning consequentialists should have the same ranking of priorities as classical utilitarians because the future is so very positive, then both of these have to explain away strong expert disagreement (at least within EA; I think outside of EA, people’s predictions are all over the place, with economists generally being more optimistic).
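To illustrate the kind of back-of-the-envelope arithmetic behind that cut-off, here is a toy sketch of my own; the 100:1 ratio below is an assumed placeholder for how much more extreme happiness than extreme suffering we can expect to be able to affect, not a real estimate.

```python
# Toy sketch of the cut-off logic. The assumed 100:1 ratio of affectable
# extreme happiness to extreme suffering is a placeholder, not an estimate.

def expected_future_value(happiness, suffering, k):
    # k = how many units of extreme happiness are needed to outweigh
    # one unit of extreme suffering under the view in question.
    return happiness - k * suffering

expected_happiness = 100.0  # assumed units of creatable extreme happiness
expected_suffering = 1.0    # assumed units of preventable extreme suffering

for k in (1, 10, 100, 1000):
    ev = expected_future_value(expected_happiness, expected_suffering, k)
    label = "upside-focused" if ev > 0 else "downside-focused (or borderline)"
    print(f"k = {k:>4}: expected value = {ev:>7.1f} -> {label}")
```

With these placeholder numbers, views that weigh suffering less than roughly 100 times as heavily as happiness come out upside-focused, and views at or beyond that ratio come out downside-focused.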
Lastly, I don’t think proponents of any value system should start to sabotage other people’s efforts, especially since there are other ways to create value according to your own value system that are altogether much more positive-sum. Note that this – the dangers of naive/Machiavellian consequentialism – is a very general problem that reaches far deeper than just value differences. Say you have two EAs who both think creating happiness is 1/10th as important as reducing suffering. One is optimistic about the future, the other has become more pessimistic after reading about some new arguments. They try to talk out the disagreement, but do not reach agreement. Should the second EA now start to sabotage the efforts of the first one, or vice versa? That seems ill-advised; no good can come from going down that path.
Just FYI, Simon Knutsson has responded to Toby Ord.