Hi—thanks for writing this! A few things regarding your references to WWOTF:
> The following is, as far as I can tell, the main argument that MacAskill makes against the Asymmetry (p. 172)
I’m confused by this sentence. The Asymmetry endorses neutrality about bringing into existence lives that have positive wellbeing, and I argue against this view for much of the population ethics chapter, in the sections “The Intuition of Neutrality”, “Clumsy Gods: The Fragility of Identity”, and “Why the Intuition of Neutrality is Wrong”.
> this still leaves open the question as to whether happiness and happy lives can outweigh suffering and miserable lives, let alone extreme suffering and extremely bad lives.
It’s true that I don’t discuss views on which some goods/bads are lexically more important than others; I think such views have major problems, but I don’t talk about those problems in the book. (Briefly: If you think that any X outweighs any Y, then you seem forced to believe that any probability of X, no matter how tiny, outweighs any Y. So: you can either prevent a one in a trillion trillion trillion chance of someone with a suffering life coming into existence, or guarantee a trillion lives of bliss. The lexical view says you should do the former. This seems wrong, and I think doesn’t hold up under moral uncertainty, either. There are ways of avoiding the problem, but they run into other issues.)
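To make the structure of this objection concrete, here is a rough illustrative sketch (mine, not MacAskill's or the book's). It assumes the lexical ranking is extended to uncertain prospects by comparing the expected amount of the lexically prior bad first; whether that extension is forced is itself questioned further down this thread. The numbers and names are hypothetical.

```python
# Illustrative sketch only: how a lexical ranking, if extended to uncertain prospects
# by comparing the expected amount of the lexically prior bad first, lets arbitrarily
# tiny probabilities dominate. All numbers and keys are hypothetical.

def lexical_value(prospect):
    """Sort key: minimize expected suffering-lives first, then maximize expected
    bliss-lives. Tuples compare lexicographically in Python, so higher is better."""
    expected_suffering = prospect["p_suffering_life"] * prospect["suffering_lives"]
    expected_bliss = prospect["p_bliss"] * prospect["bliss_lives"]
    return (-expected_suffering, expected_bliss)

# Option A: remove a one-in-10^36 chance of a single suffering life, create nothing.
option_a = {"p_suffering_life": 0.0, "suffering_lives": 1, "p_bliss": 0.0, "bliss_lives": 0}
# Option B: accept that 10^-36 chance, but guarantee 10^12 lives of bliss.
option_b = {"p_suffering_life": 1e-36, "suffering_lives": 1, "p_bliss": 1.0, "bliss_lives": 1e12}

best = max([("A", option_a), ("B", option_b)], key=lambda pair: lexical_value(pair[1]))
print(best[0])
# -> "A": under this extension, the 10^-36 chance of the lexically prior bad
#    outweighs any guaranteed amount of bliss, which is the verdict described
#    as implausible above.
```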
> these questions regarding tradeoffs and outweighing are not raised in MacAskill’s discussion of population ethics, despite their supreme practical significance
I talk about the asymmetry between goods and bads in chapter 9 on the value of the future in the section “The Case for Optimism”, and I actually argue that there is an asymmetry: I argue the very worst world is much more bad than the very best world is good. (A bit of philosophical pedantry partly explains why it’s in chapter 9, not 8: questions about happiness / suffering tradeoffs aren’t within the domain of population ethics, as they arise even in a fixed-population setting.)
In an earlier draft I talked at more length about relevant asymmetries (not just suffering vs happiness, but also objective goods vs objective bads, and risk-averse vs risk-seeking decision theories). It got cut just because it was adding complexity to an already-complex chapter and didn’t change the bottom-line conclusion of that part of the discussion. The same is true for moral uncertainty—under reasonable uncertainty, you end up asymmetric on happiness vs suffering, objective goods vs objective bads, and you end up risk-averse. Again, the thrust of the relevant discussion happens in the section “The Case for Optimism”: “on a range of views in moral philosophy, we should weight one unit of pain more than one unit of pleasure… If this is correct, then in order to make the expected value of the future positive, the future not only needs to have more “goods” than “bads”; it needs to have considerably more goods than bads.”
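A minimal arithmetic sketch of the quoted claim may help; the weight of 10 is purely illustrative, and the book does not commit to any particular number.

```python
# Minimal arithmetic sketch (illustrative weight only): if one unit of "bad" is
# weighted w times as heavily as one unit of "good", the expected value of the
# future is positive only when goods exceed w times the bads.

def weighted_value(goods, bads, w):
    """Value of a future with `goods` units of good and `bads` units of bad,
    where each unit of bad counts w times as much as a unit of good."""
    return goods - w * bads

print(weighted_value(goods=100, bads=20, w=1))   # symmetric weighting:   80 > 0
print(weighted_value(goods=100, bads=20, w=10))  # asymmetric (w = 10): -100 < 0
# With w = 10, a future with five times more goods than bads still comes out
# negative; "more goods than bads" is not enough, it needs w times more.
```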
Of course, there’s only so much one can do in a single chapter of a general-audience book, and all of these issues warrant a lot more discussion than I was able to give!
> I think such views have major problems, but I don’t talk about those problems in the book. (Briefly: If you think that any X outweighs any Y, then you seem forced to believe that any probability of X, no matter how tiny, outweighs any Y. So: you can either prevent a one in a trillion trillion trillion chance of someone with a suffering life coming into existence, or guarantee a trillion lives of bliss. The lexical view says you should do the former. This seems wrong, and I think doesn’t hold up under moral uncertainty, either. There are ways of avoiding the problem, but they run into other issues.)
It really isn’t clear to me that the problem you sketched is so much worse than the problems with total symmetric, average, or critical-level axiology, or the “intuition of neutrality.” In fact this conclusion seems much less bad than the Sadistic Conclusion or variants of that, which affect the latter three. So I find it puzzling how much attention you (and many other EAs writing about population ethics and axiology generally; I don’t mean to pick on you in particular!) devoted to those three views. And I’m not sure why you think this problem is so much worse than the Very Repugnant Conclusion (among other problems with outweighing views), either.
I sympathize with the difficulty of addressing so much content in a popular book. But this is a pretty crucial axiological debate that’s been going on in EA for some time, and it can determine which longtermist interventions someone prioritizes.
>> The following is, as far as I can tell, the main argument that MacAskill makes against the Asymmetry (p. 172)
> I’m confused by this sentence. The Asymmetry endorses neutrality about bringing into existence lives that have positive wellbeing, and I argue against this view for much of the population ethics chapter, in the sections “The Intuition of Neutrality”, “Clumsy Gods: The Fragility of Identity”, and “Why the Intuition of Neutrality is Wrong”.
The arguments presented against the Asymmetry in the section “The Intuition of Neutrality” are the ones I criticize in the post. The core claims defended in “Clumsy Gods” and “Why the Intuition of Neutrality Is Wrong” are, as far as I can tell, relative claims: it is better to bring Bob/Migraine-Free into existence than Alice/Migraine, because Bob/Migraine-Free would be better off. Someone who endorses the Asymmetry may agree with those relative claims (which are fairly easy to agree with) without giving up on the Asymmetry.
Specifically, one can agree that it’s better to bring Bob into existence than to bring Alice into existence while also maintaining that it would be better if Bob (or Migraine-Free) were not brought into existence in the first place. Only “The Intuition of Neutrality” appears to take up this latter question about whether it can be better to start a life than to not start a life (purely for its own sake), which is why I consider the arguments found there to be the main arguments against the Asymmetry.
> If you think that any X outweighs any Y, then you seem forced to believe that any probability of X, no matter how tiny, outweighs any Y. So: you can either prevent a one in a trillion trillion trillion chance of someone with a suffering life coming into existence, or guarantee a trillion lives of bliss.
It seems worth separating purely axiological issues from issues in decision theory that relate to tiny probabilities. Specifically, one might think that this thought experiment drags two distinct issues into play: questions/intuitions relating to value lexicality, and questions/intuitions relating to tiny probabilities and large numbers. I think it’s ideal to try to separate those matters, since each of them is already quite tricky on its own.
To make the focus on the axiological question clearer, we may “actualize” the thought experiment such that we’re talking about either preventing a lifetime of the most extreme unmitigated torture or creating a trillion, trillion, trillion, trillion lives of bliss.
The lexical view says that it is better to do the former. This seems reasonable to me. I do not think there is any need or ethical duty to create lives of bliss, let alone an ethical duty to create lives of bliss at the (opportunity) cost of failing to prevent a lifetime of extreme suffering. Likewise, I do not think there is anything about pleasure (or other purported goods) that renders them an axiological counterpart to suffering. And I don’t think the numbers are all that relevant here, any more than thought experiments involving very large numbers of, say, art pieces would make me question my view that extreme suffering cannot be outweighed by many art pieces.
Regarding moral uncertainty: As noted in the final section above, there are many views that support granting a foremost priority to the prevention of extreme suffering and extremely bad lives. Consequently, even if one does not end up with a strictly lexical view at the theoretical level, one may still end up with an effectively lexical view at the practical level, in the sense that the reduction of extreme suffering might practically override everything else given its all-things-considered disvalue and expected prevalence.
> I talk about the asymmetry between goods and bads in chapter 9 on the value of the future in the section “The Case for Optimism”, and I actually argue that there is an asymmetry: I argue the very worst world is much more bad than the very best world is good.
But arguing for such an asymmetry still does not address questions about whether or how purported goods can morally outweigh extreme suffering or extremely bad lives.
> Of course, there’s only so much one can do in a single chapter of a general-audience book, and all of these issues warrant a lot more discussion than I was able to give!
That is understandable. But still, I think overly strong conclusions were drawn in the book based on the discussion that was provided. For instance, Chapter 9 ends with these words:
> All things considered, it seems to me that the greater likelihood of eutopia is the bigger consideration. This gives us some reason to think the expected value of the future is positive. We have grounds for hope.
But again, no justification has been provided for the view that purported goods can outweigh severe bads, such as extreme suffering, extremely bad lives, or vast numbers of extremely bad lives. Nor do I think the book addresses the main points made in Anthony DiGiovanni’s post A longtermist critique of “The expected value of extinction risk reduction is positive”, which essentially makes a case against the final conclusion of Chapter 9.
> The Asymmetry endorses neutrality about bringing into existence lives that have positive wellbeing, and I argue against this view for much of the population ethics chapter, in the sections “The Intuition of Neutrality”, “Clumsy Gods: The Fragility of Identity”, and “Why the Intuition of Neutrality is Wrong”.
You seem to be using a different definition of the Asymmetry than Magnus is, and I’m not sure it’s a much more common one. On Magnus’s definition (which is also used by e.g. Chappell; Holtug (2004), “Person-affecting Moralities”; and McMahan (1981), “Problems of Population Theory”), bringing into existence lives that have “positive wellbeing” is at best neutral. It could well be negative.
The kind of Asymmetry Magnus is defending here doesn’t imply the intuition of neutrality, and so isn’t vulnerable to your critiques of that intuition, such as that it violates transitivity or relies on a confused concept of necessarily existing people.
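A minimal formal rendering of the Asymmetry in this sense may help; the notation is mine, not the post's or the book's. Let $A$ be an outcome, $A + p$ the same outcome with an additional person $p$, and $w(p)$ that person's lifetime wellbeing:

$$
w(p) < 0 \;\Rightarrow\; A + p \prec A, \qquad w(p) > 0 \;\Rightarrow\; A + p \not\succ A.
$$

The second clause only says that adding a positive-wellbeing life is never an improvement (it may be equally good, incomparable, or worse); it does not assert the strict indifference $A + p \sim A$ that the intuition-of-neutrality arguments target.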
If bringing into existence lives that have positive wellbeing is at best neutral (and presumably strongly negative for lives with negative wellbeing), why have children at all? Is it the instrumental value they bring about during their lives that we’re after under this philosophy? (Sorry, I’m almost surely missing something very basic here — not a philosopher.)
> on a range of views in moral philosophy, we should weight one unit of pain more than one unit of pleasure...
I’m struggling to interpret this statement. What is the underlying sense in which pain and pleasure are measured in the same units and are thus ‘equal’, even though the pain is morally weighted more highly? Knutson states the problem well, IMO:
> If happiness and suffering were factual attributes measurable on one ratio scale, one way to distinguish the views would be straightforward: weak negative views could give greater moral or evaluative weight to one factual unit of suffering than one factual unit of happiness, while non-negative views could give them equal weight. It would then be accurate to call such weak negative views ‘asymmetric’ and such non-negative views ‘symmetric.’ However, such measurability is highly controversial...

Maybe you have some ideas and intuition into how to think about this?

Thanks MSJ for this reference.
One way of thinking about this would be in relation to self-reported life satisfaction.
Consider someone who rates their life satisfaction at 1⁄10, citing extreme hunger. Now suppose you give a certain amount of food to bring them up to 2⁄10. You have essentially reduced suffering by 1 unit.
Now consider someone who rates their satisfaction at 10⁄10, believing that their life could not be any better. Then suppose you do something for them (e.g. you give them a wonderful present) and they realise that their life is even better than before, retrospectively judging that they have actually increased from 9⁄10 to 10⁄10. We might say that happiness has been increased by one unit (I take this ‘retrospection’ approach to try to avoid the possibility that I might also be ‘reducing suffering’ here, by implying there was no suffering at all to begin with—not sure if it really works, or if it’s actually necessary).
If someone finds it more important to bring the first person from 1⁄10 to 2⁄10 than to bring the other person from 9⁄10 to 10⁄10, they might be weighting the removal of a unit of suffering as more important than the creation of a unit of happiness.
But how would I know that we were comparing the same ‘amount of change’ in these cases?
What makes going from 1⁄10 to 2⁄10 constitute “one unit”, and going from 9⁄10 to 10⁄10 also constitute “one unit”?
And if these are not the same ‘unit’ then how do I know that the person who finds the first movement more valuable ‘cares about suffering more’? Instead it might be that a 1-2 movement is just “a larger quantity” than a 9-10 movement.
In practice you would have to make an assumption that people generally report on the same scale. There is some evidence from happiness research that this is the case (I think) but I’m not sure where this has got to.
From your original question I thought you were essentially trying to understand, in theory, what weighting one unit of pain as greater than one unit of pleasure might mean. As per my example above, one could prioritise a one-unit change on a self-reported scale if the change occurs at a lower position on the scale (assuming different respondents are using the same scale).
Another perspective is that one could consider two changes that are the same in “intensity”, but where one involves alleviating suffering (giving some food to a starving person) and the other involves making someone happier (giving someone a gift), and then prioritise giving the food. For these two actions to be the same in intensity, you can’t be giving all that much food to the starving person, because it will generally be easy to alleviate a large amount of suffering with a ‘small’ amount of food, but relatively difficult to increase the happiness of someone who isn’t suffering much, even with an expensive gift.
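A rough sketch with toy numbers (mine, not the commenter's) of the two readings just described, applied to the 1→2 versus 9→10 comparison; the neutral point and the concave mapping are assumptions for illustration only.

```python
NEUTRAL = 5  # hypothetical neutral point on the 0-10 self-report scale

def asymmetric_value(start, end, suffering_weight=2.0):
    """Reading (a): every point on the scale is the same size, but points gained
    below the neutral point (reducing suffering) get extra moral weight."""
    below = min(end, NEUTRAL) - min(start, NEUTRAL)   # points gained below neutral
    above = max(end, NEUTRAL) - max(start, NEUTRAL)   # points gained above neutral
    return suffering_weight * below + above

def nonlinear_value(start, end):
    """Reading (b): no extra weight on suffering as such; reported points near the
    bottom of the scale just correspond to larger welfare differences (toy concave
    mapping from reported score to welfare)."""
    return end ** 0.5 - start ** 0.5

print(asymmetric_value(1, 2), asymmetric_value(9, 10))  # 2.0 vs 1.0
print(nonlinear_value(1, 2), nonlinear_value(9, 10))    # ~0.41 vs ~0.16
# Both readings rank the 1->2 change above the 9->10 change, so the bare preference
# alone cannot tell us whether someone "cares about suffering more" or just treats
# the low end of the scale as covering more welfare.
```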
Not sure if I’m answering your questions at all, but still interesting to think through!

Thank you for clarifying!
> Briefly: If you think that any X outweighs any Y, then you seem forced to believe that any probability of X, no matter how tiny, outweighs any Y.
This is true for utility/social welfare functions that are additive even over uncertainty (and maybe some other classes), but not in general. See this thread of mine.
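As an illustration of that point, here is one toy example of my own (not necessarily the rule defended in the linked thread, and it has well-known costs of its own): a view can be lexical over outcomes while evaluating uncertain prospects non-expectationally, for instance by treating probabilities below a de minimis threshold as zero, in which case the tiny-probability version of the trade no longer goes the counterintuitive way.

```python
# Illustration only: a ranking that is lexical over outcomes but evaluates prospects
# non-expectationally, by ignoring probabilities below a de minimis threshold. This
# specific rule is a toy example, not a recommendation.

DE_MINIMIS = 1e-12  # probabilities below this are treated as zero when ranking

def prospect_key(p_bad, bliss_lives):
    """Sort key for a prospect: first avoid any non-negligible chance of the
    lexically bad outcome, then maximize lives of bliss."""
    effective_p_bad = 0.0 if p_bad < DE_MINIMIS else p_bad
    return (-effective_p_bad, bliss_lives)

# Prevent a 1-in-10^36 chance of a suffering life vs. guarantee 10^12 blissful lives:
option_prevent = prospect_key(p_bad=0.0, bliss_lives=0)
option_create = prospect_key(p_bad=1e-36, bliss_lives=1e12)
print("create" if option_create > option_prevent else "prevent")
# -> "create": once the 10^-36 chance is treated as negligible, lexical priority of
#    the bad no longer forces the counterintuitive verdict, so the inference depends
#    on how the ranking is extended to uncertain prospects.
```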
> This seems wrong, and I think doesn’t hold up under moral uncertainty, either. There are ways of avoiding the problem, but they run into other issues.
Is this related to lexical amplifications of nonlexical theories like CU under MEC? Or another approach to moral uncertainty? My impression from your co-authored book on moral uncertainty is that you endorse MEC with intertheoretic comparisons (I get the impression Ord endorses a parliamentary approach from his other work, but I don’t know about Bykvist).