>> The following is, as far as I can tell, the main argument that MacAskill makes against the Asymmetry (p. 172)
> I’m confused by this sentence. The Asymmetry endorses neutrality about bringing into existence lives that have positive wellbeing, and I argue against this view for much of the population ethics chapter, in the sections “The Intuition of Neutrality”, “Clumsy Gods: The Fragility of Identity”, and “Why the Intuition of Neutrality is Wrong”.
The arguments presented against the Asymmetry in the section “The Intuition of Neutrality” are the ones I criticize in the post. The core claims defended in “Clumsy Gods” and “Why the Intuition of Neutrality Is Wrong” are, as far as I can tell, relative claims: it is better to bring Bob/Migraine-Free into existence than to bring Alice/Migraine into existence, because Bob/Migraine-Free would be better off. Someone who endorses the Asymmetry may agree with those relative claims (which are fairly easy to agree with) without giving up on the Asymmetry.
Specifically, one can agree that it’s better to bring Bob into existence than to bring Alice into existence while also maintaining that it would be no better (and perhaps even worse) to bring Bob (or Migraine-Free) into existence than to bring no one into existence. Only “The Intuition of Neutrality” appears to take up this latter question of whether it can be better to start a life than to not start a life (purely for its own sake), which is why I consider the arguments found there to be the main arguments against the Asymmetry.
> If you think that any X outweighs any Y, then you seem forced to believe that any probability of X, no matter how tiny, outweighs any Y. So: you can either prevent a one in a trillion trillion trillion chance of someone with a suffering life coming into existence, or guarantee a trillion lives of bliss.
It seems worth separating purely axiological issues from decision-theoretic issues relating to tiny probabilities. Specifically, one might think that this thought experiment brings two distinct issues into play: questions and intuitions relating to value lexicality, and questions and intuitions relating to tiny probabilities and large numbers. I think it is best to try to separate these matters, since each of them is already quite tricky on its own.
To bring the axiological question into clearer focus, we may “actualize” the thought experiment by scaling both outcomes up by the inverse of the tiny probability, so that we are choosing between preventing, with certainty, a lifetime of the most extreme unmitigated torture and creating a trillion trillion trillion trillion lives of bliss.
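To spell out the arithmetic behind this conversion (a simple piece of expected-value bookkeeping, counting lives as units, just to make the scaling explicit): the original gamble compares

$$\underbrace{10^{-36}}_{\substack{\text{one in a trillion}\\ \text{trillion trillion}}} \times \; 1 \text{ suffering life} \quad \text{vs.} \quad 10^{12} \text{ lives of bliss,}$$

and multiplying both sides by \(10^{36}\) gives the actualized version:

$$1 \text{ suffering life} \quad \text{vs.} \quad 10^{12} \times 10^{36} = 10^{48} \text{ lives of bliss,}$$

where \(10^{48}\) is a trillion trillion trillion trillion.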
The lexical view says that it is better to prevent the torture. This seems reasonable to me. I do not think there is any need or ethical duty to create lives of bliss, let alone an ethical duty to create lives of bliss at the (opportunity) cost of failing to prevent a lifetime of extreme suffering. Likewise, I do not think there is anything about pleasure (or other purported goods) that renders them an axiological counterpart to suffering. And I do not think the numbers are all that relevant here, any more than thought experiments involving very large numbers of, say, art pieces would make me question my view that extreme suffering cannot be outweighed by any number of art pieces.
Regarding moral uncertainty: as noted in the final section above, there are many views that support granting foremost priority to the prevention of extreme suffering and extremely bad lives. Consequently, even if one does not end up with a strictly lexical view at the theoretical level, one may still end up with an effectively lexical view at the practical level, in the sense that reducing extreme suffering may in practice override everything else, given its all-things-considered disvalue and its expected prevalence.
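As a toy numerical illustration of this practical point (the numbers here are purely illustrative, not drawn from the book or from any particular source): suppose one places just 10 percent credence in a view on which a year of extreme suffering is a million times as disvaluable as a year of bliss is valuable, and 90 percent credence in a fully symmetric view. The credence-weighted exchange rate is then

$$0.9 \times 1 \;+\; 0.1 \times 10^{6} \;\approx\; 10^{5},$$

so even a modest credence in a strongly suffering-focused view leaves the prevention of extreme suffering dominating by around five orders of magnitude in expectation.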
> I talk about the asymmetry between goods and bads in chapter 9 on the value of the future in the section “The Case for Optimism”, and I actually argue that there is an asymmetry: I argue the very worst world is much more bad than the very best world is good.
But arguing for such an asymmetry still does not address questions about whether or how purported goods can morally outweigh extreme suffering or extremely bad lives.
> Of course, there’s only so much one can do in a single chapter of a general-audience book, and all of these issues warrant a lot more discussion than I was able to give!
That is understandable. But I still think the book draws overly strong conclusions from the discussion it provides. For instance, Chapter 9 ends with these words:
> All things considered, it seems to me that the greater likelihood of eutopia is the bigger consideration. This gives us some reason to think the expected value of the future is positive. We have grounds for hope.
But again, no justification has been provided for the view that purported goods can outweigh severe bads, such as extreme suffering, extremely bad lives, or vast numbers of extremely bad lives. Nor do I think the book addresses the main points made in Anthony DiGiovanni’s post *A longtermist critique of “The expected value of extinction risk reduction is positive”*, which essentially makes a case against the final conclusion of Chapter 9.