While I agree that problematic implications do not follow in practice, I still think some views have highly counterintuitive implications. E.g., some suffering-focused views would imply that most happy present-day humans would be better off committing suicide if there’s a small chance that they would experience severe suffering at some point in their lives. This seems a highly implausible and under-appreciated implication (and makes me assign more credence to views that don’t have this implication, such as preference-based and upside-focused views).
It is fair to say that some suffering-focused views have highly counterintuitive implications, such as the one you mention. The misconception is just that this holds for all suffering-focused views. For instance, there are plenty of possible suffering-focused views that do not imply that happy humans would be better off committing suicide. In addition to preference-based views, one could value happiness but endorse the procreative asymmetry (so that lives above a certain threshold of welfare are considered OK even if there is some severe suffering), or one could be prioritarian or egalitarian in interpersonal contexts, which also avoids problematic conclusions about such tradeoffs. (Of course, those views may be considered unattractive for other reasons.)
I think views along these lines are actually fairly widespread among philosophers. It just so happens that suffering-focused EAs have often promoted other variants of suffering-focused ethics (SFE) that arguably do have implications for intrapersonal tradeoffs that you consider counterintuitive (and I mostly agree that those implications are problematic, at least when taken to extremes), thus giving the impression that all or most suffering-focused views have such implications.
Do you think this is highly implausible even if you account for:
1. the opportunities to reduce other people’s extreme suffering that a person committing suicide would forego,
2. the extreme suffering of one’s loved ones this would probably increase,
3. plausible views of personal identity on which risking the extreme suffering of one’s future self is ethically similar to, if not the same as, risking it for someone else,
4. relatedly, views of probability on which the small measure of worlds with a being experiencing extreme suffering is as “real” as the large measure without, and
5. the fact that even non-negative utilitarian views will probably consider some forms of suffering so bad that small risks of them would outweigh, for oneself, any upsides that a typical human experiences (ignoring effects on other people)?
These seem like good objections to me, but overall I still find it pretty implausible. A hermit who leads a happy life alone on an island (and has read lots of books about personal identity and otherwise acquired a lot of wisdom) probably wouldn’t want to commit suicide unless the amount of expected suffering in their future was pretty significant.
(I didn’t understand the fourth point, or else I disagree with it.)
[Warning: potentially disturbing discussion of suicide and extreme suffering.]
I agree with many of the points made by Anthony. It is important to control for these confounding factors, and to make clear in this thought experiment that the person in question cannot reduce more suffering for others, and that the suicide would cause less suffering in expectation than continuing to live (which is plausibly false in the real world, also considering the potential for suicide attempts to go horribly wrong; Humphry, 1991, “Bizarre ways to die”). (So to be clear, and as hinted by Jonas, even given pure negative utilitarianism, trying to commit suicide is likely very bad in most cases; Vinding, 2020, 8.2.)
Another point one may raise is that our intuitions cannot necessarily be trusted when it comes to these issues, e.g. because we have an optimism bias (which suggests that we may, at an intuitive level, wholly disregard these tail risks); because we evolved to prefer existence almost no matter the (expected) costs (Vinding, 2020, 7.11); and because we intuitively have a very poor sense of how bad the states of suffering in question are (cf. ibid., 8.12).
Intuitions also differ on this matter. One EA told me that he thinks we are absolutely crazy for staying alive (disregarding our potential to reduce suffering), especially since we have no off-switch in case things go terribly wrong. This may be a reason to be less sure of one’s immediate intuitions on this matter, regardless of what those intuitions might be.
I also think it is important to highlight, as Tobias does, that there are many alternative views that can accommodate the intuition that the suicide in question would be bad, apart from a symmetry between happiness and suffering or upside-focused views more generally. For example, there is a wide variety of harm-focused views, including but not restricted to negative consequentialist views, that will deem such a suicide bad, and they may do so for many different reasons, e.g. because they consider one or more of the following an even greater harm (in expectation) than the expected suffering averted: the frustration of preferences, premature death, lost potential, the loss of hard-won knowledge, etc. (I say a bit more about this here and here.)
Relatedly, one should be careful about drawing overly general conclusions from this case. For example, the case of suicide does not necessarily say much about different population-ethical views, nor about the moral importance of creating happiness vs. reducing suffering in general. After all, as Tobias notes, quite a number of views will say that premature deaths are mostly bad while still endorsing the Asymmetry in population ethics, e.g. due to conditional interests (St. Jules, 2019; Frick, 2020). And some views that reject a symmetry between suffering and happiness will still consider death very bad on the basis of pluralist moral values (cf. Wolf, 1997, VIII; Mayerfeld, 1996, “Life and Death”; 1999, p. 160; Gloor, 2017, secs. 1, 4.3, 5).
Similar points can be made about intra- vs. interpersonal tradeoffs: one may think that it is acceptable to risk extreme suffering for oneself without thinking that it is acceptable to expose others to such a risk for the sake of creating a positive good for them, such as happiness (Shiffrin, 1999; Ryder, 2001; Benatar & Wasserman, 2015, “The Risk of Serious Harm”; Harnad, 2016; Vinding, 2020, 3.2).
(Edit: And note that a purely welfarist view entailing a moral symmetry between happiness and suffering would actually be a rather fragile basis on which to rest the intuition in question, since it would imply that people should painlessly end their lives if their expected future well-being were just below “hedonic zero”, even if they very much wanted to keep on living (e.g. because of a strong drive to accomplish a given goal). Another counterintuitive theoretical implication of such a view is that one would be obliged to end one’s life, even in the most excruciating way, if it in turn created a new, sufficiently happy being, cf. the replacement argument discussed in Jamieson, 1984; Pluhar, 1990. I believe many would find these implications implausible as well, even on a purely theoretical level, suggesting that what is counterintuitive here is the complete reliance on a purely welfarist view — not necessarily the focus on reducing suffering over increasing happiness.)
Jonas, I am curious: how are you dealing with the above implication?
As I said, mainly by assigning more credence to other views.