this isn't strong evidence against the underlying truth of suffering-focused views. Consider scenarios where the only options are (1) a thousand people tortured forever with no positive wellbeing whatsoever or (2) painless annihilation of all sentience. Annihilation seems obviously preferable.
Regarding the "world-destruction" reductio:
I agree that it's obviously true that annihilation is preferable to some outcomes. I understand the objection as being more specific, targeting claims like:
(Ideal): annihilation is ideally desirable in the sense that it's better (in expectation) than any other remotely realistic alternative, including <detail broadly utopian vision here>. (After all, continued existence always has some chance of resulting in some uncompensable suffering at some point.)
or
(Uncompensable Monster): one being undergoing uncompensable suffering at any point in history suffices to render the entire universe net-negative or undesirable on net, no matter what else happens to anyone else. We must all (when judging from an impartial point of view) regret the totality of existence.
These strike me as extremely incredible claims, and I don't think that most of the proposed "moderating factors" do much to soften the blow.
I grant your "virtual impossibility" point that annihilation is not really an available option (to us, at least; future SAI might be another matter). But the objection is to the plausibility of the in-principle verdicts entailed here, much as I would object to an account of the harm of death that implies that it would do no harm to kill me in my sleep (the force of which objection would not be undermined by my actually being invincible).
Moral uncertainty might help if it resulted in the verdict that you should, all things considered, prefer positive-utilitarian futures (no matter their uncompensable suffering) over annihilation. But I'm not quite sure how moral uncertainty could deliver that verdict if you really regard the suffering as uncompensable. How could a lower degree of credence in ordinary positive goods rationally outweigh a higher degree of credence in uncompensable bads? It seems like you'd instead need to give enough credence to something even worse: e.g. violating an extreme deontic constraint against annihilation. But that's very hard to credit, given the above-quoted case where annihilation is "obviously preferable".
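To illustrate the structural worry with made-up numbers (a toy model, not anyone's actual credences): suppose you have credence 0.8 in a view on which the severe suffering is uncompensable and credence 0.2 in a view on which ordinary goods can outweigh it. If "uncompensable" is crudely modelled as a bad worse than any finite good, then for every finite good G the expectation 0.2 × G + 0.8 × (uncompensable bad) still comes out negative, so the minority credence never flips the verdict; only credence in something treated as even worse than annihilation could.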
The "irreversibility" consideration does seem stronger here, but I think it ultimately rests on a much more radical form of moral uncertainty: it's not just that you should give some (minority) weight to other views, but that you should give significant weight to the possibility that a more ideally rational agent would give almost no weight to such a pro-annihilationist view as this. Some kind of anti-hubris norm along these lines should probably take priority over all of our first-order views. I'm not sure what the best full development of the idea would look like, though. (It seems pretty different from ordinary treatments of moral uncertainty!) Pointers to related discussion would be welcome!
I think a more promising form of suffering-focused ethics would explore some form of "variable value" approach, which avoids annihilationism in principle by allowing harms to be compensated (by sufficient benefits) when the alternative is no population at all, but introduces variable thresholds for various harms being specifically uncompensable by extra benefits beyond those basic thresholds. I'm not sure whether a view of this structure could be made to work, but it seems more worth exploring than pro-annihilationist principles.
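(One crude way to gesture at the intended structure, purely as an illustrative sketch rather than a worked-out proposal: relative to the empty world at value 0, let a populated world be worth B − H, where B is total benefits and H is total harms, except that benefits beyond some harm-dependent threshold b*(H) stop counting toward compensating those harms, so V = min(B, b*(H)) − H. Severe harms can then still be outweighed when the alternative is no population at all, but not by simply piling on ever more benefits past the threshold.)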
(Ideal): annihilation is ideally desirable in the sense that it's better (in expectation) than any other remotely realistic alternative, including <detail broadly utopian vision here>. (After all, continued existence always has some chance of resulting in some uncompensable suffering at some point.)
Thanks! Yeah, I mean, on the first one, I acknowledge that this seems pretty counterintuitive to me, but again I just don't think it is overwhelming evidence against the truth of the view.
Perhaps a reframing is "would this still seem like a ~reductio conditional on a long-reflection-type scenario that results in literally everyone agreeing that it's desirable/good?"
And I don't mean this in the sense of just "assume that the conclusion is ground truth"; I mean it in the sense of "does this look as bad when it doesn't involve anyone doing anything involuntary?", to try to tease apart whether intuitions around annihilation per se are to any extent "just" a proxy for guarding against the use of force/coercion/lack of consent.
Another way to flip the "force" issue would be: "suppose a society unanimously concludes, including via some extremely deliberative process (one that predicts and includes the preferences of potential future people), that annihilation is good and desired. Should some outside observer forcibly prevent them from taking action to this end (assume that the observer is interested purely in ethics, doesn't care about its own existence, and has no valenced experience)?"
I'll note that I can easily dream up scenarios where we should force people, even a whole society, to do something against their will. I know some will disagree, but I think we should (at least in principle; implementation is messy) forcibly prevent people from totally voluntarily being tortured (assume away masochism; like, suppose the person just has a preference for suffering that results in pure suffering with no "secretly liking it" along for the ride).
(Uncompensable Monster): one being undergoing uncompensable suffering at any point in history suffices to render the entire universe net-negative or undesirable on net, no matter what else happens to anyone else. We must all (when judging from an impartial point of view) regret the totality of existence.
These strike me as extremely incredible claims, and I don't think that most of the proposed "moderating factors" do much to soften the blow.
This one I more eagerly bite the bullet on: it just straightforwardly seems true to me that this is possible in principle (i.e., such a world could/would be genuinely very bad). And relevantly, orthodox utilitarianism also endorses this in principle, some of the time (i.e., just add up the utils; in principle one suffering monster can have enough negative utility)!
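To make the arithmetic concrete (with purely illustrative numbers): on a simple total view the value of a world is V = u_1 + u_2 + … + u_n, so a single being at u = −1,000,000 outweighs a thousand others at +100 each, since −1,000,000 + 1,000 × 100 = −900,000 < 0.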
Moral uncertainty might help if it resulted in the verdict that you should, all things considered, prefer positive-utilitarian futures (no matter their uncompensable suffering) over annihilation. But I'm not quite sure how moral uncertainty could deliver that verdict if you really regard the suffering as uncompensable. How could a lower degree of credence in ordinary positive goods rationally outweigh a higher degree of credence in uncompensable bads? It seems like you'd instead need to give enough credence to something even worse: e.g. violating an extreme deontic constraint against annihilation. But that's very hard to credit, given the above-quoted case where annihilation is "obviously preferable".
I don't have an answer to this (yet), because my sense is that figuring out how to make overall assessments of probability distributions over various moral views is just extremely hard in general and not "solved".
This actually reminds me of a shortform post I wrote a while back. Let me just drop a screenshot to make my life a bit easier in terms of formatting nested quotes:
I think this^ brief discussion of how the two sides might look at the same issue gets at the fundamental problem/non-obviousness of the matter pretty well.
I think a more promising form of suffering-focused ethics would explore some form of "variable value" approach, which avoids annihilationism in principle by allowing harms to be compensated (by sufficient benefits) when the alternative is no population at all, but introduces variable thresholds for various harms being specifically uncompensable by extra benefits beyond those basic thresholds. I'm not sure whether a view of this structure could be made to work, but it seems more worth exploring than pro-annihilationist principles.
I think we may just have very different background stances on, like, how to do ethics. I think that we should more strongly decouple the project of abstract object-level truth-seeking from the project of figuring out a code of norms/rules/de facto ethics that satisfies all our many constraints and preferences today. The thing you propose seems promising to me as something like a coalitional bargaining proposal for guiding action in the near future, but not especially promising as a candidate for abstract moral truth.
Thanks for your reply! Working backwards...
On your last point, I'm fully on board with strictly decoupling intrinsic vs instrumental questions (see, e.g., my post distinguishing telic vs decision-theoretic questions). Rather, it seems we just have very different views about what telic ends or priorities are plausible. I give ~zero credence to pro-annihilationist views on which it's preferable for the world to end rather than for any (even broadly utopian) future to obtain that includes severe suffering as a component. Such pro-annihilationist lexicality strikes me as a non-starter, at the most intrinsic/fundamental/principled levels. By contrast, I could imagine some more complex variable-value/threshold approach to lexicality turning out to have at least some credibility (even if I'm overall more inclined to think that the sorts of intuitions you're drawing upon are better captured at the "instrumental heuristic" level).
On moral uncertainty: I agree that bargaining-style approaches seem better than "maximizing expected choiceworthiness" approaches. But then if you have over 50% credence in a pro-annihilationist view, it seems like the majority rule is going to straightforwardly win out when it comes to determining your all-things-considered preference regarding the prospect of annihilation.
Re: uncompensable monster: It isn't true that "orthodox utilitarianism also endorses this in principle", because a key part of the case description was "no matter what else happens to anyone else". Orthodox consequentialism allows that any good or bad can be outweighed by what happens to others (assuming strictly finite values). No one person or interest can ever claim to settle what should be done no matter what happens to others. It's strictly anti-absolutist in this sense, and I think that's a theoretically plausible and desirable property that your view is missing.
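(To spell out the anti-absolutist structure with a toy inequality: if every harm has some finite magnitude h > 0 and each additional beneficiary receives some benefit b > 0, then any n > h/b beneficiaries yield n × b > h, so no single bad fixes the overall verdict regardless of what happens to everyone else. Lexically uncompensable bads are exactly what block this.)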
Another way to flip the "force" issue would be: "suppose a society unanimously concludes, including via some extremely deliberative process (one that predicts and includes the preferences of potential future people), that annihilation is good and desired. Should some outside observer forcibly prevent them from taking action to this end (assume that the observer is interested purely in ethics, doesn't care about its own existence, and has no valenced experience)?"
I don't think it's helpful to focus on external agents imposing their will on others, because that's going to trigger all kinds of instrumental heuristic norms against that sort of thing. Similarly, one might have some concerns about there being some moral cost to the future not going how humanity collectively wants it to. Better to just consider natural causes, and/or comparisons of alternative possible societal preferences. Here are some possible futures:
(A) Society unanimously endorses your view and agrees that, even though their future looks positive in traditional utilitarian terms, annihilation would be preferable.
(A1): A quantum-freak black hole then envelops the Earth without anyone suffering (or even noticing).
(A2): After the present generations stop reproducing and go extinct, a freak accident in a biolab creates new human beings who go on to repopulate the Earth (creating a future similar to the positive-but-imperfect one that previous generations had anticipated but rejected).
(B) Society unanimously endorses my view and agrees that, even though existence entails some severe suffering, it is compensable and the future overall looks extremely bright.
(B1): A quantum-freak black hole then envelops the Earth without anyone suffering (or even noticing).
(B2): The broadly-utopian (but imperfect) future unfolds as anticipated.
Intuitively: B2 > A2 > A1 > B1.
I think it would be extremely strange to think that B1 > B2, or that A1 > B2. In fact, I think those verdicts are instantly disqualifying: any view yielding those verdicts deserves near-zero credence.
(I think A1 is broadly similar to, though admittedly not quite as bad as, a scenario C1 in which everyone decides that they deserve to suffer and should be tortured to death, and then some very painful natural disaster occurs which basically tortures everyone to death. It would be even worse if people didn't want it, but wanting it doesn't make it good.)
@Richard Y Chappell🔸 what do you think of Aaron's response below? I am using this comment to flag that the discussion you two are having seems very important to me, and I look forward to seeing your reply to Aaron's points.