(Ideal): annihilation is ideally desirable in the sense that it’s better (in expectation) than any other remotely realistic alternative, including <detail broadly utopian vision here>. (After all, continued existence always carries some chance of resulting in some uncompensable suffering at some point.)
Yeah, I mean, on the first one: I acknowledge that this seems pretty counterintuitive to me, but again I just don’t think that counterintuitiveness is overwhelming evidence against the truth of the view.
Perhaps a reframing is: “would this still seem like a ~reductio, conditional on a long-reflection-type scenario that results in literally everyone agreeing that it’s desirable/good?”
And I don’t mean this in the sense of just “assume that the conclusion is ground truth”—I mean it in the sense of “does this look as bad when it doesn’t involve anyone doing anything involuntary?” to try to tease apart whether intuitions around annihilation per se are to any extent “just” a proxy for guarding against the use of force/coercion/lack of consent.
Another way to flip the ‘force’ issue would be: “suppose a society, via some extremely deliberative process (one that predicts and includes the preferences of potential future people), unanimously concludes that annihilation is good and desired. Should some outside observer forcibly prevent them from taking action to this end? (Assume that the observer is interested purely in ethics and doesn’t care about their own existence or have valenced experience.)”
I’ll note that I can easily dream up scenarios where we should force people, even a whole society, to do something against their will. I know some will disagree, but I think we should (at least in principle; implementation is messy) forcibly prevent people from being tortured totally voluntarily. (Assume away masochism: suppose the person just has a preference for suffering that results in pure suffering, with no ‘secretly liking it’ along for the ride.)
(Uncompensable Monster): one being undergoing uncompensable suffering at any point in history suffices to render the entire universe net-negative, or undesirable on net, no matter what else happens to anyone else. We must all (when judging from an impartial point of view) regret the totality of existence.
These strike me as extremely incredible claims, and I don’t think that most of the proposed “moderating factors” do much to soften the blow.
This one I more eagerly bite the bullet on: it just straightforwardly seems true to me that this is possible in principle (i.e., such a world could/would be genuinely very bad). And relevantly, orthodox utilitarianism also endorses this in principle, some of the time (i.e., just add up the utils; in principle, one suffering monster can have enough negative utility)!
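To make the “add up the utils” point concrete (this is my gloss, with made-up numbers, not anything from the original exchange): total utilitarianism scores a world by summing welfare across beings, so a single sufficiently negative term can dominate any finite number of positive ones. For instance, with a billion beings at welfare +100 each and one monster at −10¹²:

$$W = \sum_i u_i = 10^{9}\times 100 + (-10^{12}) = 10^{11} - 10^{12} < 0.$$

The remaining difference from genuine uncompensability is that this finite negative term could still, in principle, be outweighed by adding enough further positive terms.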
Moral uncertainty might help if it resulted in the verdict that you should, all things considered, prefer positive-utilitarian futures (no matter their uncompensable suffering) over annihilation. But I’m not quite sure how moral uncertainty could deliver that verdict if you really regard the suffering as uncompensable. How could a lower degree of credence in ordinary positive goods rationally outweigh a higher degree of credence in uncompensable bads? It seems like you’d instead need to give enough credence to something even worse: e.g., violating an extreme deontic constraint against annihilation. But that’s very hard to credit, given the above-quoted case where annihilation is “obviously preferable”.
I don’t have an answer to this (yet), because my sense is that figuring out how to make overall assessments of probability distributions over various moral views is just extremely hard in general and not “solved”.
This actually reminds me of a shortform post I wrote a while back. Let me just drop a screenshot to make my life a bit easier in terms of formatting nested quotes:
I think this^ brief discussion of how the two sides might look at the same issue gets at the fundamental problem/non-obviousness of the matter pretty well.
I think a more promising form of suffering-focused ethics would explore some form of “variable value” approach, which avoids annihilationism in principle by allowing harms to be compensated (by sufficient benefits) when the alternative is no population at all, but introduces variable thresholds for various harms being specifically uncompensable by extra benefits beyond those basic thresholds. I’m not sure whether a view of this structure could be made to work, but it seems more worth exploring than pro-annihilationist principles.
I think we may just have very different background stances on how to do ethics. I think we should more strongly decouple the project of abstract object-level truth-seeking from the project of figuring out a code of norms/rules/de facto ethics that satisfies all our many constraints and preferences today. The thing you propose seems promising to me as a coalitional bargaining proposal for guiding action in the near future, but not especially promising as a candidate for abstract moral truth.
On your last point, I’m fully on board with strictly decoupling intrinsic vs. instrumental questions (see, e.g., my post distinguishing telic vs. decision-theoretic questions). Rather, it seems we just have very different views about what telic ends or priorities are plausible. I give ~zero credence to pro-annihilationist views on which the world’s ending is preferable to any (even broadly utopian) future that includes severe suffering as a component. Such pro-annihilationist lexicality strikes me as a non-starter at the most intrinsic/fundamental/principled levels. By contrast, I could imagine some more complex variable-value/threshold approach to lexicality turning out to have at least some credibility (even if I’m overall more inclined to think that the sorts of intuitions you’re drawing upon are better captured at the “instrumental heuristic” level).
On moral uncertainty: I agree that bargaining-style approaches seem better than “maximizing expected choiceworthiness” approaches. But then if you have over 50% credence in a pro-annihilationist view, it seems like the majority rule is going to straightforwardly win out when it comes to determining your all-things-considered preference regarding the prospect of annihilation.
Re: uncompensable monster: It isn’t true that “orthodox utilitarianism also endorses this in principle”, because a key part of the case description was “no matter what else happens to anyone else”. Orthodox consequentialism allows that any good or bad can be outweighed by what happens to others (assuming strictly finite values). No one person or interest can ever claim to settle what should be done no matter what happens to others. It’s strictly anti-absolutist in this sense, and I think that’s a theoretically plausible and desirable property that your view is missing.
I don’t think it’s helpful to focus on external agents imposing their will on others, because that’s going to trigger all kinds of instrumental-heuristic norms against that sort of thing. Similarly, one might worry that there is some moral cost to the future’s not going how humanity collectively wants it to. Better to just consider natural causes, and/or comparisons of alternative possible societal preferences. Here are some possible futures:
(A) Society unanimously endorses your view and agrees that, even though their future looks positive in traditional utilitarian terms, annihilation would be preferable.
(A1): A quantum-freak black hole then envelops the Earth without anyone suffering (or even noticing).
(A2): After the present generations stop reproducing and go extinct, a freak accident in a biolab creates new human beings who go on to repopulate the Earth (creating a future similar to the positive-but-imperfect one that previous generations had anticipated but rejected).
(B) Society unanimously endorses my view and agrees that, even though existence entails some severe suffering, it is compensable and the future overall looks extremely bright.
(B1): A quantum-freak black hole then envelops the Earth without anyone suffering (or even noticing).
(B2): The broadly-utopian (but imperfect) future unfolds as anticipated.
Intuitively: B2 > A2 > A1 > B1.
I think it would be extremely strange to think that B1 > B2, or that A1 > B2. In fact, I think those verdicts are instantly disqualifying: any view yielding those verdicts deserves near-zero credence.
(I think A1 is broadly similar to, though admittedly not quite as bad as, a scenario C1 in which everyone decides that they deserve to suffer and should be tortured to death, and then some very painful natural disaster occurs which basically tortures everyone to death. It would be even worse if people didn’t want it, but wanting it doesn’t make it good.)