Thanks for your reply! Working backwards...

On your last point, I'm fully on board with strictly decoupling intrinsic vs instrumental questions (see, e.g., my post distinguishing telic vs decision-theoretic questions). Rather, it seems we just have very different views about what telic ends or priorities are plausible. I give ~zero credence to pro-annihilationist views on which the world's ending is preferable to any (even broadly utopian) future that includes severe suffering as a component. Such pro-annihilationist lexicality strikes me as a non-starter at the most intrinsic/fundamental/principled level. By contrast, I could imagine some more complex variable-value/threshold approach to lexicality turning out to have at least some credibility (even if I'm overall more inclined to think that the sorts of intuitions you're drawing upon are better captured at the "instrumental heuristic" level).
On moral uncertainty: I agree that bargaining-style approaches seem better than "maximizing expected choiceworthiness" approaches. But then if you have over 50% credence in a pro-annihilationist view, it seems like the majority rule is going to straightforwardly win out when it comes to determining your all-things-considered preference regarding the prospect of annihilation.
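The structural point here can be made concrete with a toy sketch (the credence numbers and theory labels below are illustrative assumptions, not anything from our exchange): under a simple credence-weighted majority vote between two options, whichever theory bundle commands over 50% credence settles the outcome outright, no matter how strongly the minority theories object.

```python
# Toy model: credence-weighted majority voting over two options.
# Credences and theory names are purely illustrative.
theories = {
    "pro-annihilationist": {"credence": 0.55, "prefers": "annihilation"},
    "orthodox utilitarian": {"credence": 0.45, "prefers": "continuation"},
}

def majority_choice(theories):
    # Tally the total credence behind each preferred option.
    totals = {}
    for view in theories.values():
        totals[view["prefers"]] = totals.get(view["prefers"], 0.0) + view["credence"]
    # The option backed by the most credence wins outright,
    # however intensely the minority theories oppose it.
    return max(totals, key=totals.get)

print(majority_choice(theories))  # prints "annihilation" on 55% credence
```

Contrast this with maximizing expected choiceworthiness, which would also weigh how *much* each theory cares about the outcome, not just the credence shares.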
Re: uncompensable monster: It isn't true that "orthodox utilitarianism also endorses this in principle", because a key part of the case description was "no matter what else happens to anyone else". Orthodox consequentialism allows that any good or bad can be outweighed by what happens to others (assuming strictly finite values). No one person or interest can ever claim to settle what should be done no matter what happens to others. It's strictly anti-absolutist in this sense, and I think that's a theoretically plausible and desirable property that your view is missing.
Another way to flip the "force" issue would be: "suppose a society concludes unanimously, including via some extremely deliberative process (one that predicts and includes the preferences of potential future people), that annihilation is good and desired. Should some outside observer forcibly prevent them from taking action to this end (assume that the observer is interested purely in ethics and doesn't care about their own existence or have valenced experience)?"
I don't think it's helpful to focus on external agents imposing their will on others, because that's going to trigger all kinds of instrumental heuristic norms against that sort of thing. Similarly, one might have some concerns about there being some moral cost to the future not going how humanity collectively wants it to. Better to just consider natural causes, and/or comparisons of alternative possible societal preferences. Here are some possible futures:
(A) Society unanimously endorses your view and agrees that, even though their future looks positive in traditional utilitarian terms, annihilation would be preferable.
(A1): A quantum-freak black hole then envelops the Earth without anyone suffering (or even noticing).
(A2): After the present generations stop reproducing and go extinct, a freak accident in a biolab creates new human beings who go on to repopulate the Earth (creating a future similar to the positive-but-imperfect one that previous generations had anticipated but rejected).
(B) Society unanimously endorses my view and agrees that, even though existence entails some severe suffering, it is compensable and the future overall looks extremely bright.
(B1): A quantum-freak black hole then envelops the Earth without anyone suffering (or even noticing).
(B2): The broadly-utopian (but imperfect) future unfolds as anticipated.
Intuitively: B2 > A2 > A1 > B1.
I think it would be extremely strange to think that B1 > B2, or that A1 > B2. In fact, I think those verdicts are instantly disqualifying: any view yielding those verdicts deserves near-zero credence.
(I think A1 is broadly similar to, though admittedly not quite as bad as, a scenario C1 in which everyone decides that they deserve to suffer and should be tortured to death, and then some very painful natural disaster occurs which basically tortures everyone to death. It would be even worse if people didn't want it, but wanting it doesn't make it good.)