By “doesn’t have to work on reducing x-risk”, do you mean that they don’t want to?
I’d expect that negative utilitarians (NUs) do want to reduce x-risk, because
(1) x-risk is rarely an all-or-nothing risk of 100% extinction; rather, more x-risk probably correlates with more risk of extreme suffering (from non-total pandemics, disasters, wars, etc., and all of their after-effects)
(2) even facing 100% human extinction, we’d want to account for our epistemic uncertainty about the conditions from which suffering can evolve (re-evolve on Earth, or be found elsewhere within the reach of our descendants)
NUs don’t necessarily jump to suicide as a solution, because helping others is an infinite game to live for, especially after accounting for the epistemic uncertainty about all possible forms of suffering and their evolution. There is further basic research on suffering to be done before turning off the lights and hoping that all the billions of exoplanets will have their own guardians.
It is a straw man to say that NUs don’t value life or positive states: NUs value them instrumentally, which may translate into substantial practical efforts (even compared with someone who claims to be terminally motivated by them).
I mean that the end of the world isn’t a bad outcome to someone who only values the absence of suffering, and who is perfectly indifferent between all ‘positive’ states. (This is Ord’s definition of absolute NU, so I don’t think I’m straw-manning that kind.) And if something isn’t bad (and doesn’t prevent any good), a utilitarian ‘doesn’t have to work on it’ in the sense that there’s no moral imperative to.
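To spell that out (a minimal formalisation of my own, not Ord’s notation): let $S(o) \ge 0$ be the total suffering in outcome $o$, and let the absolute-NU value of $o$ be $V(o) = -S(o)$. Extinction leaves no sufferers, so $S(\text{extinction}) = 0$ and $V(\text{extinction}) = 0$, which is the highest value any outcome can attain; nothing that extinction forecloses registers as a loss.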
(1) That makes sense. But there’s an escalation problem: worse risks are better by absolute NU (ANU) lights (see below).
(2) One dreadful idea is that self-replicators could do the anti-suffering work, obviating the need for sentient guardians, but I see what you’re saying. Again, though, this uncertainty about moral patients licenses ANU work on x-risks to humans… but only while moving the degenerate ‘solution’ upward, to valuing risks that destroy more classes of candidate moral patients. At the limit, the end of the entire universe is indisputably optimal to an ANU. So you’re right about Earth x-risks (which are most of what people talk about), but not about the really far-out sci-fi ones, which ANU seems to value.
Actually, this degenerate escalation might change matters in practice: it seems unlikely that removing suffering with biotechnology would be harder than destroying everything. It’s up to you whether you’re willing to bite the bullet on the remaining theoretical repugnance.
(To clarify, I think basically no negative utilitarian wants this, including those who identify with absolute NU. But that suggests their utility function is more complex than they let on. You hint at this when you mention valuing an ‘infinite game’ of suffering alleviation. That doesn’t make sense on the ANU account, because each iteration of the game can only break even (no new suffering) or lose (more suffering); there is nothing in it to positively value.)
Most ethical views have degenerate points in them, but valuing the greatest destruction as highly as the greatest hedonic triumph is unusually repugnant, even among repugnant conclusions.
I don’t think instrumentally valuing positive states helps with the x-risk question, because instrumental value is trumped by a sufficiently large amount of terminal value, again e.g. the end of all things.
(I’m not making claims about other kinds of NU.)