It’s possible that preventing human extinction is net negative. A classical utilitarian discusses whether preventing human extinction would be net negative or positive here: http://mdickens.me/2015/08/15/is_preventing_human_extinction_good/. Negative-leaning utilitarians and other suffering-focused people think the value of the far future is negative.
You could also reject maximizing expected utility as the proper method of practical reasoning. Weird things happen with subjective expected utility theory, after all—St. Petersburg paradox, Pascal’s Mugging, anything with infinity, dependence on possibly meaningless subjective probabilities, etc. Of course, giving to poverty charities might still be suboptimal under your preferred decision theory.
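As an aside, the divergence behind the St. Petersburg paradox is easy to make concrete. This is just an illustrative sketch of the standard setup (my own example, not part of the original discussion): the game pays 2^k units if the first heads appears on toss k, which happens with probability 2^-k, so every toss contributes exactly 1 to the expected value and the sum diverges.

```python
# St. Petersburg game: payoff 2^k with probability 2^-k for k = 1, 2, ...
# Each term of the expectation contributes (2^-k) * (2^k) = 1, so the
# truncated expected value grows without bound as more tosses are included.

def truncated_expected_value(n_tosses: int) -> float:
    """Expected payoff counting only games decided within n_tosses tosses."""
    return sum((0.5 ** k) * (2 ** k) for k in range(1, n_tosses + 1))

for n in (10, 100, 1000):
    print(n, truncated_expected_value(n))  # the truncated expectation equals n
```

No finite ticket price exceeds the game’s expected value, which is the usual jumping-off point for doubting naive expected-utility maximization.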
FWIW, strict utilitarianism isn’t concerned with “selfishness” or “moral narcissism”, just maximizing utility.
It’s possible that preventing human extinction is net negative
For something so important, it seems this question is hardly ever discussed. The only literature on the issue is a blog post? It seems like it’s often taken for granted that x-risk reduction is net positive. I’d like to see more analysis on whether non-negative utilitarians should support x-risk reduction.
I totally agree. I’ve had several in-person discussions about the expected sign of x-risk reduction, but hardly anybody writes about it publicly in a useful way. The people I’ve spoken to in person all had similar perspectives and I expect that we’re still missing a lot of important considerations.
I believe we don’t see much discussion of this sort because you have to accept a few uncommon (but true) beliefs before this question becomes interesting. If you don’t seriously care about non-human animals (which is a pretty intellectually crappy position but still popular even among EAs) then reducing x-risk is pretty clearly net positive, and if you think x-risk is silly or doesn’t matter (which is another obviously wrong but still popular position) then you don’t care about this question. Not that many people accept both that animals matter and that x-risk matters, and even among people who do accept those, some believe that work on x-risk is futile or that we should focus on other things. So you end up with a fairly small pool of people who care at all about the question of whether x-risk reduction is net positive.
It’s also possible that people don’t even want to consider the notion that preventing human extinction is bad, or they may conflate it with negative utilitarianism when it could also be a consequence of classical utilitarianism.
For the record, I’ve thought about writing something about it, but I basically came to the same conclusions that you did in your blog post (I also subscribe to total, hedonistic utilitarianism and its implications, e.g. anti-speciesism, concern for wild animals, etc.).
If everyone has similar perspectives, it could be a sign that we’re on the right track, but it could be that we’re missing some important considerations as you say, which is why I also think more discussion of this would be useful.
I wrote an essay partially looking at this for the Sentient Politics essay competition. If it doesn’t win (and probably even if it does) I’ll share it here.
I think it’s a very real and troubling concern. Bostrom seems to assume that, if we populated the galaxy with minds (digital or biological) that would be a good thing, but even if we only consider humans I’m not sure that’s totally obvious. When you throw wild animals and digital systems into the mix, things get scary.
I wouldn’t be surprised if Bostrom’s basic thinking is that suffering animals just aren’t a very good fuel source. To a first approximation, animals suffer because they evolved to escape being eaten (or killed by rivals, by accidents, etc.). If humans can extract more resources from animals by editing out their suffering, then given enough technological progress, experimentation, and competition for limited resources, they’ll do so. This is without factoring in moral compunctions of any kind; if moral thought is more likely to reduce meat consumption than increase it, this further tilts the scales in that direction.
We can also keep going past this point, since this is still pretty inefficient. Meat is stored energy from the Sun, at several levels of remove. If you can extract solar energy more efficiently, you can outcompete anyone who doesn’t. On astronomical timescales, running a body made of meat, subsisting on other bodies made of meat, subsisting on resources assembled by clumsily evolved biological solar panels, is probably a pretty unlikely equilibrium.
(Minor side-comment: ‘humans survive and eat lots of suffering animals forever’ is itself an existential risk. An existential risk is anything that permanently makes things drastically worse. Human extinction is commonly believed to be an existential risk, but this is a substantive assertion one might dispute, not part of the definition.)
Good points about fuel efficiency. I don’t think it’s likely that (post)humans will rely on factory-farmed animals as a food source. However, there are other ways that space colonization or AI could cause a lot of suffering, such as spreading wild animals (which quite possibly have negative lives) via terraforming or running a lot of computer simulations containing suffering (see also: mindcrime). Since most people value nature and don’t see wildlife suffering as a problem, I’m not very optimistic that future humans, or for that matter an AI based on human values, will care about it. See this analysis by Michael Dickens.
(It seems like “existential risk” used to be a broader term, but now I always see it used as a synonym for human extinction risks.)
I agree with the “throwaway” comment. I’m not aware of anyone who expects factory farming of animals for meat to continue in a post-human future (except in ancestor simulations). The concerns are with other possible sources of suffering.
Thanks Jesse, I definitely should also have said that I’m assuming preventing extinction is good. My broad position on this is that the future could be good, or it could be bad, and I’m not sure how likely each scenario is, or what the ‘expected value’ of the future is.
Also agreed that utilitarianism isn’t concerned with selfishness, but from an individual’s perspective, I’m wondering if what Alex is doing in this case might be classed that way.
This article contains an argument for time-discounted utilitarianism: http://effective-altruism.com/ea/d6/problems_and_solutions_in_infinite_ethics/. I’m sure there’s a lot more literature on this; that’s about as far as I’ve looked into it.
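A minimal sketch of how geometric time-discounting tames infinite futures (my own illustration, assuming a bounded per-period utility and a discount factor below 1, not taken from the linked article): the discounted sum is capped at u_max / (1 − delta), so comparisons between infinitely long utility streams stay well-defined.

```python
# With discount factor 0 < delta < 1, a stream of bounded per-period
# utilities has a finite discounted sum: at most u_max / (1 - delta).

def discounted_total(utilities, delta=0.99):
    """Sum of delta^t * u_t over the given finite utility stream."""
    return sum((delta ** t) * u for t, u in enumerate(utilities))

# A constant stream of utility 1 per period approaches 1 / (1 - 0.99) = 100,
# no matter how many periods are included.
print(discounted_total([1.0] * 10_000, delta=0.99))
```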