I wouldn’t be surprised if Bostrom’s basic thinking is that suffering animals just aren’t a very good fuel source. To a first approximation, animals suffer because they evolved to escape being eaten (or killed by rivals, by accidents, etc.). If humans can extract more resources from animals by editing out their suffering, then given enough technological progress, experimentation, and competition for limited resources, they’ll do so. This is without factoring in moral compunctions of any kind; if moral thought is more likely to reduce meat consumption than increase it, this further tilts the scales in that direction.
We can also keep going past this point, since this is still pretty inefficient. Meat is stored energy from the Sun, at several levels of remove. If you can extract solar energy more efficiently, you can outcompete anyone who doesn't. On astronomical timescales, running a body made of meat subsisting on other bodies made of meat subsisting on resources assembled from clumsily evolved biological solar panels is probably a pretty unlikely equilibrium.
(Minor side-comment: ‘humans survive and eat lots of suffering animals forever’ is itself an existential risk. An existential risk is anything that permanently makes things drastically worse. Human extinction is commonly believed to be an existential risk, but this is a substantive assertion one might dispute, not part of the definition.)
Good points about fuel efficiency. I don't think it's likely that (post)humans will rely on factory-farmed animals as a food source. However, there are other ways that space colonization or AI could cause a lot of suffering, such as spreading wild animals (which quite possibly have net-negative lives) via terraforming, or running a lot of computer simulations containing suffering (see also: mindcrime). Since most people value nature and don't see wildlife suffering as a problem, I'm not very optimistic that future humans, or for that matter an AI based on human values, will care about it. See this analysis by Michael Dickens.
(It seems like “existential risk” used to be a broader term, but now I always see it used as a synonym for human extinction risks.)
I agree with the “throwaway” comment. I’m not aware of anyone who expects factory farming of animals for meat to continue in a post-human future (except in ancestor simulations). The concerns are with other possible sources of suffering.