Utilitarians aware of the cosmic endowment, at least, can take comfort in the fact that the prospect of quadrillions of animals suffering isn’t even a feather in the scales. They shut up and multiply.
(Many others should also hope humanity doesn’t go extinct soon, for various moral and empirical reasons. But the above point is often missed among people I know.)
I worry about this line of reasoning because it’s ends-justify-the-means thinking.
Let’s say billions of people were being tortured right now, and some longtermists wrote about how this isn’t even a feather in the scales compared to the cosmic endowment. These longtermists would be accused of callously gambling billions of years of suffering on a theoretical idea. I can just imagine The Guardian’s articles about how SBF’s naive utilitarianism is alive and well in EA.
The difference between the scenario for animals and the scenario for humans is that the former is socially acceptable but the latter is not. There isn’t a difference in the actual badness.
Separately, to engage with the utilitarian merits of your argument: my main reservation is an unwillingness to go all-in on ideas that remain theoretical when the stakes are billions of years of torture. (For example, let’s say we ignore factory farming, and then some still-unknown consideration prevents us, or anyone else, from ever accessing the cosmic endowment. That scares me.) Also, though I’m not a negative utilitarian, I think I take arguments for suffering-focused views more seriously than you might.
I’m skeptical that humans will ever realize the full cosmic endowment, and that even if we do, the future will be positive for most of the quintillions of beings involved.
First, as this video discusses, it may be difficult to spread beyond our own star system, because habitable planets may be few and far between. The prospect of finding a few habitable planets might not justify the expense of sending generation ships (even ones populated with digital minds) out into deep space to search for them. And since Earth will remain habitable for the next billion years, there isn’t much incentive to leave now. Granted, we could set up permanent space habitats in the solar system and deep space instead of looking for planets to set up shop on, but… what’s the point?
Second, even if we do spread into interstellar space, there’s no guarantee that all of the settlements we set up will be great. Humans could bring factory farming practices to space with them. Societies in outer space could be oppressive and violent towards humans as well as towards sentient aliens. And the process of settling other planets could damage ecosystems already present there, which could cause any sentient beings on those worlds to suffer.
Thanks for the comment, Zach. I upvoted it.
I fully endorse expected total hedonistic utilitarianism[1], but this does not imply that any reduction in extinction risk is way more valuable than a reduction in nearterm suffering. I guess you want to make this case with a comparison like the following:
If extinction risk is reduced in absolute terms by 10^-10, and the value of the future is 10^50 lives, then one would save 10^40 (= 10^(50 − 10)) lives.
However, animal welfare or global health and development interventions have an astronomically low impact compared with the above.
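As I understand it, spelled out (and capping the neartermist side at the number of people alive today), the comparison is something like:

\[
\underbrace{10^{-10}}_{\Delta\,\text{extinction risk}} \times \underbrace{10^{50}\ \text{lives}}_{\text{value of the future}} = 10^{40}\ \text{lives in expectation} \quad \text{vs.} \quad \lesssim 10^{10}\ \text{lives},
\]

i.e. a gap of roughly 30 orders of magnitude.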
I do not think the above comparison makes sense, because it relies on 2 different methodologies. The way they are constructed, the 2nd will always cap the impact of life-saving interventions at the global population of around 10^10, so it is bound to yield a lower impact than the 1st even when it describes the exact same intervention. Interventions which aim to decrease the probability of a given population loss[2] achieve this via saving lives, so one could weight lives saved at lower population sizes more heavily, while still estimating cost-effectiveness in terms of lives saved per $. I tried this, and with my assumptions, interventions to save lives in normal times look more cost-effective than ones which save lives in severe catastrophes.
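To make this concrete, here is a minimal sketch of the kind of calculation I have in mind. The weighting function, the elasticity, and every number below (probabilities, costs, lives saved) are illustrative placeholders made up for this comment, not the actual assumptions behind my estimates:

```python
# Illustrative sketch: cost-effectiveness in weighted lives saved per $, where
# lives saved at lower population sizes are weighted more heavily.
# All inputs are made-up placeholders, not my actual estimates.

def weight(population: float, current_population: float = 8e9,
           elasticity: float = 1) -> float:
    """Weight given to a life saved when the population equals `population`.
    Lives saved at lower population sizes count for more; `elasticity`
    controls how strongly."""
    return (current_population / population) ** elasticity

def weighted_lives_per_dollar(lives_saved: float, population: float,
                              cost: float, probability: float = 1) -> float:
    """Expected weighted lives saved per dollar spent."""
    return probability * lives_saved * weight(population) / cost

# Normal times: 1 life saved per 5,000 $ at today's population.
normal_times = weighted_lives_per_dollar(lives_saved=1, population=8e9, cost=5e3)

# Severe catastrophe: with probability 1e-4, the population has dropped to 1e9
# and the intervention saves 1e6 lives for 1e9 $ (placeholder numbers).
catastrophe = weighted_lives_per_dollar(lives_saved=1e6, population=1e9,
                                        cost=1e9, probability=1e-4)

print(f"Normal times: {normal_times:.1e} weighted lives per $")  # 2.0e-04
print(f"Catastrophe:  {catastrophe:.1e} weighted lives per $")   # 8.0e-07
```

Under these placeholder numbers, the normal-times intervention comes out roughly 250 times more cost-effective, but the conclusion is of course sensitive to the weighting function and to the assumed probability and scale of the catastrophe.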
Less theoretically, decreasing measurable (nearterm) suffering (e.g. as assessed in standard cost-benefit analyses with estimates in DALY/$) has been a great heuristic for improving the welfare of the beings whose welfare is being considered, both nearterm and longterm[3]. So I think it makes sense to expect, a priori, that interventions which very cost-effectively decrease measurable suffering are also great from a longtermist perspective.
In principle, I am very happy to say that a 10^-100 chance of saving 10^100 lives is exactly as valuable as a 100 % chance of saving 1 life.
For example, decreasing the probability of the population dropping below 1 k for extinction, or below 1 billion for a global catastrophic risk.
Animal suffering has been increasing, but animals have been neglected. There are efforts to account for animals in cost-benefit analyses.