Extinction is probably only 10^10 times worse than one random death
Many (e.g. Parfit, Bostrom, and MacAskill) have convincingly argued that we should care about future people (longtermism), and thus that extinction is as bad as the loss of 10^35 lives, or possibly much worse, because there might be 10^35 humans yet to be born.
I believe, with medium confidence, that these numbers are far too high and that when fertility patterns are fully accounted for, 10^35 might become 10^10, approximately the current human population. I believe with much stronger confidence that EAs should be explicit about the assumptions underlying numbers like 10^35, because concern for future people is necessary but not sufficient for such claims.
I first defend these claims, then offer some ancillary thoughts about implications of longtermism that EAs should take more seriously.
Extinction isn’t that much worse than 1 death
The main point is that if you kill a random person, you kill off the rest of their descendants too. And since the average person is responsible for roughly 10^35/(current human population) ≈ 10^25 of those future lives, their death is only about 10^10 times less bad than extinction.
The general response to this is a form of Malthusianism: that after a death, the human population regains its level because fertility increases. Given that current fertility rates are below 2 in much of the developed world, I have low confidence this claim is true. More importantly, you need very high credence in a type of Malthusianism to bump the 10^10 number up significantly. If Malthusianism is 99% likely to be correct, extinction is still only 10^12 times worse than one death. Let X be the harm of extinction, with X arbitrarily large: there is a 99% chance the death is replaced and can be treated as infinitely less bad than extinction, and a 1% chance it is only 10^10 times less bad, so the expected harm of one death is 0.99(0 · X) + 0.01(X/10^10) = X/10^12.
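To make the dependence on that credence explicit, here is a minimal sketch of the same calculation (assuming, as above, 10^35 future lives, a current population of 10^10, and that Malthusianism means a death is fully replaced):

```python
# Minimal sketch: how many times worse extinction is than one random death,
# as a function of credence in Malthusianism (i.e., that a death is fully replaced).
# Assumes 10^35 future lives and 10^10 current people, as in the text.

FUTURE_LIVES = 1e35   # N: assumed number of future lives
CURRENT_POP = 1e10    # C: approximate current human population

def extinction_to_death_ratio(p_malthus: float) -> float:
    """Expected harm of extinction divided by expected harm of one random death."""
    harm_extinction = FUTURE_LIVES
    # With credence p, the death is replaced and costs ~1 life;
    # with credence 1 - p, it also removes the person's ~N/C descendants.
    harm_one_death = p_malthus * 1 + (1 - p_malthus) * (FUTURE_LIVES / CURRENT_POP)
    return harm_extinction / harm_one_death

for p in [0.0, 0.9, 0.99, 0.999]:
    print(f"credence {p}: extinction is ~{extinction_to_death_ratio(p):.1e} times worse")
```

Even at 99.9% credence the ratio only reaches about 10^13, which is the sense in which the headline number is insensitive to anything short of near-certain Malthusianism.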
There are many other claims one could make regarding the above. Some popular ones include digital people, simulated lives, and artificial uteruses. I don't have developed thoughts on how these technologies interact with fertility rates, but the same point about needing high credence applies. More importantly, if any of these or other claims is the linchpin of arguments for why extinction should be a main priority, EAs should make that explicit, because none of these claims is that obvious. Even Malthusian-type claims should be made more explicit.
Finally, I think arguments for why extinction might be less than 10^10 times worse are often ignored. I'll point out two. First, people can have large positive externalities on others' lives, and on future people's lives, by sharing ideas; fewer people means a smaller externality from each life. Second, the insecurity that might result from seeing another's death might lower fertility and thus reduce future lives.
Other implications of longtermism
I’d like to end by zooming out on longtermism as a whole. The idea that future people matter is a powerful claim and opens a deep rabbit hole. In my view, EAs have found the first exit out of the rabbit hole—that extinction might be really bad—and left even more unintuitive implications buried below.
A few of these:
1. Fertility might be an important cause area. If you can raise the fertility rate by 1% for one generation, you increase total future population by 1%, if you assume away Malthusianism and similar claims. If you can effect a long-term shift in fertility rates (for example, through genetic editing), you could do much, much better: roughly (1.01^n − 1) × 100% better, where n is the number of future generations, which is a very large number (see the sketch after this list).
2. Maybe we should prioritize young lives over older lives. Under longtermism, the main value most people have is their progeny. If there are 10^35 more people left to live, saving the life of someone who will have kids is more than 10^25 times more valuable than saving the life of someone who won't.
3. Abortion might be a great evil. See point 1: no matter your view on whether an unborn baby is a life, banning abortion could easily effect a significant and long-term increase in the fertility rate.
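A minimal sketch of the arithmetic behind point 1 (assuming, as above, a permanent multiplicative fertility boost and no Malthusian cap; the generation counts in the loop are just illustrative):

```python
# Minimal sketch of point 1: a permanent fertility boost compounds across generations.
# Assumes each future generation ends up (1 + boost) times larger than it would
# otherwise have been, and no Malthusian cap on population.

def relative_gain(n_generations: int, boost: float = 0.01) -> float:
    """Fractional increase in the size of generation n from a permanent fertility boost."""
    return (1 + boost) ** n_generations - 1

for n in [1, 10, 100, 1000]:
    print(f"generation {n}: ~{relative_gain(n) * 100:.0f}% more people than baseline")
```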
I think your calculations must be wrong somewhere, although I can’t quite follow them well enough to see exactly where.
If you have a 10% credence in Malthusianism, then the expected badness of extinction is 0.1*10^35, or 0.1 times whatever value you assign to a big future. That's still a lot closer to 10^35 times the badness of one death than to 10^10 times.
Does that seem right?
No, because you have to compare the two harms.
Take the number of future lives as N and the current population as C.
Extinction is as bad as N lives lost.
One death is, with 10% credence (Malthusianism true), only about as bad as one life lost. But with 90% credence, it is as bad as N/C lives lost.
So, plugging in 10^35 for N and 10^10 for C, the expected harm of one death is 1(0.1) + (N/C)(0.9) ≈ 0.9 · N/C ≈ 9 × 10^24, which makes extinction only about 1.1 × 10^10 times worse than one death.
In general, if you have credence p in Malthusianism, the expected harm of one death is roughly (1 − p) · N/C, so extinction becomes about N ÷ [(1 − p) · N/C] = C/(1 − p) ≈ 10^10 × 1/(1 − p) times worse than one death.
Ah nice, thanks for explaining! I’m not following all the calculations still, but that’s on me, and I think they’re probably right.
But I don’t think your argument is actually that relevant to what we should do, even if it’s right. That’s because we don’t care about how good our actions are as a fraction/multiple of what our other options are. Instead, we just want to do whatever leads to the best expected outcomes.
Suppose there was a hypothetical world where there was a one in ten chance the total population was a billion, and a 90% chance the population was two. And suppose we have two options: save one person, or save half the people.
In that case, the expected value of saving half the people would be 0.9*1 + 0.1*500,000,000 = about 50,000,001. That's compared to an expected value of 1 for saving one person. Imo, this is a strong reason for picking the "save half the people" option.
But the comparison in terms of expected fraction of people saved looks quite different. The "save half" option always results in half being saved. And the expected fraction for the "save one" option is also very close to half: 0.9*0.5 + 0.1*(1/1,000,000,000) ≈ 0.45. Even though the two interventions look very similar from this perspective, I think it's basically irrelevant: expected value is the relevant thing.
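For concreteness, here is a minimal sketch of that toy comparison (the probabilities and population sizes are just the hypothetical numbers above):

```python
# Toy comparison from the hypothetical above: a 10% chance the population is 1e9
# and a 90% chance it is 2. Compare "save one person" vs "save half the people"
# by expected number of people saved and by expected fraction of people saved.

scenarios = [(0.9, 2), (0.1, 1_000_000_000)]  # (probability, population)

ev_save_one = sum(p * 1 for p, _ in scenarios)              # always saves exactly 1 person
ev_save_half = sum(p * (pop / 2) for p, pop in scenarios)   # saves half of whoever exists

frac_save_one = sum(p * (1 / pop) for p, pop in scenarios)  # fraction saved when saving 1
frac_save_half = sum(p * 0.5 for p, _ in scenarios)         # always exactly half

print(f"expected people saved:   save-one = {ev_save_one:.1f}, save-half = {ev_save_half:,.1f}")
print(f"expected fraction saved: save-one = {frac_save_one:.3f}, save-half = {frac_save_half:.3f}")
```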
What do you think? I might well have made a mistake, or misunderstood still.
Hmm, I'm not sure I understand your point, so maybe let me add some more numbers to what I'm saying and you could say if you think your point is responsive?
What I think you’re saying is that I’m estimating E[value saving one life / value stopping extinction] rather than E[value of saving one life] / E[value of stopping extinction]. I think this is wrong and that I’m doing the latter.
I start from the premise that we want to save the most lives in expectation (current and future lives are equivalent). Let's say I have two options…I can prevent extinction or directly stop a random living person from dying. Assume there are 10^35 future lives (I just want N >> C) and there are 10^10 current lives. Now assume I believe there is a 99% chance that when I save this one life, fertility in the future somehow goes up such that the individual's progeny are replaced, but there's a 1% chance the individual's progeny are not replaced. The individual is responsible for 10^35/10^10 = 10^25 progeny. This gives E[stopping random living person from dying] ~ 1% * 10^25 = 10^23.
And we’d agree E[preventing extinction] = 10^35. So E[value of saving one life] / E[value of stopping extinction] ~ 10^-12.
Interestingly E[value of saving one life / value of stopping extinction] is the same in this case because the denominator is just a constant random variable…though E[value of stopping extinction/value of saving one life] is very very large (much larger than 10^12).
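A minimal numeric check of these three quantities, using the toy numbers above (99% replacement credence, N = 10^35, C = 10^10):

```python
# Numeric check of the three quantities above, with N = 1e35 future lives,
# C = 1e10 current lives, and a 99% chance that a death's lineage is replaced.

N, C = 1e35, 1e10
p_replaced = 0.99

# Value of saving one life: ~1 life if the lineage is replaced, ~N/C lives if it is not.
outcomes = [(p_replaced, 1.0), (1 - p_replaced, N / C)]
value_stop_extinction = N  # treated as a constant here

e_save_one = sum(p * v for p, v in outcomes)
ratio_of_expectations = e_save_one / value_stop_extinction               # E[A] / E[B]
e_of_ratio = sum(p * (v / value_stop_extinction) for p, v in outcomes)   # E[A / B], B constant
e_of_inverse_ratio = sum(p * (value_stop_extinction / v) for p, v in outcomes)  # E[B / A]

print(f"E[saving one life]                      ~ {e_save_one:.1e}")
print(f"E[saving one life] / E[stop extinction] ~ {ratio_of_expectations:.1e}")
print(f"E[saving one life / stop extinction]    ~ {e_of_ratio:.1e}")
print(f"E[stop extinction / saving one life]    ~ {e_of_inverse_ratio:.1e}")
```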
Thanks, this back and forth is very helpful. I think I’ve got a clearer idea about what you’re saying.
I think I disagree that it’s reasonable to assume that there will be a fixed N = 10^35 future lives, regardless of whether it ends up Malthusian. If it ends up not Malthusian, I think I’d expect the number of people in the future to be far less than whatever the max imposed by resource constraints is, ie much less than 10^35.
So I think that changes the calculation of E[saving one life], without much changing E[preventing extinction], because you need to split out the cases where Malthusianism is true vs false.
E[saving one life] is 1 if Malthusianism is true, or some fraction of the future if Malthusianism is false; but if it's false, then we should expect the future to be much smaller than 10^35. So the EV will be much less than 10^35.
E[preventing extinction] is 10^35 if Malthusianism is true, and much less if it’s false. But you don’t need that high a credence to get an EV around 10^35.
So I guess all that to say that I think your argument is right and also action relevant, except I think the future is much smaller in non-Malthusian worlds, so there’s a somewhat bigger gap than “just” 10^10. I’m not sure how much bigger.
What do you think about that?
Edit: I misread and thought you were saying non-Malthusian worlds had more lives at first; realized you said the opposite, so we’re saying the same thing and we agree. Will have to do more math about this.
This is an interesting point that I hadn't considered! I think you're right that non-Malthusian futures are much larger than Malthusian futures in some cases...though if, e.g., the "Malthusian" constraint is digital lives or some such, I'm not sure.
I think the argument you make actually cuts the other way. To go back to the expected value...the case the single death derives its EV from is precisely the non-Malthusian scenario (where its progeny is not replaced by future progeny), so its EV actually stays the same. The extinction EV is the one that shrinks...so you'll actually get a number much less than 10^10 if you have high credence that Malthusianism is true and think non-Malthusian worlds have more people.
But if you believe the opposite...that Malthusian worlds have more people, which I had not thought about but actually think might be true, then yes, a bigger gap than 10^10; I will have to think about this.
Thanks! Does this make sense to you?
We’ve talked about this, but I wanted to include my two counterarguments as a comment to this post:
1. It seems like there's a good likelihood that we have semi-Malthusian constraints nowadays. While I would admit that one should be skeptical of total Malthusianism (i.e., for every person dying another one lives because we are at max carrying capacity), I think it is much more reasonable to think that carrying-capacity constraints actually do exist, and maybe it's something like for every death you get 0.2 replacement lives. If this is true, I think this argument weakens a bunch (see the sketch after these two points).
2. This argument only works if, conditional on existential risk not happening, we don't hit Malthusian constraints at any point in the future, which seems quite implausible. If we don't get existential risk and the pie just keeps growing, it seems like we would get super-abundance, and the only thing holding people back would be Malthusian physical constraints on creating happy people. Therefore, we just need some people to live past that time of super-abundance to get massive growth. Additionally, even if you think those people wouldn't have kids (which I find pretty implausible, since one person's preference for children would lead to many kids given abundance), you could point to those lives being extremely happy, which holds most of the weight. This also
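One very simple way to put numbers on the "0.2 lives per death" idea in point 1, assuming a single replacement fraction r applied to the whole lost lineage (a model where the shortfall keeps being partially refilled in each later generation would behave differently):

```python
# Illustrative sketch of the first counterargument: suppose each death's lost lineage
# is eventually offset by a single replacement fraction r (r = 1 is full Malthusian
# replacement, r = 0 is none). Then one death removes about 1 + (1 - r) * N / C lives.
# N and C are reused from the post; the r values below are just examples.

N, C = 1e35, 1e10

def extinction_vs_one_death(r: float) -> float:
    """How many times worse extinction (N lives) is than one death at replacement rate r."""
    harm_one_death = 1 + (1 - r) * (N / C)  # the person themselves plus the unreplaced lineage
    return N / harm_one_death

for r in [0.0, 0.2, 0.9, 0.99]:
    print(f"replacement rate {r}: extinction ~{extinction_vs_one_death(r):.2e} times worse")
```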
Side note: this argument seems to rely on some ideas about astronomical waste that I won't discuss here (I also haven't done much thinking on the topic), but it might be worth framing it around that debate.