Ah nice, thanks for explaining! I’m not following all the calculations still, but that’s on me, and I think they’re probably right.
But I don’t think your argument is actually that relevant to what we should do, even if it’s right. That’s because we don’t care about how good our actions are as a fraction/multiple of what our other options are. Instead, we just want to do whatever leads to the best expected outcomes.
Suppose there was a hypothetical world where there was a one in ten chance the total population was a billion, and a 90% chance the population was two. And suppose we have two options: save one person, or save half the people.
In that case, the expected value of saving half the people would be 0.9*1 + 0.1*500,000,000 ≈ 50,000,001 lives. That's compared to an expected value of 1 for saving one person. Imo, this is a strong reason for picking the "save half the people" option.
But the expected fraction of people saved tells quite a different story. The "save half" option always results in half being saved. And the expected fraction for the "save one" option is also very close to half: 0.9*0.5 + 0.1*(1/1,000,000,000) ≈ 0.45. So even though the two interventions look very similar from this perspective, I think this perspective is basically irrelevant: expected value is the relevant thing.
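To make the contrast concrete, here's a quick sketch of both calculations (just the toy numbers from above, nothing new assumed):

```python
# Toy world: 10% chance the population is 1e9, 90% chance it is 2.
p_big, pop_big = 0.1, 1_000_000_000
p_small, pop_small = 0.9, 2

# Expected number of lives saved by each option.
ev_save_half = p_small * (pop_small / 2) + p_big * (pop_big / 2)
ev_save_one = p_small * 1 + p_big * 1

# Expected *fraction* of the population saved by each option.
frac_save_half = p_small * 0.5 + p_big * 0.5
frac_save_one = p_small * (1 / pop_small) + p_big * (1 / pop_big)

print(ev_save_half)    # ≈ 50,000,001: hugely better in EV terms
print(ev_save_one)     # 1.0
print(frac_save_half)  # 0.5
print(frac_save_one)   # ≈ 0.45: similar to 0.5 by the fraction metric
```

So the two options are within ~10% of each other on the fraction metric while differing by a factor of ~5*10^7 in expected lives saved.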
What do you think? I might well have made a mistake, or misunderstood still.
Hmm, I’m not sure I understand your point, so maybe let me add some more numbers to what I’m saying, and you can say whether you think your point is responsive?
What I think you’re saying is that I’m estimating E[value saving one life / value stopping extinction] rather than E[value of saving one life] / E[value of stopping extinction]. I think this is wrong and that I’m doing the latter.
I start from the premise that we want to save the most lives in expectation (current and future lives count equally). Say I have two options…I can prevent extinction, or I can directly stop a random living person from dying. Assume there are 10^35 future lives (I just want N >> C) and 10^10 current lives. Now assume I believe there is a 99% chance that, when I save this one life, fertility in the future somehow goes up such that the individual’s progeny are replaced, but a 1% chance the individual’s progeny are not replaced. The individual is responsible for 10^35/10^10 = 10^25 progeny. This gives E[stopping a random living person from dying] ~ 1% * 10^25 = 10^23.
And we’d agree E[preventing extinction] = 10^35. So E[value of saving one life] / E[value of stopping extinction] ~ 10^-12.
Interestingly, E[value of saving one life / value of stopping extinction] is the same (10^-12) in this case, because the denominator is just a constant random variable…though E[value of stopping extinction / value of saving one life] is very large (much larger than 10^12).
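A quick numerical check of all three quantities, using the same assumptions as above (N = 10^35 future lives, C = 10^10 current lives, 1% chance the saved person's progeny are not replaced):

```python
N = 1e35            # future lives
C = 1e10            # current lives
p_not_replaced = 0.01

# Value of saving one life: ~1 if progeny are replaced, N/C if not.
ev_save_life = (1 - p_not_replaced) * 1 + p_not_replaced * (N / C)
ev_prevent_extinction = N  # constant in this model

# Ratio of expectations: E[X] / E[Y].
ratio_of_evs = ev_save_life / ev_prevent_extinction

# Expectation of the ratio, E[X / Y]: same here, since Y is constant.
ev_of_ratio = (1 - p_not_replaced) * (1 / N) + p_not_replaced * ((N / C) / N)

# E[Y / X] is dominated by the 99% branch where X = 1.
ev_inverse_ratio = (1 - p_not_replaced) * (N / 1) + p_not_replaced * (N / (N / C))

print(ratio_of_evs)      # ≈ 1e-12
print(ev_of_ratio)       # ≈ 1e-12, same as above
print(ev_inverse_ratio)  # ≈ 9.9e34, vastly larger than 1e12
```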
Thanks, this back and forth is very helpful. I think I’ve got a clearer idea about what you’re saying.
I think I disagree that it’s reasonable to assume there will be a fixed N = 10^35 future lives regardless of whether the future ends up Malthusian. If it ends up non-Malthusian, I’d expect the number of people in the future to be far less than whatever max is imposed by resource constraints, i.e. much less than 10^35.
So I think that changes the calculation of E[saving one life], without much changing E[preventing extinction], because you need to split out the cases where Malthusianism is true vs false.
E[saving one life] is 1 if Malthusianism is true, or some fraction of the future if Malthusianism is false; but if it’s false, then we should expect the future to be much smaller than 10^35. So the EV will be much less than the 10^23 from before.
E[preventing extinction] is 10^35 if Malthusianism is true, and much less if it’s false. But you don’t need that high a credence in Malthusianism to get an EV around 10^35.
So all that is to say: I think your argument is right and also action-relevant, except I think the future is much smaller in non-Malthusian worlds, so there’s a somewhat bigger gap than “just” 10^10. I’m not sure how much bigger.
Edit: I misread and thought you were saying non-Malthusian worlds had more lives at first; realized you said the opposite, so we’re saying the same thing and we agree. Will have to do more math about this.
This is an interesting point that I hadn’t considered! I think you’re right that non-Malthusian futures are much larger than Malthusian futures in some cases...though if, e.g., the “Malthusian” constraint is on digital lives or such, I’m not sure.
I think the argument you make actually cuts the other way. Going back to the expected value: the case the single death derives its EV from is precisely the non-Malthusian scenario (where the individual’s progeny are not replaced), so its EV actually stays the same. The extinction EV is the one that shrinks...so you’ll actually get a number much less than 10^10 if you have high credence that Malthusianism is true and think Malthusian worlds have fewer people.
But if you believe the opposite...that Malthusian worlds have more people, which I had not thought about but actually think might be true, then yes, a bigger gap than 10^10; I’ll have to think about this.
No, because you have to compare the two harms.
Take the number of future lives as N and the current population as C.
Extinction is as bad as N lives lost.
One death is w/ 10% credence only approx as bad as 1 death bc Malthusianism. But w/ 90% credence, it is as bad as N/C lives lost.
So, plugging in 10^35 for N and 10^10 for C, the EV of one death is 1*(0.1) + (N/C)*(0.9) ~ N/C * 0.9 ~ 9e24, so extinction (at 10^35) is about 1.1e10 times worse than one death.
In general, if you have credence p in Malthusianism, extinction becomes 10^10 * 1/(1-p) times worse than one death.
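Sketching that last step in code (p is the credence in Malthusianism; N and C as before):

```python
N = 1e35  # future lives
C = 1e10  # current lives

def extinction_to_one_death_ratio(p):
    """How many times worse extinction is than one death,
    given credence p that Malthusianism is true."""
    # With credence p the death costs ~1 life; otherwise it costs N/C.
    ev_one_death = p * 1 + (1 - p) * (N / C)
    return N / ev_one_death

# With p = 0.1, the EV of one death is ~9e24 and the ratio is ~1.1e10,
# matching the closed form C / (1 - p).
print(extinction_to_one_death_ratio(0.1))
print(C / (1 - 0.1))  # ≈ the same
```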