Using PDF rather than CDF to compare the cost-effectiveness of preventing events of different magnitudes here seems off.
You show that preventing (say) all potential wars next year with a death toll of 100 is 1000^1.6 = 63,000 times better in expectation than preventing all potential wars with a death toll of 100k.
More realistically, intervention A might decrease the probability of wars of magnitude 10-100 deaths and intervention B might decrease the probability of wars of magnitude 100,000 to 1,000,000 deaths. Suppose they decrease the probability of such wars over the next n years by the same amount. Which intervention is more valuable? We would use the same methodology as you did except we would use the CDF instead of the PDF. Intervention A would be only 1000^0.6 = 63 times as valuable.
As an intuition pump we might look at the distribution of military deaths in the 20th century. Should the League of Nations/UN have spent more effort preventing small wars and less effort preventing large ones?
The data actually makes me think that even the 63x from above is too high. I would say that in the 20th century, great-power conflict > interstate conflict > intrastate conflict should have been the order of priorities (if we wish to reduce military deaths). When it comes to things that could be even deadlier than WWII, like nuclear war or a pandemic, it’s obvious to me that the uncertainty about the death toll of such events increases at least linearly with the expected toll, and hence the “100-1000 vs 100k-1M” framing is superior to the PDF approach.
Thanks for the comment, Stan!
Using PDF rather than CDF to compare the cost-effectiveness of preventing events of different magnitudes here seems off.
Technically speaking, the way I modelled the cost-effectiveness:
I am not comparing the cost-effectiveness of preventing events of different magnitudes.
Instead, I am comparing the cost-effectiveness of saving lives in periods of different population losses.
Using the CDF makes sense for the former, but the PDF is adequate for the latter.
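To spell out the distinction, here is a minimal sketch, assuming war deaths $X$ follow a Pareto distribution with tail index $\alpha$ and minimum deaths $d_m$ (as in the war example below):

$f(x) = \frac{\alpha d_m^{\alpha}}{x^{\alpha + 1}}, \qquad P(X \geq x) = \left(\frac{d_m}{x}\right)^{\alpha}.$

On this framing, the cost-effectiveness of saving lives in periods with exactly $x$ deaths scales with the density $f(x)$, whereas the cost-effectiveness of preventing events of at least $x$ deaths scales with the tail probability $P(X \geq x)$.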
You show that preventing (say) all potential wars next year with a death toll of 100 is 1000^1.6 = 63,000 times better in expectation than preventing all potential wars with a death toll of 100k.
I agree the above follows from using my tail index of 1.6. It is just worth noting that the wars have to involve exactly, not at least, 100 and 100 k deaths for the above to be correct.
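As a rough worked comparison under that Pareto assumption with $\alpha = 1.6$ (the constants cancel in the ratios), the expected deaths averted by preventing all potential wars with exactly $x$ deaths scale with $x f(x) \propto x^{-1.6}$, so wars of 100 deaths come out $\left(\frac{100}{10^5}\right)^{-1.6} = 1000^{1.6} \approx 63{,}000$ times better to prevent than wars of 100 k deaths. For wars with at least those death tolls, the expected deaths averted scale with $\int_x^{\infty} t f(t) \, dt \propto x^{-0.6}$, and the ratio is only $1000^{0.6} \approx 63$.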
More realistically, intervention A might decrease the probability of wars of magnitude 10-100 deaths and intervention B might decrease the probability of wars of magnitude 100,000 to 1,000,000 deaths. Suppose they decrease the probability of such wars over the next n years by the same amount. Which intervention is more valuable? We would use the same methodology as you did except we would use the CDF instead of the PDF. Intervention A would be only 1000^0.6 = 63 times as valuable.
This is not quite correct. The expected deaths from wars with $d_1$ to $d_2$ deaths are $\int_{d_1}^{d_2} x \frac{\alpha d_m^{\alpha}}{x^{\alpha + 1}} dx = \frac{\alpha d_m^{\alpha} (d_1^{1 - \alpha} - d_2^{1 - \alpha})}{\alpha - 1}$, where $d_m$ is the minimum war deaths. So, for a tail index of $\alpha = 1.6$, intervention A would be 251 (= (10^-0.6 - 100^-0.6)/((10^5)^-0.6 - (10^6)^-0.6)) times as cost-effective as B. As the upper bounds of the severity ranges of A and B get increasingly close to their lower bounds, the cost-effectiveness of A tends to 63 k times that of B. In any case, the qualitative conclusion is the same: preventing smaller wars averts more deaths in expectation, assuming war deaths follow a power law.
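For reference, a quick numerical check of the closed form and the 251 figure; this is a sketch assuming a Pareto distribution with tail index 1.6, and the assumed minimum deaths d_min cancels in the ratio:

```python
from scipy.integrate import quad

alpha = 1.6   # tail index used in the post
d_min = 1.0   # assumed minimum war deaths; it cancels in the ratio below

def pdf(x):
    """Pareto PDF with tail index alpha and minimum d_min."""
    return alpha * d_min**alpha / x**(alpha + 1)

def expected_deaths(d1, d2):
    """Closed form for the expected deaths from wars with d1 to d2 deaths."""
    return alpha * d_min**alpha * (d1**(1 - alpha) - d2**(1 - alpha)) / (alpha - 1)

# Check the closed form against direct numerical integration of x * pdf(x).
assert abs(expected_deaths(10, 100) - quad(lambda x: x * pdf(x), 10, 100)[0]) < 1e-6

# Intervention A (10 to 100 deaths) vs intervention B (100 k to 1 M deaths).
print(expected_deaths(10, 100) / expected_deaths(1e5, 1e6))  # ~251
```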
As an intuition pump we might look at the distribution of military deaths in the 20th century. Should the League of Nations/UN have spent more effort preventing small wars and less effort preventing large ones?
I do not know. Instead of relying on past deaths alone, I would rather use cost-effectiveness analyses to figure out what is more cost-effective, as the Centre for Exploratory Altruism Research (CEARCH) does. I just think it is misleading to directly compare the scale of different events without accounting for their likelihood, as in the example from Founders Pledge’s report Philanthropy to the Right of Boom I mention in the post.
When it comes to things that could be even deadlier than WWII, like nuclear war or a pandemic, it’s obvious to me that the uncertainty about the death toll of such events increases at least linearly with the expected toll, and hence the “100-1000 vs 100k-1M” framing is superior to the PDF approach.
I am also quite uncertain about the death toll of catastrophic events! I used the PDF to remain consistent with Founders Pledge’s example, which compared discrete death tolls (not ranges).
Thanks for the detailed response, Vasco! Apologies in advance that this reply is slightly rushed and scattershot.
I agree that you are right with the maths—it is 251x, not 63,000x.
I am not comparing the cost-effectiveness of preventing events of different magnitudes.
Instead, I am comparing the cost-effectiveness of saving lives in periods of different population losses.
OK, I did not really get this!
In your example on wars you say
As a consequence, if the goal is minimising war deaths[2], spending to save lives in wars 1 k times as deadly should be 0.00158 % (= (10^3)^(-1.6)) as large.
Can you give an example of what might count as “spending to save lives in wars 1k times as deadly” in this context?
I am guessing it is spending money now on things that would save lives in very deadly wars. Something like building a nuclear bunker vs making a bulletproof vest? Thinking about the amounts we might be willing to spend on interventions that save lives in 100-death wars vs 100k-death wars, it intuitively feels like 251x is a way better multiplier than 63,000. So where am I going wrong?
When you are thinking about the PDF of $P_i/P_f$, are you forgetting that $\nabla (P_i/P_f)$ is not proportional to $\nabla P_f$?
To give a toy example: suppose $P_i = 100$.
Then if $90 < P_f < 100$, we have $1 < P_i/P_f < 1.11$.
If $10 < P_f < 20$, we have $5 < P_i/P_f < 10$.
The “height of the PDF graph” will not capture these differences in width. This won’t matter much for questions of 100 vs 100k deaths, but it might be relevant for near-existential mortality levels.
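A minimal sketch of the point about widths, using the toy numbers above (initial population P_i = 100):

```python
P_i = 100.0  # initial population, as in the toy example

# Two equally wide ranges of the final population P_f map to very different
# ranges of the population ratio P_i / P_f.
for p_f_low, p_f_high in [(90.0, 100.0), (10.0, 20.0)]:
    ratio_low = P_i / p_f_high   # smallest P_i / P_f on the range
    ratio_high = P_i / p_f_low   # largest P_i / P_f on the range
    print(f"P_f in ({p_f_low:.0f}, {p_f_high:.0f}) -> "
          f"P_i/P_f in ({ratio_low:.2f}, {ratio_high:.2f}), "
          f"width {ratio_high - ratio_low:.2f}")

# Both P_f ranges are 10 wide, but the corresponding P_i/P_f ranges are about
# 0.11 and 5.00 wide, so the height of the PDF of P_i/P_f alone does not show
# how much of the P_f axis a given P_i/P_f interval covers.
```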
Can you give an example of what might count as “spending to save lives in wars 1k times as deadly” in this context?
For example, if one was comparing wars involving 10 k or 10 M deaths, the latter would be more likely to involve multiple great powers, in which case it would make more sense to improve relationships between NATO, China and Russia.
Thinking about the amounts we might be willing to spend on interventions that save lives in 100-death wars vs 100k-death wars, it intuitively feels like 251x is a way better multiplier than 63,000. So where am I going wrong?
You may be right! Interventions to decrease war deaths may be better conceptualised as preventing deaths within a given severity range, in which case I should not have interpreted the example in Founders Pledge’s report Philanthropy to the Right of Boom literally. In general, I think one has to rely on cost-effectiveness analyses to decide what to prioritise.
When you are thinking about the PDF of $P_i/P_f$, are you forgetting that $\nabla (P_i/P_f)$ is not proportional to $\nabla P_f$?
I am not sure I got the question. In my discussion of Founders Pledge’s example about war deaths, I assumed the value of saving one life to be the same regardless of population size, because this is what they were doing. So I did not use the ratio between the initial and final population.