Philosophically, we take the more conservative person-affecting view, looking specifically at the welfare of actual people, whether present or future – as opposed to contingent/merely potential people who would not exist if not for our intervention (or lack thereof).
Under the totalist view, this cause area would naturally be even more cost-effective – roughly 6.4x more, insofar as any person saved now will have children, who will go on to have children too and so on, such that (given expected future birth and death rates, plus relevant discount rates) counterfactually 6.4 lives are created/maintained by the averting of one death.
Impressive analysis on an important topic!

You only get a small ratio under the totalist view because you apply a constant exponential discount for existential risk. Most people think that if we make it through the next few centuries and start settling the galaxy, existential risk will fall dramatically, and so the expected number of human (or digital) lives becomes many orders of magnitude greater.

Using a person-affecting view, we found that for spending a few hundred million dollars on research, development and planning (you don't have to change the food system ahead of time to significantly increase the chance of a good outcome), the cost per life saved was $0.20 to $400, which is 1 to 4 orders of magnitude more cost-effective than GiveWell charities, so your number is near our most optimistic number.
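As a rough check of the "1 to 4 orders of magnitude" claim, here is a minimal sketch assuming a GiveWell benchmark of roughly $4,000 per life saved – that benchmark is my illustrative assumption, not a figure from this thread:

```python
import math

# ALLFED's estimated cost per life saved for resilience work (from the comment above)
cost_low, cost_high = 0.20, 400.0  # USD per life saved

# Illustrative assumption: GiveWell top charities are often cited at roughly
# $3,000-$5,500 per life saved; $4,000 is a midpoint I chose, not a source figure.
givewell_cost = 4_000.0

# Orders of magnitude by which each bound beats the benchmark
oom_best = math.log10(givewell_cost / cost_low)
oom_worst = math.log10(givewell_cost / cost_high)

print(f"{oom_worst:.1f} to {oom_best:.1f} orders of magnitude more cost-effective")
```

Under that assumption the bounds come out at roughly 1 and 4.3 orders of magnitude, consistent with the range stated above.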
If I understand you correctly:
This yields a probability of advocacy success of 17% [outside view]...
Multiplying these rates together yields the probability of persuading the United States, Russia and China to limit the size of their nuclear arsenals: 0.0000021% [inside view]...
Consequently, I end up weighing the far more conservative inside view more than the comparatively optimistic outside view – yielding a probability of advocacy success of 1.5%.
This could make sense if you started with an arithmetic mean. But with very large variation in size of numbers, the more appropriate mean is the geometric mean, which would be 0.006%. So then weighting the inside view similarly in logarithmic space as you have done in linear space could mean 0.00001% chance of success, which I think would then result in significantly worse cost-effectiveness than GiveWell.
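A quick sketch of the arithmetic here. The ~1/11 weight on the outside view is my inference from the 1.5% figure (0.17/11 ≈ 1.5%), not a weight stated explicitly above:

```python
import math

p_outside = 0.17   # outside-view probability of advocacy success
p_inside = 2.1e-8  # inside-view probability

# Arithmetic weighting that roughly recovers the report's 1.5%
# (the 1/11 weight is inferred, not stated in the report)
w = 1 / 11
arith = w * p_outside + (1 - w) * p_inside

# Unweighted geometric mean of the two views
geo = math.sqrt(p_outside * p_inside)

# The same 1/11-vs-10/11 weighting applied in log space
geo_weighted = p_outside**w * p_inside**(1 - w)

print(f"arithmetic (weighted):  {arith:.2%}")        # ~1.5%
print(f"geometric (unweighted): {geo:.4%}")          # ~0.006%
print(f"geometric (weighted):   {geo_weighted:.6%}") # ~0.00001%
```

The three printed values match the 1.5%, 0.006% and 0.00001% figures discussed in the thread.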
Big fan of ALLFED’s work! Good point on the issue of arithmetic vs geometric means – it’s something I’m trying to think more about. On falling discount rates: I may be wrong, but some of the testing I did suggests that declining discount rates don’t materially affect the headline cost-effectiveness estimate too much (since a lot of the discounting is already baked in at earlier years, and the effects are swamped in the long-run future by a constant uncertainty discount, as CEARCH uses).
Thanks!

Even though it is very unlikely that all three countries would dramatically reduce their arsenals if their decisions are uncorrelated, if they are correlated, I think it becomes much more likely. Also, if you could get just one country to reduce its arsenal, this would significantly reduce the expected damage of a nuclear war, so I think the cost-effectiveness would still be competitive.
As a simple example, if one thinks there is a 1% chance of settling the galaxy (lots of X risk, but then X security) with Dyson spheres that last 1 billion years, then I think this is around 10^33 expected future biological human lives. With digital minds, it would be far higher.
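One parameterization that reproduces the stated 10^33 order of magnitude. Every input below is an illustrative assumption of mine chosen to show how such a figure can arise; the comment above does not specify them:

```python
import math

# Back-of-envelope check of the 10^33 figure; all parameters are
# illustrative assumptions, not figures from the comment above.
p_settle = 0.01        # 1% chance of settling the galaxy
stars = 1e11           # rough star count of the Milky Way
duration_years = 1e9   # Dyson spheres lasting 1 billion years
lifespan_years = 100   # assumed biological human lifespan
pop_per_star = 1e17    # assumed population supported per Dyson sphere

expected_lives = p_settle * stars * (duration_years / lifespan_years) * pop_per_star
print(f"expected future lives ~ 10^{round(math.log10(expected_lives))}")
```

Different (equally defensible) choices for population per star would shift the result by a few orders of magnitude, which is why the digital-minds version is said to be far higher still.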
Nice points, David!

So then weighting the inside view similarly in logarithmic space as you have done in linear space could mean 0.00001% chance of success, which I think would then result in significantly worse cost-effectiveness than GiveWell.

Right, then lobbying for arsenal limitation would become 3.12% (= 0.17^(1/11)*(2.1*10^-8)^(10/11)/0.015*5247) as cost-effective as GiveWell’s top charities.
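The 3.12% figure can be reproduced directly from the formula in the comment; the 5,247x and 1.5% inputs are taken from it, and the 1/11 outside-view weight is what that formula uses:

```python
p_outside = 0.17   # outside-view probability of advocacy success
p_inside = 2.1e-8  # inside-view probability
w = 1 / 11         # weight on the outside view, per the formula above

# Weighted geometric mean of the two views
p_geo = p_outside**w * p_inside**(1 - w)

# Rescale the report's cost-effectiveness (5,247x GiveWell at a 1.5%
# success probability) by the ratio of the new and old probabilities.
relative_ce = p_geo / 0.015 * 5247

print(f"{relative_ce:.2%} as cost-effective as GiveWell top charities")
```

This prints roughly 3.12%, matching the figure stated above.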
I’ve generally moved to the view that geomeans are better in cases where the different estimates don’t capture a real difference but rather a difference in methodology (while using the arithmetic makes sense when we are capturing a real difference, e.g. if an intervention affects a bunch of people differently).
In any case, this report is definitely superseded/out-of-date; Stan’s upcoming final report on abrupt sunlight reduction scenarios is far more representative of CEARCH’s current thinking on the issue. (Thanks for your inputs on ASRS, by the way, Vasco!)
I’ve generally moved to the view that geomeans are better in cases where the different estimates don’t capture a real difference but rather a difference in methodology (while using the arithmetic makes sense when we are capturing a real difference, e.g. if an intervention affects a bunch of people differently).
This makes sense to me.
In any case, this report is definitely superseded/out-of-date; Stan’s upcoming final report on abrupt sunlight reduction scenarios is far more representative of CEARCH’s current thinking on the issue.
Cool; I am looking forward to it! I assume you will also do an intermediate report on arsenal limitation at some point.