Hello again Alex,
You discuss the allocation of funds across your 2 main areas, global health and wellbeing (GHW) and global catastrophic risks (GCR), but (as before) you do not say anything about the allocation between animal and human interventions within the GHW portfolio. I assume you do not think the funding going towards animal welfare interventions should be greatly increased, but I would say you should at least be transparent about your views.
For reference, I estimate the cost-effectiveness of corporate campaigns for chicken welfare is 13.7 DALY/$ (= 0.01*1.37*10^3), i.e. 685 (= 13.7/0.02) times Open Philanthropy’s bar. I got that by multiplying:
The cost-effectiveness of GiveWell’s top charities of 0.01 DALY/$ (50 DALY per 5 k$), which is half of Open Philanthropy’s bar of 0.02 DALY/$.
My estimate of 1.37 k (= 1.71*10^3/0.682*2.73/5) for the ratio between the cost-effectiveness of corporate campaigns for chicken welfare and that of GiveWell’s top charities:
I calculated corporate campaigns for broiler welfare increase nearterm welfare 1.71 k times as cost-effectively as GiveWell’s then lowest cost to save a life of 3.5 k$, which corresponds to a cost-effectiveness of 0.286 life/k$ (= 1/3.5).
The current mean reciprocal of the cost to save a life of GiveWell’s 4 top charities is 0.195 life/k$ (= (3*1/5 + 1/5.5)/4), i.e. 68.2 % (= 0.195/0.286) as high as the cost-effectiveness I just mentioned.
The ratio of 1.71 k in the 1st bullet pertains to campaigns for broiler welfare, but Saulius estimated ones for chicken welfare (broilers or hens) affect 2.73 (= 41/15) times as many chicken-years.
OP thinks “the marginal FAW [farmed animal welfare] funding opportunity is ~1/5th as cost-effective as the average from Saulius’ analysis”.
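The arithmetic in the bullets above can be sketched in Python. All figures come from the comment; since the 0.682 adjustment is recomputed from its rounded inputs, the final numbers can differ slightly in the last digit:

```python
# Cost-effectiveness of corporate campaigns for chicken welfare,
# following the steps in the comment above.
givewell_ce = 50 / 5_000             # 0.01 DALY/$ (50 DALY per 5 k$)
op_bar = 0.02                        # Open Philanthropy's bar (DALY/$)

broiler_ratio_then = 1.71e3          # broiler campaigns vs GiveWell's then-best charity
givewell_then = 1 / 3.5              # 0.286 life/k$ (3.5 k$ per life)
givewell_now = (3 * 1/5 + 1/5.5) / 4 # 0.195 life/k$, mean over the 4 top charities
adjustment = givewell_now / givewell_then  # ~0.682

chicken_vs_broiler = 41 / 15         # 2.73 times as many chicken-years (Saulius)
marginal_discount = 1 / 5            # marginal FAW funding ~1/5 as cost-effective as average

ratio = broiler_ratio_then / adjustment * chicken_vs_broiler * marginal_discount
ce = givewell_ce * ratio             # DALY/$ of chicken welfare campaigns
multiple_of_bar = ce / op_bar        # how many times Open Philanthropy's bar
```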
Great post!
It is worth noting that most of the expected value of reducing existential risk comes from worlds where the time of perils hypothesis (TOP) is true and the post-peril risk is low (such that the longterm future should be discounted at the ~lowest possible rate). In this case, a reduction in existential risk over the next 100 years would not differ much from a reduction in total existential risk, and therefore the mistakes you mention do not apply.
To give an example: if existential risk is 10 % per century for 3 centuries[1], and then drops to roughly 0, the risk over the next 3 centuries is 27.1000 % (= 1 - (1 − 0.1)^3). If one decreases bio risk by 1 % for 1 century, from 1 % to 0.99 % (i.e. by 0.01 pp), the new risk for the next century would be 9.99 % (= 10 − 0.01), and the new risk for the next 3 centuries would be 27.0919 % (= 1 - (1 − 0.0999)*(1 − 0.1)^2). Therefore the reduction of the total risk would be 0.0081 pp (= 27.1000 − 27.0919), i.e. very similar to the reduction of bio risk during the next century of 0.01 pp.
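The example can be checked with a few lines of Python, using the same figures (10 % risk per century for 3 centuries, then ~0 under the time of perils hypothesis):

```python
# Total existential risk over 3 centuries at 10 % per century.
risk_per_century = 0.10
total_risk = 1 - (1 - risk_per_century)**3               # 27.1000 %

# Reduce bio risk by 0.01 pp in the 1st century only.
new_first_century = risk_per_century - 0.0001            # 9.99 %
new_total = 1 - (1 - new_first_century) * (1 - risk_per_century)**2  # 27.0919 %

# Reduction in total risk, in percentage points.
reduction_pp = (total_risk - new_total) * 100            # ~0.0081 pp
```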
As a result, under TOP, I think reducing bio existential risk by 0.01 pp roughly decreases total existential risk by 0.01 pp. For the conservative estimate of 10^28 expected future lives given in Newberry 2021 (Table 3), that would mean saving 10^24 (= 10^(28 − 4)) lives, or 4*10^12 life/$ (= 10^24/(250*10^9)). Even if TOP only has a 1 in a trillion chance of being true, the cost-effectiveness would be 4 life/$, over 4 OOMs better than GiveWell’s top charities’ cost-effectiveness of 2.5*10^-4 life/$ (= 1/4000).
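The expected-value arithmetic above can be sketched as follows; the 250 G$ cost is the spending figure implied by the comment's formula, and the other inputs are as stated:

```python
# Cost-effectiveness of reducing bio existential risk by 0.01 pp under TOP.
future_lives = 1e28          # conservative estimate from Newberry 2021 (Table 3)
risk_reduction = 1e-4        # 0.01 pp reduction in total existential risk
cost = 250e9                 # assumed spending in $, per the formula in the comment

lives_saved = risk_reduction * future_lives  # 10^24 lives
ce = lives_saved / cost                      # 4*10^12 life/$
ce_if_top_unlikely = 1e-12 * ce              # 4 life/$ at a 1-in-a-trillion chance of TOP
givewell_ce = 1 / 4000                       # 2.5*10^-4 life/$
```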
On the one hand, I am very uncertain about how high bio existential risk is this century. If it is something like 10^-6 (i.e. 0.01 % of what I assumed above), the cost-effectiveness of reducing bio risk would be similar to that of GiveWell’s top charities. On the other hand, a 1 in a trillion chance of TOP being true sounds too low, and a future value of 10^28 lives is probably an underestimate. Overall, I guess longtermist interventions will tend to be much more cost-effective.
FWIW, I liked David’s series on Existential risk pessimism and the time of perils. I agree there is a tension between high existential risk this century, and TOP being reasonably likely. I guess existential risk is not as high as commonly assumed, because superintelligent AI disempowering humans does not have to lead to loss of value under moral realism, but I do not know.
In The Precipice, Toby Ord guesses total existential risk to be 3 times (= (1/2)/(1/6)) that from 2021 to 2120.