Hi Ben,
I would be curious to understand why you continue to focus exclusively on philanthropic funding. I think a 100 % reduction in philanthropic funding would only be a 1.16 % (= 0.047/4.04) relative reduction in total funding:
According to Founders Pledge’s report on nuclear risk, “total philanthropic nuclear security funding stood at about $47 million per year [“between 2014 and 2020”]”.
Based on 80,000 Hours’ profile on nuclear war, I estimate total funding is 4.04 G$, which I got from the mean of a lognormal distribution whose 5th and 95th percentiles equal the profile’s lower and upper bounds of 1 and 10 G$ (see the sketch below).
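For transparency, here is a minimal sketch of that calculation in Python (the 1 and 10 G$ bounds are from the profile; everything else is standard lognormal algebra):

```python
import numpy as np
from scipy.stats import norm

p5, p95 = 1, 10  # bounds on total funding from 80,000 Hours' profile (G$)
z = norm.ppf(0.95)  # z-score of the 95th percentile, about 1.645

# Lognormal parameters matching the 5th and 95th percentiles.
mu = (np.log(p5) + np.log(p95)) / 2
sigma = (np.log(p95) - np.log(p5)) / (2 * z)

mean = np.exp(mu + sigma**2 / 2)
print(mean)          # about 4.04 (G$)
print(0.047 / mean)  # about 0.0116, i.e. the 1.16 % relative reduction
```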
Focussing on a large relative reduction of a minor fraction of the funding makes it look like neglectedness increased a lot, but this is not the case based on the above. I think it is better to consider spending from other sources because these also contribute towards decreasing risk. In addition, I would not weight spending by cost-effectiveness (and much less give 0 weight to spending not aligned with effective altruism[1]), as this is what one is trying to figure out when using spending/neglectedness as a heuristic.
More importantly, I think you had better focus on assessing the cost-effectiveness of representative promising interventions rather than on funding:
Cost-effectiveness is what one ultimately cares about.
Cost-effectiveness can be relatively easily estimated for interventions aiming to decrease global catastrophic risk, which requires saving lives in expectation.
You think differences in cost-effectiveness across areas are much more significant than ones across interventions within an area:
“Perhaps the top 2.5% of measurable interventions within a cause area are actually 3–10 times better than the mean of measurable interventions [...]”.
“[...] in terms of effectiveness, it’s more important to choose the right broad area to work in than it is to identify the best solution within a given area”.
The level of funding is subject to quite arbitrary boundaries around what is considered nuclear security.
Likewise for the other 4 of 80,000 Hours’ most pressing problems. For example, I assume the funding of and number of people working on AI safety are pretty sensitive to what is considered safety instead of capabilities, and it looks like there is not a clear distinction between the 2.
Christian Ruhl estimated that doubling nuclear risk reduction spending (which he mentions was 32.1 M$ in 2021) would save a life for 1.55 k$, which corresponds to a cost-effectiveness around 3.23 (= 5/1.55) times that of GiveWell’s top charities, which save a life for roughly 5 k$. I think corporate campaigns for chicken welfare are 1.44 k times as cost-effective as GiveWell’s top charities, and therefore 446 (= 1.44*10^3/3.23) times as cost-effective as what Christian got for doubling nuclear risk reduction spending (see the arithmetic below).
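Spelling out the arithmetic (the 5 k$ per life for GiveWell’s top charities is the benchmark implied by Christian’s comparison; the other figures are as quoted above):

```python
givewell_cost_per_life = 5.0   # k$, rough benchmark for GiveWell's top charities
nuclear_cost_per_life = 1.55   # k$, Christian Ruhl's estimate for doubling spending

nuclear_multiple = givewell_cost_per_life / nuclear_cost_per_life
campaigns_multiple = 1.44e3  # corporate campaigns relative to GiveWell's top charities

print(nuclear_multiple)                       # about 3.23
print(campaigns_multiple / nuclear_multiple)  # about 446
```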
I think it makes sense to evaluate interventions which aim to decrease nuclear risk in terms of lives saved (or similar) instead of reductions in extinction risk:
I estimated a nearterm annual risk of human extinction from nuclear war of 5.93*10^-12, which is astronomically low.
Interventions decreasing the probability of a given relative reduction in population or economic activity (which is how global catastrophic risk is usually defined) still have to save lives in expectation. So one could simply determine their impact in terms of lives saved, but weight more heavily lives saved at a lower population size.
As a side note, I tried this, weighting lives saved by the reciprocal of the population size, and concluded that saving lives at higher population sizes is more cost-effective assuming the ratio between the initial and final population follows a power law (see the sketch below).
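To illustrate the weighting itself (the normalisation to a reference population of 8 billion is my own hypothetical choice, and the sketch does not model the power-law assumption driving the conclusion):

```python
CURRENT_POPULATION = 8e9  # hypothetical reference point

def weighted_lives_saved(lives, population):
    """Weight lives saved by the reciprocal of the population size,
    normalised so a life saved at today's population has weight 1."""
    return lives * CURRENT_POPULATION / population

# One life saved today vs. one saved after a collapse to 10 % of
# today's population: the latter counts 10 times as much.
print(weighted_lives_saved(1, 8e9))  # 1.0
print(weighted_lives_saved(1, 8e8))  # 10.0
```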
I do not think you are doing this here, but I seem to recall cases where only the amount of spending coming from sources aligned with effective altruism was highlighted.
I don’t focus exclusively on philanthropic funding. I added these paragraphs to the post to clarify my position:
I agree that a full accounting of neglectedness should consider all resources going towards the cause (not just philanthropic ones), and that ‘preventing nuclear war’ more broadly receives significant attention from defence departments. However, even considering those resources, it still seems about as neglected as biorisk.
And the amount of philanthropic funding still matters because certain important types of work in the space can only be funded by philanthropists (e.g. lobbying or other policy efforts you don’t want to originate within a certain national government).
I’d add that if there’s almost no EA-inspired funding in a space, there are likely to be some promising gaps to be found by someone applying that mindset.
In general, it’s a useful approximation to think of neglectedness as a single number, but the ultimate goal is to find good grants, and to do that it’s also useful to break down neglectedness into different types of resources, and consider related heuristics (e.g. that there was a recent drop).
--
Causes vs. interventions more broadly is a big topic. The very short version is that I agree doing cost-effectiveness estimates of specific interventions is a useful input into cause selection. However, I also think the INT framework is very useful. One reason is it seems more robust. Another reason is that in many practical planning situations that involve accumulating expertise over years (e.g. choosing a career, building a large grantmaking programme) it seems better to focus on a broad cluster of related interventions.
E.g. you could do a cost-effectiveness estimate of corporate campaigns and determine ending factory farming is most cost-effective. But once you’ve spent 5 years building career capital in factory farming, the available interventions or your calculations about them will likely be very different.
Thanks for clarifying, Ben!
I’d add that if there’s almost no EA-inspired funding in a space, there are likely to be some promising gaps to be found by someone applying that mindset.
Agreed, although my understanding is that you think the gains are often exaggerated. You said:
Overall, my guess is that, in an at least somewhat data-rich area, using data to identify the best interventions can perhaps boost your impact in the area by 3–10 times compared to picking randomly, depending on the quality of your data.
Again, if the gain is just a factor of 3 to 10, then it makes complete sense to me to focus on cost-effectiveness analyses rather than on funding.
In general, it’s a useful approximation to think of neglectedness as a single number, but the ultimate goal is to find good grants, and to do that it’s also useful to break down neglectedness into different types of resources, and consider related heuristics (e.g. that there was a recent drop).
Agreed. However, deciding how much to weight a given relative drop in a fraction of funding (e.g. philanthropic funding) requires understanding its cost-effectiveness relative to other sources of funding. In this case, it seems more helpful to assess the cost-effectiveness of e.g. doubling philanthropic nuclear risk reduction spending instead of just quantifying it.
Causes vs. interventions more broadly is a big topic. The very short version is that I agree doing cost-effectiveness estimates of specific interventions is a useful input into cause selection. However, I also think the INT framework is very useful. One reason is it seems more robust.
The product of the 3 factors in the importance, neglectedness and tractability framework is the marginal cost-effectiveness of the area, so I think the increased robustness comes from considering many interventions. However, one could also (qualitatively or quantitatively) aggregate the cost-effectiveness of multiple (decently scalable) representative promising interventions to estimate the overall marginal cost-effectiveness (promisingness) of the area.
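For reference, here is the factorisation I have in mind, following how 80,000 Hours define the 3 factors:

$$\underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{importance}} \times \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{tractability}} \times \underbrace{\frac{\text{\% increase in resources}}{\text{extra resources}}}_{\text{neglectedness}} = \frac{\text{good done}}{\text{extra resources}}.$$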
Another reason is that in many practical planning situations that involve accumulating expertise over years (e.g. choosing a career, building a large grantmaking programme) it seems better to focus on a broad cluster of related interventions.
I agree, but I did not mean to argue for deemphasising the concept of cause area. I just think the promisingness of areas had better be assessed by doing cost-effectiveness analyses of representative (decently scalable) promising interventions.
E.g. you could do a cost-effectiveness estimate of corporate campaigns and determine ending factory farming is most cost-effective.
To clarify, the estimate for the cost-effectiveness of corporate campaigns I shared above refers to marginal cost-effectiveness, so it does not directly refer to the cost-effectiveness of ending factory farming (which is far from a marginal intervention).
But once you’ve spent 5 years building career capital in factory farming, the available interventions or your calculations about them will likely be very different.
My guess would be that the acquired career capital would still be quite useful in the context of the new top interventions, especially considering that welfare reforms have been top interventions for more than 5 years[1]. In addition, if Open Philanthropy is managing their funds well, (all things considered) marginal cost-effectiveness should not vary much across time. If the top interventions in 5 years were expected to be less cost-effective than the current top interventions, it would make sense to direct funds from the worst/later to the best/earlier years until marginal cost-effectiveness is equalised (in the same way that it makes sense to direct funds from the worst to best interventions in any given year).
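Here is a minimal sketch of the equalisation argument (the diminishing-returns curve and all numbers are hypothetical):

```python
# With diminishing returns, a funder maximising impact allocates each
# marginal dollar to whichever year has the higher marginal
# cost-effectiveness, which drives the marginal values together.

def marginal_ce(spend, scale):
    """Hypothetical diminishing marginal cost-effectiveness."""
    return scale / (1 + spend)

spend = {"now": 0.0, "in_5_years": 0.0}
scale = {"now": 10.0, "in_5_years": 5.0}  # 'now' starts out better

budget, step = 100.0, 0.1
while sum(spend.values()) < budget:
    best = max(spend, key=lambda y: marginal_ce(spend[y], scale[y]))
    spend[best] += step

print({y: round(marginal_ce(spend[y], scale[y]), 3) for y in spend})
# The two marginal cost-effectiveness values end up (nearly) equal.
```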
Open Phil granted 1 M$ to The Humane League’s cage free campaigns in 2016, 7 years ago. Saulius Šimčikas’ analysis of corporate campaigns looks into ones which happened as early as 2005, 19 years ago.