I think if you look at philanthropic neglectedness, the total sums across types of capital are not a good proxy. E.g., as far as I understand the nuclear risk landscape, it is true both that government spending is quite large and that there is almost no civil society spending. This means that additional philanthropic funding should be expected to be quite effective on neglectedness grounds. Many obvious things are not done.
The numbers on nuclear risk spending by 80k are entirely made up and not described otherwise (e.g. they do not cite a source and make no effort to justify the estimate; this is clearly a wild guess).
If one constructed a similar number for AI risk, it could also be in the billions, given it would presumably include things like the costs of government bureaucracies involved in tech regulation, emerging legislation, etc.
I am fairly convinced your basic point will stand, but it seems important to not overplay the degree to which nuclear risk is not neglected, and to not underplay the degree to which government actors and others are now paying attention to AI risk (obviously, this also needs to be quality discounted, but this discounting does not reduce the value much for nuclear in your estimate).
Thanks for elaborating.

I think if you look at philanthropic neglectedness, the total sums across types of capital are not a good proxy. E.g., as far as I understand the nuclear risk landscape, it is true both that government spending is quite large and that there is almost no civil society spending. This means that additional philanthropic funding should be expected to be quite effective on neglectedness grounds.
I got that this was your point, but I am not convinced it holds. I would be curious to understand which empirical evidence informs your views. Feel free to link to relevant pieces, but no worries if you do not want to engage further.
Many obvious things are not done.
I do not think this necessarily qualifies as satisfactory empirical evidence that philanthropic neglectedness means high marginal returns. There may be non-obvious reasons for the obvious interventions not having been picked. In general, I think that for any problem it is always possible to pick a neglected set of interventions, but that a priori we should assume diminishing returns in overall spending; otherwise, the government would fund the philanthropic interventions.
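To make my prior explicit, here is a minimal toy sketch. The functional form and parameters are my own assumptions, not anything from this thread; it just illustrates why, under diminishing returns, total spending by all actors drives marginal returns, unless philanthropic and government dollars buy different things:

```python
# Toy model (my assumption, purely for illustration): risk reduction is
# logarithmic in total spending, R(S) = ln(1 + S / s0), so marginal returns
# R'(S) = 1 / (s0 + S) fall roughly in proportion to total spending.
s0 = 1e6  # spending level at which returns start to flatten (arbitrary)

def marginal_return(total_spending):
    """Risk reduced per extra dollar at a given level of total spending."""
    return 1 / (s0 + total_spending)

government = 30e9      # "comfortably in the tens of billions" (80k, quoted below)
philanthropy = 0.12e9  # mid-range of the $57-190M philanthropic estimate

# If government and philanthropic dollars buy the same thing, both count:
ratio = marginal_return(government + philanthropy) / marginal_return(philanthropy)
print(f"{ratio:.3f}")  # ~0.004, i.e. marginal dollars look ~250x less effective

# If they buy different, non-substitutable things (the crux of this thread),
# only the much smaller philanthropic pool is the relevant denominator.
```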
The numbers on nuclear risk spending by 80k are entirely made up and not described otherwise (e.g. they do not cite a source and make no effort to justify the estimate; this is clearly a wild guess).
For reference, here is some more context from 80,000 Hours’ profile:
Who is working on this problem?
The area is a significant focus for governments, security agencies, and intergovernmental organisations.
Within the nuclear powers, some fraction of all work dedicated to foreign policy, diplomacy, military, and intelligence is directed at ensuring nuclear war does not occur. While it is hard to know exactly how much, it is likely to be in the billions of dollars or more in each country.
The US budget for nuclear weapons is comfortably in the tens of billions. Some significant fraction of this is presumably dedicated to control, safety, and accurate detection of attacks on the US.
In addition to this, some intergovernmental organisations devote substantial funding to nuclear security issues. For example, in 2016, the International Atomic Energy Agency had a budget of €361 million. Total philanthropic nuclear risk spending in 2021 was approximately $57–190 million.
The spending of 4.04 G$ I mentioned is just 4.87 % (= 4.04/82.9) of the 82.9 G$ cost of maintaining and modernising nuclear weapons in 2022.
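Spelling out the arithmetic:

```python
spending = 4.04      # G$, the figure I mentioned
maintenance = 82.9   # G$, maintaining and modernising nuclear weapons in 2022
print(f"{spending / maintenance:.2%}")  # 4.87%
```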
If one constructed a similar number for AI risk, it could also be in the billions, given it would presumably include things like the costs of government bureaucracies involved in tech regulation, emerging legislation, etc.
Good point. I guess the quality-adjusted contribution from those sources is currently small, but will become very significant in the next few years or decades.
I am fairly convinced your basic point will stand
Agreed. I estimated a difference of 8 orders of magnitude (a factor of 59.8 M) in the near-term annual extinction risk per unit of funding.
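For reference, the factor-to-OOM conversion:

```python
from math import log10
factor = 59.8e6  # the factor I estimated between the two risks per funding
print(round(log10(factor), 2))  # 7.78, i.e. roughly 8 OOMs
```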
it seems important to not overplay the degree to which nuclear risk is not neglected, and to not underplay the degree to which government actors and others are now paying attention to AI risk (obviously, this also needs to be quality discounted, but this discounting does not reduce the value much for nuclear in your estimate).
Agreed. On the other hand, I would rather see discussions move from neglectedness towards cost-effectiveness analyses.
but that a priori we should assume diminishing returns in overall spending; otherwise, the government would fund the philanthropic interventions.
I think this is fundamentally the crux: many of the most valuable philanthropic actions in domains with large government spending will likely be about challenging / advising / informationally lobbying the government in a way that governments cannot self-fund.
Indeed, when additional government funding does not reduce risk (i.e. does not reduce the importance of the problem) but is affectable, there are probably cases where you should get more excited about philanthropic funding, since it can leverage public funding as it increases.
(Last comment from me on this for time reasons)