But, in any case, I do not think this works, because philanthropic, private, and government dollars are not fungible, as all groups have different advantages and things they can and cannot do.
I think I should be considering all sources of funding. Everything else equal, I expect a problem A which receives little philanthropic funding, but lots of funding from other sources, to be less pressing than a problem B which receives little funding from both philanthropic and non-philanthropic sources. The difference between A and B will not be as large as naively expected because philanthropic and non-philanthropic spending are not fungible. However, if one wants to define neglectedness as referring to just the spending from one source, then the scale should also depend on the source, and sources with less spending will be associated with a smaller fraction of the problem.
In general, I feel like the case for using the importance, tractability and neglectedness framework is stronger at the level of problems. Once one starts thinking about considerations within the cause area and increasingly narrow sets of interventions, I would say it is better to move towards cost-effectiveness analyses.
So the nearterm annual extinction risk per annual spending for AI risk is 59.8 M (= 1.69*10^6*35.4) times that for nuclear risk.
Yet, given the above, I would say one should a priori expect efforts to decrease AI extinction risk to be more cost-effective at the current margin than ones to decrease nuclear extinction risk. Note: the sentence just above already includes the correction I will mention below.
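For concreteness, here is a minimal sketch of the arithmetic behind the factor of 59.8 M, assuming that 1.69*10^6 is the ratio of nearterm annual extinction risk (AI over nuclear) and that 35.4 is the ratio of annual spending (4.04 G$ on nuclear risk over the corrected 114 M$ on AI risk, both discussed below):

```python
# Sketch of the factor of 59.8 M (assumed decomposition; see the text above).
risk_ratio = 1.69e6          # nearterm annual extinction risk, AI over nuclear
spending_nuclear = 4.04e9    # annual spending on decreasing nuclear extinction risk, $
spending_ai = 114e6          # annual spending on decreasing AI extinction risk, $ (after the correction below)
spending_ratio = spending_nuclear / spending_ai
risk_per_spending_ratio = risk_ratio * spending_ratio
print(f"{spending_ratio:.1f}")             # 35.4
print(f"{risk_per_spending_ratio:.3g}")    # about 5.99e+07, i.e. roughly the 59.8 M quoted above (up to rounding)
```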
I can’t open the GDoc on AI safety research.
Sorry! I have fixed the link now.
If looking at all resources, then 80M for AI safety research also seems an underestimate as this presumably does not include the safety and alignment work at companies?
It actually did not include spending from for-profit companies. I thought it did because I had seen they estimated just a few tens of millions of dollars coming from them:
I think if you look at philanthropic neglectedness, the total sums across types of capital are not a good proxy. E.g., as far as I understand the nuclear risk landscape, it is both true that government spending is quite large but also that there is almost no civil society spending. This means that additional philanthropic funding should be expected to be quite effective on neglectedness grounds. Many obvious things are not done.
The numbers on nuclear risk spending by 80k are entirely made up and not described otherwise (e.g. they do not cite a source and make no effort to justify the estimate; this is clearly a wild guess).
If one constructed a similar number for AI risk, it could also be in the billions given it would presumably include stuff like the costs of government bureaucracies involved in tech regulation, emerging legislation etc.
I am fairly convinced your basic point will stand, but it seems important to not overplay the degree to which nuclear risk is not neglected, and to not underplay the degree to which government actors and others are now paying attention to AI risk (obviously, this also needs to be quality discounted, but this discounting does not reduce the value much for nuclear in your estimate).
I think if you look at philanthropic neglectedness, the total sums across types of capital are not a good proxy. E.g., as far as I understand the nuclear risk landscape, it is both true that government spending is quite large but also that there is almost no civil society spending. This means that additional philanthropic funding should be expected to be quite effective on neglectedness grounds.
I understood this was your point, but I am not convinced it holds. I would be curious to understand what empirical evidence informs your views. Feel free to link to relevant pieces, but no worries if you do not want to engage further.
Many obvious things are not done.
I do not think this necessarily qualifies as satisfactory empirical evidence that philanthropic neglectedness means high marginal returns. There may be non-obvious reasons for the obvious interventions not having been picked. In general, I think that for any problem it is always possible to pick a neglected set of interventions, but that a priori we should assume diminishing returns in the overall spending, otherwise the government would fund the philanthropic interventions.
The numbers on nuclear risk spending by 80k are entirely made up and not described otherwise (e.g. they do not cite a source and make no effort to justify the estimate; this is clearly a wild guess).
For reference, here is some more context on 80,000 Hours’ profile:
Who is working on this problem?
The area is a significant focus for governments, security agencies, and intergovernmental organisations.
Within the nuclear powers, some fraction of all work dedicated to foreign policy, diplomacy, military, and intelligence is directed at ensuring nuclear war does not occur. While it is hard to know exactly how much, it is likely to be in the billions of dollars or more in each country.
The US budget for nuclear weapons is comfortably in the tens of billions. Some significant fraction of this is presumably dedicated to control, safety, and accurate detection of attacks on the US.
In addition to this, some intergovernmental organisations devote substantial funding to nuclear security issues. For example, in 2016, the International Atomic Energy Agency had a budget of €361 million. Total philanthropic nuclear risk spending in 2021 was approximately $57–190 million.
The spending of 4.04 G$ I mentioned is just 4.87 % (= 4.04/82.9) of the 82.9 G$ cost of maintaining and modernising nuclear weapons in 2022.
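A trivial sketch checking that share from the two figures just mentioned:

```python
# Share of the 4.04 G$ nuclear risk spending relative to the 2022 cost of
# maintaining and modernising nuclear weapons (82.9 G$).
spending = 4.04e9            # $
maintenance_cost = 82.9e9    # $
print(f"{spending / maintenance_cost:.2%}")  # 4.87%
```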
If one constructed a similar number for AI risk, it could also be in the billions given it would presumably include stuff like the costs of government bureaucracies involved in tech regulation, emerging legislation etc.
Good point. I guess the quality-adjusted contribution from those sources is currently small, but that it will become very significant in the next few years or decades.
I am fairly convinced your basic point will stand
Agreed. I estimated a difference of roughly 8 OOMs (a factor of 59.8 M) in the nearterm annual extinction risk per funding.
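A one-line check of the rounding from a factor of 59.8 M to roughly 8 OOMs:

```python
from math import log10

factor = 59.8e6  # ratio of nearterm annual extinction risk per funding, AI over nuclear
print(round(log10(factor), 2))  # 7.78, i.e. roughly 8 orders of magnitude
```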
it seems important to not overplay the degree to which nuclear risk is not neglected, and to not underplay the degree to which government actors and others are now paying attention to AI risk (obviously, this also needs to be quality discounted, but this discounting does not reduce the value much for nuclear in your estimate).
Agreed. On the other hand, I would rather see discussions move from neglectedness towards cost-effectiveness analyses.
but that a priori we should assume diminishing returns in the overall spending, otherwise the government would fund the philanthropic interventions.
I think this is fundamentally the crux—many of the most valuable philanthropic actions in domains with large government spending will likely be about challenging / advising / informationally lobbying the government in a way that governments cannot self-fund.
Indeed, when additional government funding does not reduce risk (i.e. does not reduce the importance of the problem) but is affectable, there can probably be cases where philanthropic funding becomes more exciting as public funding increases, because it can leverage that public funding.
I can’t open the GDoc on AI safety research.
But, in any case, I do not think this works, because philanthropic, private, and government dollars are not fungible, as all groups have different advantages and things they can and cannot do.
If looking at all resources, then 80M for AI safety research also seems an underestimate as this presumably does not include the safety and alignment work at companies?
I think I should be considering all sources of funding. Everything else equal, I expect a problem A which receives little philanthropic funding, but lots of funding from other sources, to be less pressing than a problem B which receives little funding from both philanthropic and non-philanthropic sources. The difference between A and B will not be as large as naively expected because philanthropic and non-philanthropic spending are not fungible. However, if one wants to define neglectedness as referring to just the spending from one source, then the scale should also depend on the source, and sources with less spending will be associated with a smaller fraction of the problem.
In general, I feel like the case for using the importance, tractability and neglectedness framework is stronger at the level of problems. Once one starts thinking about considerations within the cause area and increasingly narrow sets of interventions, I would say it is better to move towards cost-effectiveness analyses.
Yet, given the above, I would say one should a priori expect efforts to decrease AI extinction risk to be more cost-effective at the current margin than ones to decrease nuclear extinction risk. Note: the sentence just above already includes the correction I will mention below.
Sorry! I have fixed the link now.
It actually did not include spending from for-profit companies. I thought it did because I had seen they estimated just a few tens of millions of dollars coming from them:
I have now modified the relevant bullet in my analysis to the following:
My point remains qualitatively the same, as the spending on decreasing AI extinction risk only increased by 42.9 % (= 114/79.8 − 1).
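And a quick sketch of the 42.9 % figure, i.e. the relative increase from the previous estimate of 79.8 M$ to the updated 114 M$:

```python
# Relative increase in the estimated annual spending on decreasing AI extinction risk
# after including for-profit companies.
old_spending = 79.8e6   # previous estimate, $
new_spending = 114e6    # updated estimate, $
print(f"{new_spending / old_spending - 1:.1%}")  # 42.9%
```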
(Last comment from me on this for time reasons)
I think if you look at philanthropic neglectedness, the total sums across types of capital are not a good proxy. E.g., as far as I understand the nuclear risk landscape, it is both true that government spending is quite large but also that there is almost no civil society spending. This means that additional philanthropic funding should be expected to be quite effective on neglectedness grounds. Many obvious things are not done.
The numbers on nuclear risk spending by 80k are entirely made up and not described otherwise (e.g. they do not cite a source and make no effort to justify the estimate; this is clearly a wild guess).
If one constructed a similar number for AI risk, it could also be in the billions given it would presumably include stuff like the costs of government bureaucracies involved in tech regulation, emerging legislation etc.
I am fairly convinced your basic point will stand, but it seems important to not overplay the degree to which nuclear risk is not neglected, and to not underplay the degree to which government actors and others are now paying attention to AI risk (obviously, this also needs to be quality discounted, but this discounting does not reduce the value much for nuclear in your estimate).
Thanks for elaborating.
I understood this was your point, but I am not convinced it holds. I would be curious to understand what empirical evidence informs your views. Feel free to link to relevant pieces, but no worries if you do not want to engage further.
I do not think this necessarily qualifies as satisfactory empirical evidence that philanthropic neglectedness means high marginal returns. There may be non-obvious reasons for the obvious interventions not having been picked. In general, I think that for any problem it is always possible to pick a neglected set of interventions, but that a priori we should assume diminishing returns in the overall spending, otherwise the government would fund the philanthropic interventions.
For reference, here is some more context on 80,000 Hours’ profile:
The spending of 4.04 G$ I mentioned is just 4.87 % (= 4.04/82.9) of the 82.9 G$ cost of maintaining and modernising nuclear weapons in 2022.
Good point. I guess the quality-adjusted contribution from those sources is currently small, but that it will become very significant in the next few years or decades.
Agreed. I estimated a difference of roughly 8 OOMs (a factor of 59.8 M) in the nearterm annual extinction risk per funding.
Agreed. On the other hand, I would rather see discussions move from neglectedness towards cost-effectiveness analyses.
I think this is fundamentally the crux—many of the most valuable philanthropic actions in domains with large government spending will likely be about challenging / advising / informationally lobbying the government in a way that governments cannot self-fund.
Indeed, when additional government funding does not reduce risk (i.e. does not reduce the importance of the problem) but is affectable, there can probably be cases where philanthropic funding becomes more exciting as public funding increases, because it can leverage that public funding.