AI safety remains underfunded by more than 3 OOMs

This is a link post for Charles I. Jones's paper: How Much Should We Spend to Reduce A.I.'s Existential Risk?

Two caveats: I am not Professor Jones, and I take no credit for the linked work. Also, this post was written with the help of AI.


Summary of the paper

Stanford economist Charles I. Jones uses standard cost-benefit analysis to estimate how much the US should spend on AI safety. His conclusion: between 1% and 8% of GDP annually, or roughly $290 billion to $1.5 trillion per year for the US alone.

The core logic is straightforward. US policymakers value a statistical life at around $10 million. At that valuation, avoiding a 1% mortality risk is worth $100,000 per person. If AI poses a similar or greater existential risk over the next decade (as many AI researchers believe it does), comparable investment levels are justified.
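To make the arithmetic concrete, here is a minimal back-of-envelope sketch in Python. The only inputs are the two figures summarized above (the $10 million VSL and the 1% risk); nothing else is taken from the paper:

```python
# Back-of-envelope version of the per-person willingness-to-pay figure.
VSL = 10_000_000        # value of a statistical life, USD (figure cited above)
risk_reduction = 0.01   # a 1% mortality risk avoided

wtp_per_person = VSL * risk_reduction
print(f"WTP per person: ${wtp_per_person:,.0f}")  # -> WTP per person: $100,000
```

Jones's 1-8% of GDP range comes from a fuller model in the paper, not from naively scaling this per-person figure across the population.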


Critically, these numbers don’t require any concern for future generations. Jones explicitly models a “selfish” scenario that only values currently living people, and still finds massive spending justified.

Why I think this paper matters

I think this paper gives a sense of just how underfunded AI safety remains, despite fairly rapid growth in funding over the past decade.

Global AI safety spending in 2024 was estimated at just over $100 million. Jones's analysis suggests the US alone should be spending 3,000 to 15,000 times that amount, even without taking non-US or future lives into account.
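For readers who want to check the multiplier, here is a quick sketch of the division, using only the figures quoted in this post (the ~3,000 lower bound is the 2,900 result rounded):

```python
# Where the "3,000-15,000x" multiplier comes from.
global_safety_spending_2024 = 100e6    # ~$100 million (2024 estimate cited above)
jones_low, jones_high = 290e9, 1.5e12  # $290 billion to $1.5 trillion per year

low_mult = jones_low / global_safety_spending_2024    # 2,900 -> rounds to ~3,000x
high_mult = jones_high / global_safety_spending_2024  # 15,000x
print(f"{low_mult:,.0f}x to {high_mult:,.0f}x")       # -> 2,900x to 15,000x
```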


I know that the idea that existential risk reduction is underfunded is unlikely to take many EAF readers by surprise. However, I think this paper is worth highlighting. Mainstream economics is a powerful means of both elucidation and legitimation. As J.M. Keynes said: "Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually the slaves of some defunct economist."

Even someone as ‘galaxy brained’ and ‘AGI-pilled’ as Tyler Cowen once reportedly said: “[I] would start listening when the AI risk people published in a top economics journal showing the risk is real”.