Again, I want to stress that the point here is not to debate these numbers (I was told that a 0.1 percentage point reduction in extinction risk for an investment of 0.1% of world GDP was reasonable, but I found it difficult to find references; I would appreciate comments pointing to relevant ones).
If I understand you correctly, this is a one-time expenditure, so we are talking about roughly $80 billion. Compare this with a model that considered $3 billion being spent on AGI safety. That was a marginal analysis, but I think many would agree that such spending would address a large fraction of AGI risk, which is itself a large fraction of total existential risk. So if it reduced overall existential risk by one percentage point, that would be about 2.5 orders of magnitude more cost-effective than you have assumed, which is much better than your growth assumption. Investment in nuclear winter resilience has similar or even better returns. So I think we could spend a lot more money on existential risk mitigation that would still be no-regrets even with continued exponential growth of utility.
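To make the "2.5 orders of magnitude" claim concrete, here is a rough back-of-envelope sketch. It assumes world GDP of about $80 trillion (so that 0.1% comes to ~$80 billion) and takes the hypothetical of $3 billion of AGI safety spending reducing existential risk by one percentage point, as in the comment above; all figures are illustrative, not estimates of my own.

```python
import math

# Back-of-envelope comparison of cost per percentage point of x-risk reduced.
world_gdp = 80e12                      # assumed: ~$80 trillion world GDP
baseline_cost = 0.001 * world_gdp      # 0.1% of GDP ~= $80 billion
baseline_risk_reduction = 0.1          # percentage points of extinction risk

agi_safety_cost = 3e9                  # ~$3 billion spent on AGI safety
agi_risk_reduction = 1.0               # hypothetical: 1 percentage point of x-risk

cost_per_pp_baseline = baseline_cost / baseline_risk_reduction   # ~$800B per pp
cost_per_pp_agi = agi_safety_cost / agi_risk_reduction           # ~$3B per pp

advantage = cost_per_pp_baseline / cost_per_pp_agi
print(f"Cost-effectiveness advantage: {advantage:.0f}x "
      f"(~{math.log10(advantage):.1f} orders of magnitude)")
# ~267x, i.e. roughly 2.5 orders of magnitude
```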
But, to give some substance to it, if we very crudely conflate our utility function with world GDP, then I think it is reasonable to place a return of at least a factor of 10 on some of the better growth investments.
If I understand you correctly, this one-time investment of 0.1% of GDP increases GDP by 1% above business as usual for all time. So if you look over one century without discounting, that looks like a benefit-to-cost ratio of 1000. I think there has been discussion about how we have increased our R&D spending dramatically over the past few decades, yet GDP growth has not increased, so maybe someone can jump in with the marginal returns on R&D. Or maybe you had something else in mind?
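A minimal sketch of that benefit-to-cost arithmetic, under the crude assumptions of no discounting and a flat baseline GDP over the century:

```python
# One-time investment of 0.1% of GDP that lifts GDP 1% above business as usual,
# summed over 100 years with no discounting and no underlying growth.
cost_fraction = 0.001            # one-time cost: 0.1% of one year's GDP
annual_benefit_fraction = 0.01   # benefit: 1% of GDP, every year
years = 100

total_benefit = annual_benefit_fraction * years   # 1.0, i.e. one full year of GDP
ratio = total_benefit / cost_fraction
print(f"Benefit-to-cost ratio over {years} years: {ratio:.0f}")   # ~1000
```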
Incidentally, this would go some way toward resolving the Fermi paradox: maybe other advanced civilizations have not come to visit us because they are mostly busy optimizing their surrounding environment and don’t care all that much about colonizing space.
This sounds like an argument in The Age of Em: that once we accelerate our thought processes, expanding into space would be too painfully slow.
The references you put up look very interesting! I think part of my discomfort comes from not having been aware of such attempts to estimate the actual impact of a given risk intervention. I’m very happy to discover them; I just wish they were easier to find!
Also, my wild guess is that if the existential risk intervention came out as cost-effective for the present generation alone, then it might pass your test even with continued exponential growth in utility.
Very interesting!