I generally think that all these kinds of cost-effectiveness analyses around x-risk are wildly speculative and sensitive to small changes in assumptions. There is literally no evidence that the $250b would change bio-x-risk by 1% rather than, say, 0.1% or 10%, or even 50%, depending on how it was targeted and what developments it led to. On the other hand, if you do successfully reduce the x-risk by, say, 1%, then you most likely also reduce the risk and consequences of all kinds of other non-existential bio-risks, again depending on the actual investments, discoveries, and developments, so the benefit in all the ‘ordinary’ cases must be factored in.

I think the most compelling argument for investing in x-risk prevention without consideration of future generations is simply to calculate the deaths in expectation (eg using Ord’s probabilities, if you are comfortable with them) and to rank risks accordingly. It turns out that at 10% this century, AI risk comes to 8 million lives per annum in expectation (obviously less than that early in the century, perhaps more late in the century), and bio-risk to 2.7 million lives per annum (ie 8 billion × 0.0333 × 0.01, treating Ord’s 1-in-30 chance this century as spread evenly across 100 years). Compare this to ALL natural disasters, which Our World in Data reports kill ~60,000 people per annum. So there is an argument that we should focus on x-risk to at least some degree purely on expected consequences.
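To make the ranking arithmetic explicit, here is a minimal sketch in Python (my choice of language; nothing above specifies one). It assumes a constant world population of 8 billion and spreads each century-level probability uniformly over 100 years, which is the assumption behind the × 0.01 factor above; the probabilities are Ord’s.

```python
# Expected deaths per annum from century-level existential risks,
# assuming a constant world population and a uniform spread of each
# century-level probability across 100 years. Probabilities are
# Ord's estimates; adjust them as you see fit.

WORLD_POPULATION = 8_000_000_000
YEARS_IN_CENTURY = 100

risks = {
    "AI": 0.10,                      # ~1 in 10 this century (Ord)
    "engineered pandemics": 1 / 30,  # ~1 in 30 this century (Ord)
}

# Rank risks by expected annual deaths, largest first.
for name, p_century in sorted(risks.items(), key=lambda kv: -kv[1]):
    deaths_per_annum = WORLD_POPULATION * p_century / YEARS_IN_CENTURY
    print(f"{name}: {deaths_per_annum / 1e6:.1f} million deaths/year in expectation")

# For comparison: Our World in Data puts deaths from ALL natural
# disasters at roughly 60,000 per year.
```

This prints 8.0 million for AI and 2.7 million for engineered pandemics, matching the figures above.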
That said, I think it’s basically impossible to get robust cost-effectiveness estimates for this kind of work, and most of the estimates I’ve seen appear implausibly cost-effective. Things never go as well as you thought they would in risk mitigation activities.