Thanks for the suggestion. @Ulrik Horn, who is working on a project related to refuges, may have some thoughts.
Given that many people assert it would be much easier than settling other planets, why hasn’t anyone started building such systems en masse, and how could we remove whatever the blocker is?
I think the reason is that they would be very far from passing a standard cost-benefit analysis. I estimated the cost-effectiveness of decreasing nearterm annual extinction risk from asteroids and comets via refuges to be 6.04*10^-10 bp/T$. For a population of 8 billion, and a refuge which remained effective for 10 years, that would be a cost per life saved of 207 T$ (= 10^12/(6.04*10^-10*10^-4*8*10^9*10)), i.e. one would have to spend 2 times the size of the global economy to save a life. In reality, the cost-effectiveness would be much higher because refuges would work in non-extinction catastrophes too, but it would remain very far from passing a standard governmental cost-benefit analysis.
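For concreteness, here is a minimal sketch of the cost-per-life-saved arithmetic above, using only the figures stated in the comment (the variable names are mine, purely for illustration):

```python
# Sketch of the cost-per-life-saved calculation; all inputs are taken from the comment.
cost_effectiveness_bp_per_Tdollar = 6.04e-10  # bp of extinction risk averted per T$
risk_reduction_per_Tdollar = cost_effectiveness_bp_per_Tdollar * 1e-4  # bp -> probability
population = 8e9        # people
years_effective = 10    # years the refuge remains effective

expected_lives_saved_per_Tdollar = risk_reduction_per_Tdollar * population * years_effective
cost_per_life_saved_dollars = 1e12 / expected_lives_saved_per_Tdollar
print(f"{cost_per_life_saved_dollars:.3e} $")  # ~2.07e14 $, i.e. 207 T$
```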
Thanks for the mention. ASB had previously estimated $100M-$300M if I remember correctly. After that, a diverse team specified an “ultimate bunker” and I then used reference class forecasting to arrive at a total cost (including ~20 years of operation) of $200M-$20bn. Yes, that range is super wide, but so are the uncertainties at this stage. Some examples of the drivers of this cost uncertainty:
Do we need some exotic SMR (small modular reactor) with complicated cooling systems (expensive), or can we locate the facility near a stable hydro resource (cheaper)? And what is the power need of the bunker? Filtering and a high ACH (air changes per hour) can drive this very high.
Do we need to secure the air intakes from adversaries (potentially super expensive)?
How expensive will operations and maintenance be? We would require all such work to be done safely “from the inside”, so e.g. filter replacement could potentially be super complicated and costly.
There is some disagreement on what is stopping such shelters from being built, but we might be about to find out, as there are a few people working to see if we can make progress on shelters. That said, if someone were to earmark $10bn for constructing and operating a shelter, I think the chances of actually building one would be quite high, so money is definitely a blocker at this point.
On cost-effectiveness I would defer to others with better threat models. I am happy to provide cost estimates given some specifications, so that people with threat models (i.e. estimates of how many % reduction in x-risk a shelter provides) can calculate such metrics.
Moreover, and perhaps people already do this, I would also advocate for an “expected x-risk reduction” approach. Compared to e.g. convincing governments to enact legislation on AI (it is uncertain whether they actually will), a sufficiently funded shelter project depends to a much smaller degree on the actions of others, and as such we have more control over the final outcome. And it is quite certain that shelters will give protection at least from catastrophic bio events, whereas it could be argued that it is uncertain whether a given approach to AI safety will make the AI safe.
Thanks for the context, Ulrik!

ASB had previously estimated $100M-$300M if I remember correctly. After that, a diverse team specified an “ultimate bunker” and I then used reference class forecasting to arrive at a total cost (including ~20 years of operation) of $200M-$20bn.
Feel free to share links. Your 2nd range suggests a cost of 398 M$[1] (= 10^9/2.51). If such a bunker could halve bio extinction risk from 2031 to 2050[2], and one sets this risk to 0.00269 % based on guesses from XPT’s superforecasters[3], it would reduce extinction risk with a cost-effectiveness of 0.338 bp/G$ (= 0.5*2.69*10^-5/(398*10^6)). For reference, below are some cost-effectiveness bars I collected.
My cost-effectiveness estimate for the bunker exceeds Open Philanthropy’s conservative bar (which, as I understand it, is their actual bar; see footnote). However, I think the actual cost-effectiveness of bunkers is way lower than I estimated. I think XPT’s superforecasters overestimated nuclear extinction risk by 6 orders of magnitude, so I guess they are overrating bio extinction risk too.
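Here is a small sketch of the bunker cost-effectiveness calculation above, under the stated assumptions (a 398 M$ expected cost, and halving a 0.00269 % bio extinction risk from 2031 to 2050); the names are illustrative, not from any of the linked posts:

```python
# Sketch of the bunker cost-effectiveness estimate, using the stated assumptions.
cost_Gdollar = 0.398                      # expected cost in G$ (see footnote 1)
bio_extinction_risk_2031_2050 = 2.69e-5   # see footnote 3
risk_reduction_fraction = 0.5             # bunker assumed to halve the risk

risk_reduction_bp = risk_reduction_fraction * bio_extinction_risk_2031_2050 * 1e4  # probability -> bp
cost_effectiveness_bp_per_Gdollar = risk_reduction_bp / cost_Gdollar
print(f"{cost_effectiveness_bp_per_Gdollar:.3f} bp/G$")  # ~0.338 bp/G$
```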
And it is quite certain that shelters will give protection at least from catastrophic bio events, whereas it could be argued that it is uncertain whether a given approach to AI safety will make the AI safe.
Fair point. On the other hand, I think bio extinction is very unlikely to be an existential risk, because I guess another intelligent sentient species would emerge with high probability (relatedly). I wrote that:
Toby would expect an asteroid impact similar to that of the last mass extinction to be an existential catastrophe. Yet, at least ignoring anthropics, I believe the probability of not fully recovering would only be 0.0513 % (= e^(-10^9/(132*10^6))), assuming:
An exponential distribution with a mean of 132 M years (= 66*10^6*2) represents the time to go from i) human extinction due to such an asteroid to ii) evolving a species as capable as humans at steering the future. I supposed this on the basis that:
An exponential distribution with a mean of 66 M years describes the time between extinction threats as well as that to go from i) to ii) conditional on no extinction threats.
Given the above, extinction and full recovery are equally likely. So there is a 50 % chance of full recovery, and one should expect the time until full recovery to be 2 times (= 1/0.50) as long as the time conditional on no extinction threats.
The above evolution could take place in the next 1 billion years during which the Earth will remain habitable.
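A minimal sketch of the recovery probability in the quote above, assuming the exponential model described there (a mean of 132 M years for re-evolving a comparably capable species, and 1 billion years of remaining habitability):

```python
import math

# P(recovery time > habitable window) for an exponentially distributed recovery time.
mean_recovery_time_years = 2 * 66e6   # 132 M years (= 66 M years doubled, per the quote)
habitable_window_years = 1e9          # Earth assumed habitable for ~1 billion more years

p_no_full_recovery = math.exp(-habitable_window_years / mean_recovery_time_years)
print(f"{p_no_full_recovery:.4%}")  # ~0.0513 %
```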
In contrast, AI causing human extinction would arguably prevent any future Earth-originating species from regaining control over the future. As a counterpoint to this, AI causing human extinction can be good if the AI is benevolent, but I think this is unlikely if extinction is caused this century.
Reciprocal of the mean of a lognormal distribution describing the reciprocal of the cost, with 10th and 90th percentiles equal to 1/20 and 1/0.2 (G$)^-1. I am using the reciprocal of the cost because the expected cost-effectiveness equals the product of the expected benefits and the expected reciprocal of the cost, not the ratio of the expected benefits to the expected cost (E(1/X) differs from 1/E(X)).
If it was finished at the end of 2030, it would have 20 years of operation as you mentioned.

XPT’s superforecasters guessed 0.01 % between 2023 and 2100 (see Table 3), which suggests 0.00269 % (= 1 - (1 - 10^-4)^(21/78)) between 2031 and 2050.
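For readers who want to reproduce the footnote figures, here is a small sketch of both calculations, using only the parameters stated above (variable names are mine):

```python
import math

# Footnote 1: expected cost as the reciprocal of E(1/cost), with 1/cost lognormal
# and 10th/90th percentiles of 1/20 and 1/0.2 (G$)^-1.
z90 = 1.2816  # 90th percentile of the standard normal distribution
p10, p90 = 1 / 20, 1 / 0.2
mu = (math.log(p10) + math.log(p90)) / 2
sigma = (math.log(p90) - math.log(p10)) / (2 * z90)
mean_reciprocal_cost = math.exp(mu + sigma**2 / 2)   # ~2.51 (G$)^-1
expected_cost_Gdollar = 1 / mean_reciprocal_cost     # ~0.398 G$, i.e. 398 M$
print(f"{expected_cost_Gdollar * 1e3:.0f} M$")

# Footnote 3: rescaling the superforecasters' 0.01 % risk (2023 to 2100) to the
# bunker's period, assuming a constant annual risk.
risk_2023_2100 = 1e-4
risk_2031_2050 = 1 - (1 - risk_2023_2100) ** (21 / 78)
print(f"{risk_2031_2050:.5%}")  # ~0.00269 %
```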
As a counterpoint to this, AI causing human extinction can be good if the AI is benevolent
Uh… I think there’s a lot of load-bearing on the words ‘benevolent’ and ‘can be’ here[1]
Like I think outside of the most naïve consequentialism it’d be hard to argue that this would be a moral course of action, or that this state of affairs would be best described as ‘benevolent’; the AI certainly wouldn’t be being ‘benevolent’ toward humanity.
Though probably a topic for another post (or dialogue)? Appreciated both yours and Ulrik’s comments above :)

And ‘good’, but metaethics will be metaethics
Though probably a topic for another post (or dialogue)? Appreciated both yours and Ulrik’s comments above :)
Thanks for the comment, JWS!

I agree it is too far outside the scope to be discussed here, and I do not think I have enough to say to have a dialogue, but I encourage people interested in this to check Matthew Barnett’s related quick take.
Thanks Vasco, your cost-effectiveness estimate is super helpful. Thanks for putting that together (I and others have done some already, but having more of them helps)!
And I had missed that post on intelligent life re-emerging. I gave your comment a strong upvote because it points to an idea I had not heard before: that one can use the existing evolutionary tree to make probability distributions of the likelihood of some branch of that tree evolving brains that could harbor intelligence.
I have not polished much of my work up until now, so I prefer to share it directly with people who are interested. And if someone had time to polish my work, I think it would be OK to have it more publicly available. That said, we might also want to check for info-hazards. I feel myself becoming more relaxed about this as time goes on, and that causes occasional bouts of nervousness (like now!).