Great post; I’m embarrassed to have missed it until now! One key point I disagree with:
there might be interventions that reduce risk a lot for not very long, or not very much but for a long time. But actions that drastically reduce risk and do so for a long time are rare.
I think there are two big possible exceptions to the latter claim: benign AI and becoming sustainably multiplanetary. EAs have discussed the former a lot, and I don’t have much to add (though I’m highly sceptical of it as an arbitrary-value lock-in mechanism on cosmic timelines). I think the latter is more interesting and underexplored. Christopher Lankhof made a case for it here, but it didn’t get much engagement, and what criticism he did get seems quite short-term to me: basically that shelters are a cheaper option, and therefore we should prioritise them.
Such criticism might or might not be true in the next few decades. But beyond that, if AI neither kills us nor locks us into a dystopian or utopian path, and if there are no lightcone-threatening technologies available (e.g. the potential ability to trigger a false vacuum decay), then it seems like by far our best defence against extinction will be simple numbers. The more intelligent life there is, in more places, the bigger, and therefore the more improbable, an event would have to be to kill everyone.
A naive (but, I think, reasonable given the above caveats) calculation would be to treat the destruction of life around each planet as at least somewhat independent. That gives us some kind of exponential decay function for extinction risk, such that your credence in extinction might be a(1-b)^(p-1), where a is some constant or function representing the risk of a single-planet civilisation going extinct, b is some decay rate (at most 1/2, for complete independence of extinction on each planet), and p is the number of planets in your civilisation. Absent universe-destroying mechanisms or unstoppable AI, this credence would quickly approach 0.
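For concreteness, here’s a minimal sketch of that naive calculation in Python. Everything in it (the function name and the values of a and b) is my own illustrative choice rather than a figure from this discussion; it only shows how the credence falls off as p grows.

```python
def extinction_credence(a: float, b: float, p: int) -> float:
    """Naive credence in extinction for a civilisation spanning p planets.

    a: extinction risk for a single-planet civilisation
    b: decay rate per additional planet (at most 1/2 in the framing above,
       i.e. complete independence of extinction on each planet)
    p: number of planets in the civilisation
    """
    return a * (1 - b) ** (p - 1)


if __name__ == "__main__":
    a, b = 0.2, 0.5  # purely illustrative values
    for p in (1, 2, 4, 8, 16):
        print(f"p = {p:2d}: credence ~ {extinction_credence(a, b, p):.6f}")
```

With b = 1/2 (complete independence), the credence halves with every additional planet, which is the ‘quickly approach 0’ behaviour described above.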
Obviously ‘creating a self-sustaining settlement on a new planet’ isn’t exactly an everyday occurrence, but with a century or two of continuous technological progress (less, given rapid economic acceleration via e.g. moderately benign AI) it seems likely to progress through ‘doable’ to ‘actually pretty straightforward’. The same technologies that establish the first such colony will go a very long way towards establishing the next few.
In the shorter term, ‘self-sustainingness’ needn’t be an all-or-nothing proposition. A colony that could, say, effectively recycle its nutrients for a decade or two would still likely serve as a better defence against e.g. biopandemics than any refuge on Earth. And unlike refuges on Earth, it would be constantly pressure-tested even before the apocalypse, so it might end up being easier to make reliably robust than simple cost analyses comparing it with on-Earth shelters would suggest.
Thank you for adding various threads to the conversation, Arepo! I don’t disagree with what I take to be your main point: benign AI and interstellar travel are likely to have a big impact.
I will say, though, that while their success might significantly reduce risk, and for a long time, any given intervention is unlikely to make major progress towards either of them. Hence, at the intervention level, I’m tempted to remain sceptical about the abundance of interventions that dramatically reduce risk for a long time.