Hey :)
Re 1, we needn’t be talking about planets. In principle, any decently sized rocky body in the asteroid belt could host a colony, or you could just build O’Neill cylinders. They might not be sustainable long-term without importing resources, but importing wouldn’t be a problem in a lot of catastrophic scenarios, e.g. where some major shock destroyed civilisation on the planets but left most of the minerals behind. In such scenarios ‘self-sustainability’ is more of a spectrum than a binary property, and having more sustainable-ish colonies seems like it would still dramatically increase resilience.
At some point you’ll still hit the physical limit of matter in the system, so such a growth rate wouldn’t last that long, but for this discussion it wouldn’t need to. Even just having colonies on the rocky planets and major moons would push the probability of any event that didn’t intentionally target all outposts getting them all much closer to zero. At a 2^n growth rate (which actually seems very conservative to me in the absence of major catastrophes; Earth alone seems like it could sustain that growth rate for a few centuries) I feel like you’d have reduced the risk of non-targeted catastrophes to effectively zero by the time you had maybe 10 colonies?
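To put rough numbers on that intuition, here’s a toy sketch. It assumes (purely for illustration) that a non-targeted catastrophe destroys each colony independently with some fixed probability p; both the 0.5 and the colony counts are arbitrary illustrative values, not claims about any real hazard:

```python
# Toy model (all numbers illustrative): suppose a non-targeted
# catastrophe destroys each colony independently with probability p.
# The chance it destroys *all* n colonies is then p**n.
p = 0.5  # assumed per-colony destruction probability (illustrative)

for n in [1, 2, 5, 10]:
    print(f"{n:2d} colonies -> P(all destroyed) = {p**n:.4f}")

# Output:
#  1 colonies -> P(all destroyed) = 0.5000
#  2 colonies -> P(all destroyed) = 0.2500
#  5 colonies -> P(all destroyed) = 0.0312
# 10 colonies -> P(all destroyed) = 0.0010
```

The independence assumption is doing a lot of work there, of course; correlated failure modes (like the sleeper virus below) are exactly where it breaks down.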
Re 2, I think we’re disagreeing where you say we’re agreeing :P I think the EA movement probably overestimates the probability of ‘recovery’ from global catastrophe, esp. where ‘recovery’ really means ‘get all the way to the glorious Virgo supercluster future’. If I’m right, then such catastrophes are effectively existential risks with a 0.1 multiplier, or whatever you think the probability of non-recovery is.
In scenarios such as the sleeper virus, it seems like more colonies would still provide resilience. Presumably if it’s possible to create such a virus, it’s possible to detect and neutralise it before its activation, and the probability of doing so is some increasing function of time, which more colonies would give you more of if the virus couldn’t be activated until it had infected everyone. I feel like this principle would generalise to almost any technological threat that was in principle reversible.
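The same kind of toy model applies here. Again, every number below is an assumption made up for illustration: detection is modelled as a constant-rate process, and each extra colony is assumed to add a fixed delay before the virus has infected everyone and can safely activate:

```python
import math

# Toy model (all numbers illustrative): if detection happens at a
# constant rate, the chance of catching the virus within t years is
# 1 - exp(-rate * t). More colonies -> more spread time -> more chances
# to detect and neutralise it before activation.
rate = 0.1            # assumed detections per year (illustrative)
years_per_colony = 2  # assumed extra spread time per colony (illustrative)

for n in [1, 5, 10, 20]:
    t = n * years_per_colony
    p_detect = 1 - math.exp(-rate * t)
    print(f"{n:2d} colonies -> {t:2d} years to infect all, "
          f"P(detected first) = {p_detect:.2f}")

# Output:
#  1 colonies ->  2 years to infect all, P(detected first) = 0.18
#  5 colonies -> 10 years to infect all, P(detected first) = 0.63
# 10 colonies -> 20 years to infect all, P(detected first) = 0.86
# 20 colonies -> 40 years to infect all, P(detected first) = 0.98
```

Generalising to other reversible threats would just mean swapping in a different detection-and-neutralisation rate.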