This is a cool piece of work! I have one criticism, which is much the same as my criticism of Thorstad’s argument:
However, endorsing this view likely requires fairly speculative claims about how existing risks will nearly disappear after the time of perils has ended.
I think not believing this requires fairly speculative claims if a potential ‘end of time of perils’ we envisage is just human descendants spreading out across planets and then stars. Keeping current nonspeculative risks (eg nukes, pandemics, natural disasters) approximately constant per unit volume, the risk to all of human descendants would rapidly approach 0 as the volume we inhabited increased.
So for the total risk to stay anywhere near constant, you need to posit some risk that is as capable of killing an interstellar civilisation as a single-planet one. This could be misaligned AGI, but AGI development isn’t constant: if there’s something that stops us from creating it in the next 1000 years, that something might be evidence that we’ll never create it. If we have created it by then, and it hasn’t killed us, then it seems likely that it never will.
So you need something else, like the possibility of triggering false vacuum decay, to imagine a ‘baseline risk’ scenario.
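To make the arithmetic behind this concrete, here is a minimal toy sketch (the per-century figures are invented, and independence between settled units is itself an assumption, not a claim from the post): with only independent per-unit risks, the chance of losing everything vanishes exponentially as the number of units grows, and any persistent ‘baseline risk’ has to come from a correlated term like vacuum decay.

```python
# Toy numbers only: how the chance of losing *every* settled unit in a given
# century behaves as the number of independently settled units grows, with and
# without a correlated risk (e.g. vacuum decay) that destroys everything at once.

def extinction_risk(n_units, per_unit_risk=0.01, correlated_risk=0.0):
    # Probability all units are lost this century, assuming per-unit losses
    # are independent of each other. Both probabilities are invented.
    all_lost_independently = per_unit_risk ** n_units
    return correlated_risk + (1 - correlated_risk) * all_lost_independently

for n in [1, 2, 5, 10]:
    print(n,
          extinction_risk(n),                        # no correlated risk: heads to 0 fast
          extinction_risk(n, correlated_risk=1e-6))  # floors at the correlated risk
```

The toy numbers only matter for their shape: the independent term shrinks exponentially in the number of units, so any risk that stays roughly constant has to be the correlated kind.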
Positing an interstellar civilization seems to be exactly what Thorstad might call a “speculative claim”, though. Interstellar civilization operating on technology indistinguishable from magic is an intriguing possibility with some decent arguments against it (Fermi, lightspeed vs current human and technological lifespans), rather than something we should be sufficiently confident of to drop our credences in the possibility of humans becoming extinct down to zero in most years after the current time of perils.[1] And even if it were achieved, I don’t see why nukes, pandemics, and natural-disaster risk should be approximately constant per planet (or other relevant unit of volume) for small groups of humans living in alien environments.[2]
Certainly this doesn’t seem like a less speculative claim than one sometimes offered as a criticism of longtermism’s XR-focus: that the risk of human extinction (as opposed to significant near-term utility loss) from pandemics, nukes or natural disasters is already zero[3] because of things that already exist. Nuclear bunkers, isolation and vaccination, and the general resilience of even unsophisticated lifeforms to natural disasters are congruent with our current scientific understanding in a way that faster-than-light travel isn’t, and the farthest reaches of the galaxy aren’t a less hostile environment for human survival than a post-nuclear Earth.
And of course any AGI determined to destroy humans is unlikely to be less capable in space than relatively stupid, short-lived, oxygen-breathing lifeforms, so the AGI that destroys humans after they acquire interstellar capabilities is no more speculative than the AI that destroys humans next Tuesday. A persistent, stable “friendly AI” might insulate humans from all these risks if sufficiently powerful (with or without space travel), as you suggest, but that feels like an equally speculative possibility, and worse still one which many courses of action aimed at mitigating AI risk have a non-zero possibility of inadvertently working against...
If the baseline rate after the current time of perils is merely reduced a little by the nonzero possibility that interstellar travel could mitigate x-risk, but remains nontrivial, the expected number of future humans alive still drops off sharply the further we go into the future (at least without countervailing assumptions about increased fecundity or longevity).
Individual human groups seem significantly less likely to survive a given generation the smaller they are, the further they are from Earth, and the more they have to travel, to the point where the benefit against catastrophe of having humans living in other parts of the universe might be pretty short-lived. If we’re not disregarding small possibilities, there’s also the possibility of a novel existential risk from provoking alien civilizations...
I don’t endorse this claim FWIW, though I suspect that making humans extinct as opposed to severely endangered is more difficult than many longtermists predict.
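A quick numerical gloss on the point in footnote [1], with invented figures (this is just a sketch of the geometric decay, not anyone’s estimate): if some constant per-century extinction risk persists after the time of perils, the survival probability, and hence the expected number of people alive at any given date, falls off geometrically.

```python
# Toy numbers only: with a constant per-century extinction risk r, survival to
# century t has probability (1 - r)**t, so the expected number of people alive
# then (holding headcount fixed at 8 billion) decays geometrically.

def expected_population(t_centuries, per_century_risk, headcount=8e9):
    return (1 - per_century_risk) ** t_centuries * headcount

for r in (0.1, 0.01):   # 'nontrivial' vs 'reduced a little' baseline rates
    print(r, [round(expected_population(t, r)) for t in (10, 100, 1000)])
```

Even the lower rate leaves very little expected headcount a few hundred centuries out, unless the rate itself keeps falling.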
Interstellar civilization operating on technology indistinguishable from magic
‘Indistinguishable from magic’ is a huge overbid. No-one’s talking about FTL travel. There’s nothing in current physics that prevents us from building generation ships given a large enough economy, and there are a number of options consistent with known physics for propelling them: some have already been developed, others are tangible but not yet in reach, and others get pretty outlandish.
I don’t see why nukes and pandemics and natural disaster risk should be approximately constant per planet or other relevant unit of volume for small groups of humans living in alien environments
Pandemics seem likely to be relatively constant. Biological colonies will have strict atmospheric controls, and might evolve (naturally or artificially) to be too different from each other for a single virus to target them all even if it could spread. Nukes aren’t a threat across star systems unless they’re accelerated to relativistic speeds (and then the nuclear-ness is pretty much irrelevant).
the risk of human extinction (as opposed to significant near-term utility loss) from pandemics, nukes or natural disasters is already zero
I don’t know anyone who asserts this. Ord and other longtermists think it’s very low, though not because of bunkers or vaccination. I think that the distinction between killing all and killing most people is substantially less important than those people (and you?) believe.
the AGI that destroys humans after they acquire interstellar capabilities is no more speculative than the AI that destroys humans next Tuesday
“Indistinguishable from magic” is an Arthur C. Clarke quote about “any sufficiently advanced technology”, and I think you’re underestimating the complexity of building a generation ship and keeping it operational for hundreds, possibly thousands, of years in deep space. Propulsion is pretty low on the list of problems if you’re skipping FTL travel, though you’re not likely to cross the galaxy with a solar sail or a 237 mN thruster using xenon as propellant. (FWIW I actually work in the space industry and spent the last week speaking with people about projects to extract oxygen from lunar regolith and assemble megastructures in microgravity, so it’s not like I’m just dismissing the entire problem space here.)
I think that the distinction between killing all and killing most people is substantially less important than those people (and you?) believe.
I’m actually in agreement with that point, but more due to putting more weight on the first 8 billion than on the orders of magnitude more hypothetical future humans. (I think in a lot of catastrophe scenarios, technological knowledge and ambition rebounds just fine eventually, possibly even stronger.)
This is an absurd claim.
Why is it absurd? If humans can solve the problem of sending a generation ship to Alpha Centauri, an intelligence smart (and malevolent) enough to destroy 8 billion humans in their natural environment surely isn’t going to be stymied by the complexities involved in sending some weapons after them or transmitting a copy of itself to their computers...
That’s an interesting point. I’m a bit skeptical of modeling risk as constant per unit volume since almost all of the volume bordered by civilizations will be empty and not contributing to survival. I think a better model would just use the number of independent/disconnected planets colonized. I also expect colonies on other planets to be more precarious than civilization on Earth since the basic condition of most planets is that they are uninhabitable. That said, I do take the point that an interstellar civilization should be more resilient than a non-interstellar one (all else equal).
Yeah, I was somewhat lazily referring to planets and similar as ‘units’. I wrote a lot more about this here.
I don’t think precariousness would be that much of an issue by the time we have the technology to travel between stars. Humans can be bioformed, made digital, replaced by AGI shards, or just master their environments enough to do brute force terraforming.
Even if you do think they’re more precarious, over a long enough expansion period the difference is going to be eclipsed by the difference in colony-count.
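For what it’s worth, here is a toy illustration of that last point, with invented probabilities (a sketch under the assumption that failures in a given period are independent, which is of course exactly what the correlated-risk branches of this thread dispute): even if each colony is individually far more precarious than Earth, the chance that every settlement fails in the same period shrinks exponentially in the number of independent colonies, so colony count eventually dominates precariousness.

```python
# Toy numbers only: Earth plus N deliberately pessimistic colonies, assuming
# failures in a given period are independent. All probabilities are invented.

earth_loss = 0.01    # assumed chance Earth-based civilisation fails per period
colony_loss = 0.20   # assumed, much higher, chance any single colony fails

def everyone_lost(n_colonies):
    return earth_loss * colony_loss ** n_colonies

for n in (0, 1, 5, 20):
    print(n, everyone_lost(n))   # 1e-2, 2e-3, 3.2e-6, ~1e-16: count wins out
```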