Positing an interstellar civilization seems to be exactly what Thorstad might call a “speculative claim” though. Interstellar civilization operating on technology indistinguishable from magic is an intriguing possibility with some decent arguments against it (Fermi, lightspeed versus current human and technological lifespans), rather than something we should be sufficiently confident of to drop our credence in the possibility of human extinction down to zero in most years after the current time of perils.[1] And even if it were achieved, I don’t see why nuke, pandemic and natural disaster risk should be approximately constant per planet or other relevant unit of volume for small groups of humans living in alien environments[2]
Certainly this doesn’t seem like a less speculative claim than one sometimes offered as a criticism of longtermism’s XR-focus: that the risk of human extinction (as opposed to significant near-term utility loss) from pandemics, nukes or natural disasters is already zero[3] because of things that already exist. Nuclear bunkers, isolation, vaccination and the general resilience of even unsophisticated lifeforms to natural disasters are congruent with our current scientific understanding in a way that faster-than-light travel isn’t, and the farthest reaches of the galaxy aren’t a less hostile environment for human survival than a post-nuclear Earth.
And of course any AGI determined to destroy humans is unlikely to be less capable in space than relatively stupid, short-lived, oxygen-breathing lifeforms, so the AGI that destroys humans after they acquire interstellar capabilities is no more speculative than the AI that destroys humans next Tuesday. A persistent, stable “friendly AI” might insulate humans from all these risks if sufficiently powerful (with or without space travel), as you suggest, but that feels like an equally speculative possibility, and worse still one which many courses of action aimed at mitigating AI risk have a non-zero possibility of inadvertently working against...
If the baseline rate after the current time of perils is merely reduced a little by the nonzero possibility that interstellar travel could mitigate x-risk, but remains nontrivial, the expected number of future humans alive still drops off sharply the further we go into the future (at least without countervailing assumptions about increased fecundity or longevity)
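The arithmetic behind that drop-off can be sketched quickly. The numbers below are purely illustrative (not anyone’s actual risk estimates): under any constant, independent per-century extinction risk, survival probability decays geometrically rather than flattening out.

```python
# Sketch: survival probability under a constant per-century extinction risk.
# The 1% figure is an arbitrary illustration, not an estimate from the thread.
def survival_probability(risk_per_century, centuries):
    """P(humanity still exists) after n centuries, assuming the
    per-century extinction risk stays constant and independent."""
    return (1 - risk_per_century) ** centuries

# Even a "small" 1% residual risk per century compounds sharply:
for n in (10, 100, 1000):
    print(f"{n} centuries: {survival_probability(0.01, n):.4f}")
```

With these toy numbers, the odds of surviving ten centuries are about 90%, a hundred centuries about 37%, and a thousand centuries effectively nil — which is why the expected number of far-future humans is dominated by whether the baseline rate really goes to (near) zero.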
Individual human groups seem significantly less likely to survive a given generation the smaller they are, the further they are from Earth, and the more they have to travel, to the point where the benefit against catastrophe of having humans living in other parts of the universe might be pretty short-lived. If we’re not disregarding small possibilities, there’s also the possibility of a novel existential risk from provoking alien civilizations...
I don’t endorse this claim FWIW, though I suspect that making humans extinct as opposed to severely endangered is more difficult than many longtermists predict.
Interstellar civilization operating on technology indistinguishable from magic
‘Indistinguishable from magic’ is a huge overbid. No-one’s talking about FTL travel. There’s nothing in current physics that prevents us from building generation ships given a large enough economy, and there are a number of options consistent with known physics for propelling them: some have already been developed, others are tangible but not yet in reach, and others get pretty outlandish.
I don’t see why nukes and pandemics and natural disaster risk should be approximately constant per planet or other relevant unit of volume for small groups of humans living in alien environments
Pandemics seem unlikely to remain a relatively constant risk. Biological colonies will have strict atmospheric controls, and might evolve (naturally or artificially) to be too different from each other for a single virus to target them all even if it could spread. Nukes aren’t a threat across star systems unless they’re accelerated to relativistic speeds (and then the nuclear-ness is pretty much irrelevant).
the risk of human extinction (as opposed to significant near-term utility loss) from pandemics, nukes or natural disasters is already zero
I don’t know anyone who asserts this. Ord and other longtermists think it’s very low, though not because of bunkers or vaccination. I think that the distinction between killing all and killing most people is substantially less important than those people (and you?) believe.
the AGI that destroys humans after they acquire interstellar capabilities is no more speculative than the AI that destroys humans next Tuesday
“Indistinguishable from magic” is an Arthur C Clarke quote about “any sufficiently advanced technology”, and I think you’re underestimating the complexity of building a generation ship and keeping it operational for hundreds, possibly thousands of years in deep space. Propulsion is pretty low on the list of problems if you’re skipping FTL travel, though you’re not likely to cross the galaxy with a solar sail or a 237 mN thruster using xenon as propellant. (FWIW I actually work in the space industry and spent the last week speaking with people about projects to extract oxygen from lunar regolith and assemble megastructures in microgravity, so it’s not like I’m just dismissing the entire problem space here.)
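The ion-thruster point can be made with a back-of-envelope Tsiolkovsky calculation. The specific impulse and mass ratio below are my own rough assumptions (a high-Isp xenon ion engine in the class mentioned above, with a generous 10:1 wet/dry mass ratio), not figures from the thread:

```python
import math

def delta_v(exhaust_velocity_ms, mass_ratio):
    """Tsiolkovsky rocket equation: delta_v = v_e * ln(m0 / mf)."""
    return exhaust_velocity_ms * math.log(mass_ratio)

C = 299_792_458            # speed of light, m/s
v_e = 4100 * 9.81          # exhaust velocity for an assumed Isp of ~4100 s
dv = delta_v(v_e, 10)      # assumed 10:1 wet/dry mass ratio

# Coasting at that speed, time to Alpha Centauri (~4.37 light-years):
years_to_alpha_cen = 4.37 / (dv / C)
print(f"delta-v ~ {dv/1000:.0f} km/s, Alpha Centauri in ~{years_to_alpha_cen:,.0f} years")
```

Under these assumptions you get roughly 90 km/s of delta-v, about 0.03% of lightspeed, putting even the nearest star system on the order of ten thousand years away — which is the sense in which existing electric propulsion doesn’t get you across the galaxy.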
I think that the distinction between killing all and killing most people is substantially less important than those people (and you?) believe.
I’m actually in agreement with that point, but more due to putting more weight on the first 8 billion than on the orders of magnitude more hypothetical future humans. (I think in a lot of catastrophe scenarios technological knowledge and ambition rebounds just fine eventually, possibly stronger.)
This is an absurd claim.
Why is it absurd? If humans can solve the problem of sending a generation ship to Alpha Centauri, an intelligence smart (and malevolent) enough to destroy 8 billion humans in their natural environment surely isn’t going to be stymied by the complexities involved in sending some weapons after them or transmitting a copy of itself to their computers...