I think this post is underrated (karma-wise) and I’m curating it. I love how thorough this is, and the focus on under-theorised problems. I don’t think there are many other places like this where we could have a serious conversation about these risks.
I’d like to see more critical engagement on the key takeaways (helpfully listed at the end). As a start, here’s a poll for the key claim Jordan identifies in the title:
I think it will probably not doom the long-term future.
This is partly because I’m pretty optimistic that, if interstellar colonization would predictably doom the long-term future, then people would figure out solutions to that. (E.g. having AI monitors travel with people and force them not to do stuff, as Buck mentions in the comments.) Importantly, I think interstellar colonization is difficult/slow enough that we’ll probably first get very smart AIs with plenty of time to figure out good solutions. (If we solve alignment.)
But I also think it's less likely than the post suggests that things would go badly even without coordination. Going through the items in the list:
| Galactic x-risk | Is it possible? | Would it end Galactic civ? | Lukas' take |
|---|---|---|---|
| Self-replicating machines | 100% ✅ | 75% ❌ | I doubt this would end galactic civ. The quote in that section is about killing low-tech civs before they've gotten high-tech. A high-tech civ could probably monitor for and destroy offensive tech built by self-replicators before it got bad enough that it could destroy the civ. |
| Vacuum decay | | | "50%" in the survey was about vacuum decay being possible in principle, not about it being possible to technologically induce (at the limit of technology). The survey reported significantly lower probability that it's possible to induce. This might still be a big deal though! |
| | | | This seems like an incredibly broad category. I'm quite concerned about something in this general vicinity, but it doesn't seem to share the property of the other things in the list where "if it's started anywhere, then it spreads and destroys everything everywhere". Or at least you'd have to narrow the category a lot before you got there. |
| Artificial superintelligence | 100% ✅ | 80% ❌ | The argument given in this subsection is that technology might be offense-dominant. But my best guess is that it's defense-dominant. |
| Conflict with alien intelligence | 75% ❌ | 90% ❌ | The argument given in this subsection is that technology might be offense-dominant. But my best guess is that it's defense-dominant. |
Expanding on the question about whether space warfare is offense-dominant or defense-dominant: One argument I've heard for defense-dominance is that, in order to destroy very distant stuff, you need to concentrate a lot of energy into a very tiny amount of space. (E.g. very narrowly focused lasers, or fast-moving rocks flung precisely.) But then you can defeat that by jiggling around the stuff that you want to protect in unpredictable ways, so that people can't aim their highly concentrated energy from far away and have it hit correctly.
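To put rough numbers on that (my own back-of-the-envelope sketch, not from the thread; the distance and drift speed are assumed for illustration): at interstellar range the attacker is aiming on information that is years out of date, and the shot itself spends years in flight, so even a slow unpredictable drift accumulates positional uncertainty far larger than any beam spot or projectile could cover.

```python
# Back-of-the-envelope: how far an unpredictably drifting target moves while the
# attacker's information (and then a roughly light-speed shot) is in transit.
# Numbers are illustrative assumptions, not estimates from the post.
SECONDS_PER_YEAR = 3.15e7

distance_ly = 10                   # assumed attacker-target separation (light-years)
drift_speed_km_s = 1.0             # assumed unpredictable jiggling of the target (km/s)

# Information about the target is `distance_ly` years old when the shot is fired,
# and the shot takes another `distance_ly` years to arrive.
total_lag_years = 2 * distance_ly

uncertainty_km = drift_speed_km_s * total_lag_years * SECONDS_PER_YEAR
print(f"Target position uncertain to ~{uncertainty_km:.1e} km (~{uncertainty_km / 1.5e8:.0f} AU)")
# -> roughly 6e8 km, i.e. several AU of uncertainty from a mere 1 km/s drift over 20 years
```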
Now that’s just one argument, so I’m not very confident. But I’m at <50% on offense-dominance.
(A lot of the other items on the list could also be stories for how you get offense-dominance, where I'm especially concerned about vacuum decay. But it would be double-counting both to put those in their own categories and to count them as valid attacks from superintelligence/aliens.)
Thanks for the in-depth comment. I agree with most of it.
if interstellar colonization would predictably doom the long-term future, then people would figure out solutions to that.
Agreed, I hope this is the case. I think there are some futures where we send lots of ships out to interstellar space for some reason or act too hastily (maybe a scenario where transformative AI speeds up technological development, but not so much our wisdom). Just one mission (or set of missions) capable of self-propagating to other star systems almost inevitably leads to galactic civilisation in the end, and we'd have to catch up to it to ensure existential security, which would become challenging if it creates von Neumann probes.
“50%” in the survey was about vacuum decay being possible in principle, not about it being possible to technologically induce (at the limit of technology). The survey reported significantly lower probability that it’s possible to induce. This might still be a big deal though!
Yeah, this is my personal estimate based on that survey and its responses. I was particularly convinced by one respondent who put 100% probability on it being possible to induce (conditional on the vacuum being metastable), as anything that's permitted by the laws of physics is possible to induce with arbitrarily advanced technology (so, 50% based on the chance that the vacuum is metastable).
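Spelled out, the estimate described there is simply:

$$P(\text{inducible}) \;=\; P(\text{metastable}) \times P(\text{inducible} \mid \text{metastable}) \;=\; 0.5 \times 1.0 \;=\; 0.5$$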
anything that’s permitted by the laws of physics is possible to induce with arbitrarily advanced technology
Hm, this doesn’t seem right to me. For example, I think we could coherently talk about and make predictions about what would happen if there was a black hole with a mass of 10^100 kg. But my best guess is that we can’t construct such a black hole even at technological maturity, because even the observable universe only has 10^53 kg in it.
Similarly, we can coherently talk about and make predictions about what would happen if certain kinds of lower-energy states existed. (Such as predicting that they’d be meta-stable and spread throughout the universe.) But that doesn’t necessarily mean that we can move the universe to such a state.
Interstellar travel will probably doom the long-term future
Seems false; probably people will just sort out some strategy for enforcing laws (e.g. having AI monitors travel with people and force them not to do stuff).
Interstellar travel will probably doom the long-term future
Some quick thoughts: By the time we've colonized numerous planets and cumulative galactic x-risks are starting to seriously add up, I expect there to be von Neumann probes traveling at a significant fraction of the speed of light (c) in many directions. Causality moves at c, so if we have probes moving away from each other at nearly 2c, that suggests extinction risk could be permanently reduced to zero. In such a scenario most of the value of our future lightcone could still be extinguished, but not all.
A very long-term consideration is that as the expansion of the universe accelerates so does the number of causally isolated islands. For example, in 100-150 billion years the Local Group will be causally isolated from the rest of the universe, protecting it from galactic x-risks happening elsewhere.
I guess this trades off with your 6th conclusion (Interstellar travel should be banned until galactic x-risks and galactic governance are solved). Getting governance right before we can build von Neumann probes at >0.5c is obviously great, but once we can build them it's a lot less clear whether waiting is good or bad.
Thinking out loud, if any of this seems off lmk!
Causality moves at c, so if we have probes moving away from each other at nearly 2c, that suggests extinction risk could be permanently reduced to zero.
This isn't right. Near-light-speed movement in opposite directions doesn't add up to faster-than-light relative movement. E.g., two probes each moving away from a common starting point at 0.7c have a speed relative to each other of about 0.94c, not 1.4c, so they stay in each other's lightcone.
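For reference, the 0.94c figure comes from the standard special-relativistic velocity-addition formula:

$$w \;=\; \frac{u + v}{1 + uv/c^2} \;=\; \frac{0.7c + 0.7c}{1 + 0.7 \times 0.7} \;\approx\; 0.94c$$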
(That’s standard special relativity. I asked o3 how that changes with cosmic expansion and it claims that, given our current understanding of cosmic expansion, they will leave each other’s lightcone after about 20 billion years.)
Right, so even with near-c von Neumann probes in all directions, vacuum collapse or some other galactic x-risk moving at c would only allow civilization to survive as a thin spherical shell of space on a perpetually migrating wave front around the extinction zone, which would quickly eat up the center of the colonized volume.
Such a civilization could still contain many planets and stars if it can get a decent head start before a galactic x-risk occurs and travel at near c without getting slowed down much by having to make stops to produce and accelerate more von Neumann probes. Yeah, that's a lot of ifs.
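As a rough sketch of what that head start buys (my own illustration with assumed numbers; it ignores cosmic expansion and any slowdown from stopping to build more probes): if a light-speed risk front starts from the origin T years after expansion begins, it doesn't catch the outermost probes, moving at a fraction v/c of light speed, until roughly T / (1 - v/c) years after expansion begins.

```python
# Rough sketch: how long a sub-light expansion front outruns a light-speed risk
# front launched later from the same origin. Numbers are assumptions for illustration.
head_start_years = 1_000        # assumed delay before the galactic x-risk begins
probe_speed_frac_c = 0.99       # assumed probe speed as a fraction of c

# Colonization front radius: v*t. Risk front radius: c*(t - T).
# They meet when c*(t - T) = v*t, i.e. t = T / (1 - v/c).
catch_up_years = head_start_years / (1 - probe_speed_frac_c)
print(f"Risk front catches the outermost probes ~{catch_up_years:,.0f} years after expansion begins")
# -> ~100,000 years with these numbers, after which the front sweeps up even the
#    outermost probes (absent cosmic expansion).
```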
The 20-billion-year estimate seems accurate, so cosmic expansion only protects against galactic x-risks on very long timescales. And without very robust governance, it's doubtful we'd get to that point.
Awesome speculations. We’re faced with such huge uncertainty and huge stakes. I can try and make a conclusion based on scenarios and probabilities, but I think the simplest argument for not spreading throughout the universe is that we have no idea what we’re doing.
This might even apply to spreading throughout the Solar System too. If I’m recalling correctly, Daniel Deudney argued that a self-sustaining colony on Mars is the point of no return for space expansion as it would culturally diverge from Earth and their actions would be out of our control.
By "probably" in the title, apparently I mean just over 50% chance ;)
I'm admittedly confused by this. I suppose when you wrote
… none of these have two ticks in my estimation. However, combined, I think this list represents a threat that is extremely likely to be real and capable of ending a galactic civilisation.
you meant that, combined, they nudge your needle 10%?
I'm unsure how to interpret "will probably doom". 2 possible readings:
1. A highly technologically advanced civilization that tries to get really big will probably wind up wiping itself out due to the dynamics in this post. More than half of all highly technologically advanced civilizations that grow really big go extinct due to drastically increasing their attack surface to existential threats.
2. The following claim is probably true: a highly technologically advanced civilization that tries to get really big will almost certainly wind up wiping itself out due to the dynamics in this post. Almost every very large, highly technologically advanced civilization that grows big has a doom-level event spawn in a pocket of the civilization and spread to the rest of it.
The 2nd reading is big if true—it implies that the EV of the future arising from our civilization is much lower than it would otherwise seem, and that civilizations might be better off staying small—but I disagree with it.
For the first one I’m on the agree side, but it doesn’t change the overall story very much.
Interstellar travel will probably doom the long-term future
A lot of the reason for my disagreement stems from thinking that most galactic-scale disasters either don’t actually serve as x-risks (like the von Neumann probe scenario), because they are defendable, or they require some shaky premises about physics to come true.
The "change the universe's constants" scenario is an example.
Also, in most modern theories of time travel you only get self-consistent outcomes, so the classic portrayals of using time travel to destroy the universe through paradoxical inputs wouldn't work; the paradox would almost certainly be prevented beforehand.
The biggest uncertainty here is how much acausal trade lets us substitute for the vast distances that make traditional causal governance impossible.
For those unaware of acausal trade: it's basically replacing direct communication with predicting what the other party wants. If you have the ability to run vast numbers of simulations, you can get very, very good predictive models of what the other wants, such that both of you can trade without requiring any communication, which is necessary for realistic galactic empires/singletons to exist:
https://www.lesswrong.com/w/acausal-trade
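For intuition only, here is a deliberately tiny toy in the spirit of "cooperate via a model of the other party instead of messages" (my own illustration, far simpler than real acausal trade and not taken from the linked page):

```python
# Toy sketch: two parties "trade" without exchanging any messages, purely by
# consulting their model of the other side's decision procedure. Here the model
# is perfect by construction (it *is* the other side's procedure), standing in
# for the "vast amounts of simulation" mentioned above.

def my_policy(model_of_other):
    # Honour the deal iff our model says the other party runs the very same
    # decision procedure we do; otherwise play it safe and defect.
    return "cooperate" if model_of_other is my_policy else "defect"

def always_defect(_model_of_other):
    return "defect"

# A distant party that has converged on an identical procedure gets cooperation
# with zero communication; a party running a different rule does not.
their_policy = my_policy
print(my_policy(their_policy))     # -> cooperate
print(their_policy(my_policy))     # -> cooperate
print(my_policy(always_defect))    # -> defect
```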
I don't have much of an opinion on the question, but if it's true that acausal trade can basically substitute wholly for the communication that is traditionally necessary to suppress rebellions in empires, then most galactic/universe-scale risks are pretty easily avoidable, because we don't have to roll the dice on every civilization doing its own research that may lead to x-risk.
A lot of the reason for my disagreement stems from thinking that most galactic-scale disasters either don’t actually serve as x-risks (like the von Neumann probe scenario), because they are defendable, or they require some shaky premises about physics to come true.
I think each galactic x-risk on the list can probably be disregarded, but combined, and with the knowledge that we are extremely early in thinking about this, they present a very convincing case to me that at least 1 or 2 galactic x-risks are possible.
The biggest uncertainty here is how much acausal trade lets us substitute for the vast distances that make traditional causal governance impossible.
Really interesting point, and probably a key consideration on existential security for a spacefaring civilisation. I’m not sure if we can be confident enough in acausal trade to rely on it for our long-term existential security though. I can’t imagine human civilisation engaging in acausal trade if we expanded before the development of superintelligence. There are definitely some tricky questions to answer about what we should expect other spacefaring civilisations to do. I think there’s also a good argument for expecting them to systematically eliminate other spacefaring civilisations rather than engage in acausal trade.
I think each galactic x-risk on the list can probably be disregarded, but combined, and with the knowledge that we are extremely early in thinking about this, they present a very convincing case to me that at least 1 or 2 galactic x-risks are possible.
I think this is kind of a crux, in that I currently think the only possible galactic-scale risks are ones where our standard model of physics breaks down in a deep way. Once you can get at least one Dyson swarm going, you are virtually invulnerable to extinction methods that don't involve us being very wrong about physics.
This is always a tail risk of interstellar travel, but I would not say that interstellar travel will probably doom the long-term future as stated in the title.
A better title would be "interstellar travel poses unacknowledged tail risks".
Really interesting point, and probably a key consideration on existential security for a spacefaring civilisation. I’m not sure if we can be confident enough in acausal trade to rely on it for our long-term existential security though. I can’t imagine human civilisation engaging in acausal trade if we expanded before the development of superintelligence. There are definitely some tricky questions to answer about what we should expect other spacefaring civilisations to do. I think there’s also a good argument for expecting them to systematically eliminate other spacefaring civilisations rather than engage in acausal trade.
I agree that if there's an x-risk that isn't defendable (for the sake of argument), then acausal trade relies on every other civilization choosing to acausally trade in a manner where the parent civilization can prevent x-risk. But the good news is that a lot of the more plausible (in a relative sense) x-risks have a light-speed limit, which, given that we are probably alone in the observable universe (via the logic of Dissolving the Fermi Paradox), means that only humanity really has to do acausal trade.
And a key worldview crux: conditional on humanity becoming a spacefaring civilization, I expect superintelligence that takes over the world to come first, because it's much easier to develop AI good enough to open up space than it is for humans to go spacefaring alone.
And AI progress is likely to be fast enough that there's very little time for rogue spacefarers to get outside the parent civilization's control.
The Dissolving the Fermi Paradox paper is here: https://arxiv.org/abs/1806.02404
I feel like many of these risks could go either way as annihilation or immortality. For example, changing fundamental physics or triggering vacuum decay could unlock infinite energy, which could lead to an infinitely prosperous (and protected) civilization.
Essentially, just as there are galactic existential risks, there are galactic existential security events. One potential idea would be extracting dark energy from space to self-replicate in the intergalactic void, continually expanding forever.
Interstellar travel will probably doom the long-term future
My intuition is that most of the galactic existential risks listed are highly unlikely, and it is possible that the likely ones (self-replicating machines and ASI) may be defense-dominant. An advanced civilization capable of creating self-replicating machines to destroy life in other systems could well be capable of building defense systems against a threat like that.