A lot of my disagreement stems from thinking that most galactic-scale disasters either don’t actually serve as x-risks because they are defendable (like the von Neumann probe scenario), or require some shaky premises about physics to come true.
I think each galactic x-risk on the list can probably be disregarded, but combined, and with the knowledge that we are extremely early in thinking about this, they present a very convincing case to me that at least 1 or 2 galactic x-risks are possible.
I think this is kind of a crux, in that I currently think the only possible galactic-scale risks are ones where our standard model of physics breaks down in a deep way: once you can get at least one Dyson swarm going, you are virtually invulnerable to extinction methods that don’t involve us being very wrong about physics.
This is always a tail risk of interstellar travel, but I would not say that interstellar travel will probably doom the long-term future as stated in the title.
A better title would be “interstellar travel poses unacknowledged tail risks”.
The biggest uncertainty here is how much acausal trade lets us substitute for the vast distances that make traditional causal governance impossible.

Really interesting point, and probably a key consideration for existential security in a spacefaring civilisation. I’m not sure we can be confident enough in acausal trade to rely on it for our long-term existential security, though. I can’t imagine human civilisation engaging in acausal trade if we expanded before the development of superintelligence. There are definitely some tricky questions to answer about what we should expect other spacefaring civilisations to do. I think there’s also a good argument for expecting them to systematically eliminate other spacefaring civilisations rather than engage in acausal trade.
I agree that if there’s an x-risk that isn’t defendable (for the sake of argument), then acausal trade relies on every other civilization choosing to acausally trade in a manner where the parent civilization can prevent the x-risk. But the good news is that a lot of the more plausible (in a relative sense) x-risks have a light-speed limit, which means that, given we are probably alone in the observable universe (via the logic of Dissolving the Fermi Paradox), humanity only really has to do acausal trade with itself.
And a key worldview crux: conditional on humanity becoming a spacefaring civilization, I expect superintelligence that takes over the world to come first, because it’s much easier to develop AI tech good enough to develop space sufficiently than it is for humans to go spacefaring alone.
And AI progress is likely to be fast enough such that there’s very little time for rogue spacefarers to get outside of the parent civilization’s control.
The Dissolving the Fermi Paradox paper is here:
https://arxiv.org/abs/1806.02404
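For what it’s worth, the core move in that paper is to push full uncertainty distributions, rather than point estimates, through the Drake equation. Here’s a minimal Monte Carlo sketch of that idea; the parameter ranges below are rough illustrative assumptions of mine, not the paper’s actual priors.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000  # number of Monte Carlo samples


def log_uniform(low, high, size):
    """Sample uniformly in log10-space between low and high."""
    return 10 ** rng.uniform(np.log10(low), np.log10(high), size)


# Drake-equation factors, sampled over wide illustrative ranges
# (assumed for this sketch, not taken from the paper).
R_star = log_uniform(1, 100, N)     # star formation rate (stars per year)
f_p    = log_uniform(0.1, 1, N)     # fraction of stars with planets
n_e    = log_uniform(0.1, 1, N)     # habitable planets per planet-bearing star
f_l    = log_uniform(1e-30, 1, N)   # fraction of habitable planets that develop life
f_i    = log_uniform(1e-3, 1, N)    # fraction of those that develop intelligence
f_c    = log_uniform(1e-2, 1, N)    # fraction of those that become detectable
L      = log_uniform(1e2, 1e10, N)  # years a civilisation stays detectable

# Expected number of detectable civilisations in the galaxy, per sample
n_civs = R_star * f_p * n_e * f_l * f_i * f_c * L

print(f"median N = {np.median(n_civs):.3g}")
print(f"P(N < 1) = {np.mean(n_civs < 1):.2f}")  # share of samples where we're effectively alone
```

The exact numbers don’t matter; the point is that once the life-emergence factor is allowed to span many orders of magnitude, a large share of the probability mass lands on N < 1, which is the sense in which “we are probably alone” falls out of the paper.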