Great post! I’ve never been convinced that the Precipice ends when we become multi-planetary. So I really enjoyed this summary and critique of Thorstad. And I might go even further and argue that not only does space settlement not mitigate existential risk, but it actually might make it worse.
I think it’s entirely possible that the more planets in our galaxy we colonise, the higher the likelihood that life in the universe eventually goes extinct. The argument breaks down like this:
Assumption 1: The powers of destruction will always be more powerful than the powers of construction or defence. That is, at the limits of technology there will be powers that a galactic civilisation could not defend against once they were created, even if its colonies remain isolated and never communicate with one another.
Examples:
Vacuum collapse (an expanding bubble of true vacuum spreads at the speed of light and destroys everything it reaches).
Unaligned superintelligence. Given Assumption 1, I think an unaligned superintelligence could destroy a galactic civilisation even if an aligned superintelligence were trying to protect it, especially if it were aligned with the goal of destroying everything.
Self-replicating robots. Spaceships that mine a planet’s resources to replicate themselves and then move on. This could quickly become an exponentially growing swarm that strips every system it reaches.
Space lasers. Beams that travel at the speed of light through the vacuum of space, so no planet would see them coming or have time to mount a defence. This favours a strike-first strategy: the only way to protect yourself is to destroy everyone else before they destroy you.
Assumption 2: Only one of the above needs to be possible, and it would only take one civilisation in the galaxy creating it (by accident or otherwise) to put all life in the galaxy at risk.
Assumption 3: It would be extremely difficult to centrally govern all of these colonies and to detect the development of these technologies, because the colonies will be light years apart. Sending and receiving messages between them could take thousands of years.
Assumption 4: The more colonies that exist in our galaxy, the higher the likelihood that one of these galaxy-ending inventions is eventually created.
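To make the compounding in Assumption 4 concrete, here is a minimal sketch assuming each colony independently has some small probability p per period of producing one of these technologies; the value of p is purely illustrative, not an estimate.

```python
# Minimal sketch of Assumption 4: if each colony independently has a small
# chance p of creating a galaxy-ending technology in a given period, the
# chance that at least one of n colonies does so is 1 - (1 - p)**n.

def p_any_catastrophe(n_colonies: int, p_per_colony: float) -> float:
    """Probability that at least one colony creates the technology."""
    return 1 - (1 - p_per_colony) ** n_colonies

p = 1e-6  # hypothetical per-colony chance per period (illustrative only)
for n in (1, 1_000, 1_000_000, 100_000_000):
    print(f"{n:>11,} colonies -> {p_any_catastrophe(n, p):.4f}")

# Prints roughly 0.0000, 0.0010, 0.6321 and 1.0000 respectively. However
# small p is, a large enough number of colonies pushes the cumulative
# chance towards 1.
```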
So if the above is true, then I see 3 options:
We colonise the galaxy and, for the reasons above, all life in the universe eventually goes extinct. No long-term future.
Once we start colonising exoplanets, there is no stopping the wave of galactic colonisation. So we stay on Earth, or within our own solar system, until we can figure out a governance system that protects us against x-risks capable of destroying a galactic civilisation. This limits the importance of the long-term future.
We colonise the galaxy with extreme surveillance of every colony by independently acting artificial intelligence systems capable of detecting and destroying any dangerous technologies. This sounds a lot like it could become an s-risk or devolve into tyranny, but it might be the best option.
I would like to look into this further. If it’s true, then longtermism is pretty much bust and we should focus on saving animals from factory farming instead… or on solving the galaxy-destroying problem… it would be nice to have a long pause to do that.
Thanks! It seems to me that we should be cautious about assuming that attackers will have the advantage. IR scholars have spent a lot of time examining the offence-defence balance in terrestrial military competition, and while there’s no consensus—even about whether a balance can be identified—I think it’s fair to say that most scholars who find the concept useful believe it tends to favour the defence. That seems particularly plausible when it’s a matter of projecting force at interstellar distances—though if space lasers are possible it could be a different matter (I’d like to know more about this, as I noted in my original post).
If, moreover, attack were possible, it might be with the aim not of destruction, but of conquest. If it succeeded, so long as it didn’t lead to outright extinction, this could still mean astronomical suffering. That is a problem with Thorstad’s argument which I’ll pick up in a subsequent post—it treats existential risks as synonymous with extinction ones.
Space lasers don’t seem as much of a threat as Jordan posits. They have to be fired from somewhere. If that’s within the solar system they’re targeting, then that system will still have plenty of time to see the object that’s going to shoot them arriving. If they’re much further out, it becomes much harder both to aim them correctly and to provide enough power to keep them focused, and the source needs to be commensurately more powerful (and so more expensive to run), with a bigger lens, making it more visible while under construction and more vulnerable to conventional attack. Or you could just react to the huge lens by building a comparatively tiny mirror protecting the key targets in your system. Or you could build a Dyson swarm and not have any single target on which the rest of the settlement depends.
This guy estimates the maximum effective range of lasers against anything that can react (which, at a high enough tech level, includes planets) at about one light second.
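For a rough sense of why effective range is so limited, here is a separate back-of-the-envelope diffraction calculation (my own sketch, not the estimate above); the 1-micron wavelength and 10 m aperture are illustrative assumptions, and it ignores pointing error, beam quality and absorption.

```python
# Diffraction limit: a beam of wavelength L fired through an aperture of
# diameter D has a central spot roughly 2 * 1.22 * L * R / D across at range R.
# The wavelength and aperture below are illustrative, not claims about any
# particular weapon.

LIGHT_SECOND_M = 2.998e8   # metres
LIGHT_YEAR_M = 9.461e15    # metres

def spot_diameter_m(wavelength_m: float, aperture_m: float, range_m: float) -> float:
    """Approximate diameter of the diffraction-limited spot at the target."""
    return 2 * 1.22 * wavelength_m * range_m / aperture_m

wavelength = 1e-6  # 1 micron, near-infrared
aperture = 10.0    # 10 m mirror

for label, rng in (("1 light second", LIGHT_SECOND_M),
                   ("1 light year", LIGHT_YEAR_M),
                   ("4 light years", 4 * LIGHT_YEAR_M)):
    km = spot_diameter_m(wavelength, aperture, rng) / 1000
    print(f"{label:>14}: spot ~ {km:,.2f} km across")

# Roughly 0.07 km at one light second, ~2.3 million km at one light year and
# ~9.2 million km at four: at interstellar range the energy is spread so thin
# that either the aperture must be absurdly large or the beam does little damage.
```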
Self-replicating robots don’t seem to have any particular advantage when used as a weapon over ones built with more benign intent.