Interesting! I’m glad to see engagement with Thorstad’s work, though this is an area where I find myself less convinced.
Interstellar colonisation is insanely difficult and resource-intensive, so I expect any widespread dispersal of humanity beyond our solar system to be extremely far off in the future. If you think that existential risk is high, there may be only an extremely small chance we survive to that point.
I’m also not sure about your point on “misaligned AIs”. Firstly, this should be “extinctionist AIs” or something similar, as it seems very unlikely that all misaligned AIs would actively want to hunt down tiny remnants of humanity. But if they were out to kill us, why would they need a receiver? It’s far easier to send an automated killer probe long distances than to send a human colony, so it seems they’d be able to hunt down colonies physically if they needed to.
Thanks! So far as I know, you’re right about interstellar travel. But suppose we got a good bit of dispersal within the solar system, say, ten settlements. There seems a reasonable chance that at least some would deliberately break off communication with the rest of the solar system and develop effective means of policing this. They would then—so far as I can tell—be immune to existential risks transmitted by information—e.g., misaligned AI.
It’s true that they could still be vulnerable to physical attack, such as a killer probe, but how likely is this? It’s conceivable that either human actors or misaligned ASI could decide to wipe out or conquer hermit settlements elsewhere in the solar system, but that strikes me as rather improbable. They’d have to have a strange set of motives.
It might also be hard to do. Since the aggressor would have to project power across a huge distance, so long as the potential victims had means of detecting a probe or some other attack, we might expect the offence-defence balance to favour the defence. (This wouldn’t be true, however, if the reason the settlements had ‘gone off the grid’ was that they had returned to pre-modern conditions, either by choice or by catastrophe.)
I think any AI that is capable of wiping out humanity on Earth is likely to be capable of wiping out humans on all the planets in our solar system. Earth is far more habitable than those other planets, so settlements elsewhere would be correspondingly fragile and easier to take out. I also don’t think distance would be much of an advantage: a current-day spacecraft takes only about ten years to reach Pluto, so the playing field isn’t very large.
I think your point about motivation is important, but it also applies on Earth. Why would an AI bother to kill off the isolated Sentinelese islanders? A lot of the answers to that question (like needing to turn all available resources into computing power) could also motivate it to attack an isolated Pluto colony. So if you accept that AI is an existential threat on one planet, space settlement might not reduce that threat by much on the motivation front.