I’m not that deep into AI safety myself, so keep that in mind. But that being said, I haven’t heard that thought before and basically agree with the idea of “if we fall victim to AI, we should at least do our best to ensure it doesn’t end all life in the universe” (which is basically how I took it—correct me if that’s a bad summary). There certainly are a few ifs involved though, and the outlined scenario may very well be unlikely:
probability of AI managing to spread through the universe (I’d intuitively assume that, of the set of possible AIs that end human civilization, the subset that also conquer space is notably smaller; I may certainly be wrong here, but it may be something to take into account)
probability of such an AI spreading far enough, and in such a way, as to effectively prevent the emergence of what would otherwise become a space-colonizing alien civilization
probability of alien civilizations existing and ultimately colonizing space in the first place (or at least developing the potential and living up to it, were it not for our ASI preventing them)
probability of aliens having values sufficiently similar to ours
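For what it’s worth, a crude way to see how these ifs stack up is to treat them as a (very rough) conjunction. Below is a minimal sketch; every number in it is a made-up placeholder rather than an estimate, and the only point is that the compound probability shrinks quickly when several conditions all have to hold:

```python
# Purely illustrative: the four "ifs" above treated as a rough conjunction.
# All numbers are made-up placeholders, not estimates.
p_ai_conquers_space = 0.5          # if 1 (hypothetical)
p_ai_blocks_alien_emergence = 0.5  # if 2 (hypothetical)
p_aliens_would_colonize = 0.5      # if 3 (hypothetical)
p_alien_values_similar = 0.5       # if 4 (hypothetical)

p_scenario = (p_ai_conquers_space
              * p_ai_blocks_alien_emergence
              * p_aliens_would_colonize
              * p_alien_values_similar)
print(p_scenario)  # 0.0625 with these placeholder inputs
```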
I guess there’s also a side to the Fermi paradox that’s relevant here—it’s not only that we don’t see alien civilizations out there, we also don’t see any signs of an ASI colonizing space. And while there may be many explanations for that, we’re still here, and seemingly on the brink of becoming/creating just the kind of thing an ASI would instrumentally like to prevent, which is at least some evidence that such an ASI does not yet exist in our proximity, which again is minor evidence that we might not create such an ASI either.
In the end I don’t really have any conclusive thoughts (yet). I’d be surprised though if this consideration were a surprise to Nick Bostrom.
I’m not really considering AI ending all life in the universe. If I understand correctly, it is unlikely that we or a future AI will be able to influence the universe outside of our Hubble sphere. However, there may be aliens that exist, or will in the future exist, within our Hubble sphere, and I think it would more likely than not be nice if they were able to make use of our galaxy and the ones surrounding it.
As a simplified example, suppose there is on average one technologically advanced civilization for every group of 100 galaxies, and that each civilization can access its own group of 100 galaxies as well as the 100 galaxies of each neighboring civilization.
If a rogue AI takes over the world, then it would probably also be able to take over the surrounding hundred galaxies. Colonizing some galaxies sounds feasible for an agent that can single-handedly take over the world. If the rogue AI did take over the galaxies, then I’m guessing they would be converted into paperclips or something of the like, and thus have approximately zero value to us. The AI would also be unlikely to let any neighboring alien civilizations do anything we would value with those 100 galaxies.
Suppose instead there is an existential catastrophe due to a nanotechnology or biotechnology disaster. Then even if intelligent life never re-evolved on Earth, a neighboring alien civilization may be able to colonize those 100 galaxies and do something we would value with them.
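To make that comparison concrete, here is a minimal back-of-the-envelope sketch; all of its parameters (the value per galaxy, the probability that a neighbor colonizes, the values overlap) are illustrative placeholders, not estimates:

```python
# Minimal sketch of the toy comparison above; every number is an
# illustrative placeholder, not an estimate.
GALAXIES_PER_CIV = 100   # one advanced civilization per ~100 galaxies (as assumed above)
VALUE_PER_GALAXY = 1.0   # value to us of a galaxy used by a civilization with values we'd endorse

# Scenario A: a rogue AI takes over, converts our ~100 galaxies into
# paperclips, and blocks neighboring civilizations from using them.
value_rogue_ai = 0.0 * GALAXIES_PER_CIV

# Scenario B: a non-AI existential catastrophe (nano/bio). Our galaxies sit
# empty until a neighboring civilization perhaps colonizes them.
p_neighbor_colonizes = 0.5  # hypothetical
p_values_overlap = 0.5      # hypothetical (the fourth "if" from the list above)
value_non_ai_catastrophe = (p_neighbor_colonizes * p_values_overlap
                            * VALUE_PER_GALAXY * GALAXIES_PER_CIV)

print(value_rogue_ai, value_non_ai_catastrophe)  # 0.0 vs 25.0 here
```

The particular numbers don’t matter; the difference between the two scenarios is driven entirely by whether neighboring civilizations ever get to use the galaxies.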
Thus, I don’t think the first two ifs you listed are essential for my reasoning to be relevant.
As for the third if, the claim that there isn’t a single other alien civilization in the Universe requires quite a large conjunction to hold, and thus seems unlikely. However, if the density of current or future alien civilizations is so low that we will never be in the Hubble sphere of any of them, then that would make my reasoning less relevant.
Thoughts?