The universe is already 13.8 billion years old. Assuming that our world is roughly representative of how long it takes for a civilization to spring up after a planet forms (4.5 billion years), there have been about 9 billion years during which other, more advanced civilizations could have developed. Assuming it takes something like 100 million years to colonize an entire galaxy, one would already expect to see aliens having colonized the Milky Way, or having initiated at least one of the existential risks that you describe. The fact that we are still here is anthropic evidence that either we are somehow alone in the galaxy, the existential risks are overblown, or, more likely, there are already benign aliens of some kind in our neighbourhood who for whatever reason are leaving Earth alone (to our knowledge, anyway) and are probably protecting the galaxy from those existential risks.
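To make the timeline concrete, here is a rough back-of-envelope sketch in Python, using only the figures quoted above (the 4.5-billion-year and 100-million-year numbers are the assumptions stated in this comment, not measurements):

```python
# Rough back-of-envelope version of the timeline argument above.
# All numbers are the assumptions from this comment, not measurements.

UNIVERSE_AGE_YR = 13.8e9        # current age of the universe
PLANET_TO_CIV_YR = 4.5e9        # planet formation -> civilization, with Earth as the template
GALAXY_COLONIZATION_YR = 100e6  # assumed time to colonize an entire galaxy

# Window during which earlier civilizations could have emerged and matured.
head_start_yr = UNIVERSE_AGE_YR - PLANET_TO_CIV_YR
print(f"Head-start window: ~{head_start_yr / 1e9:.1f} billion years")

# How many full galaxy-colonization sweeps would fit into that window?
sweeps = head_start_yr / GALAXY_COLONIZATION_YR
print(f"Colonization sweeps that fit in the window: ~{sweeps:.0f}")
```

Under those assumptions the head-start window is roughly 90 times longer than a full colonization sweep, which is the intuition behind expecting the Milky Way to have been settled (or sterilized) long ago.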
(Note: I’m aware of the Grabby Aliens theory, but even if we are early, it still seems unlikely to me that we are the very first civilization out there.)
Keep in mind, the most advanced aliens are likely BILLIONS of years ahead of us in development. They’re likely unfathomably powerful. If we exist and they exist, they’re probably also wise and benevolent in ways we don’t understand (or else we wouldn’t be here living what seem like net-positive lives). Maybe there exist strong game-theoretic proofs, not yet known to us, for cooperation and benevolence that ensure any rational civilization or superintelligence will have strong reasons to cooperate at a distance and not initiate galaxy-killing existential risks. Maybe those big voids between galaxies are where not-so-benign civilizations sprouted and galaxy-killing existential risks occurred.
Though, it could also be that time travellers / simulators / some other sci-fi-ish entities “govern” the galaxy. Like, perhaps humans are the first civilization to develop time travel and so use their temporal supremacy to ensure the galaxy is ripe for human civilization alone, which could explain the Fermi Paradox?
All this is, of course, wild speculation. These kinds of conjectures are very hard to ground in anything more solid.
Anyways, I also found your post very interesting, but I’m not sure whether any of these galactic-level existential risks are tractable in any meaningful way at our current level of development. Maybe we should take things one step at a time?
Hi Joseph, glad you found the post interesting :)
Yeah, for “the way forward” section I explicitly assume that alien civilisations have not already developed. This might be wrong, I don’t know. One possible argument in line with my reasoning around galactic x-risks is that aliens don’t exist because of the Anthropic Principle: if they had already emerged, then we would have been killed a long time ago, so if we exist, it’s impossible for alien civilisations to have emerged already. No alien civilisations exist for the same reason that the fine-structure constant allows biochemistry.
I’m not sure whether any of these galactic-level existential risks are tractable in any meaningful way at our current level of development. Maybe we should take things one step at a time?
I totally agree with this statement. I have huge uncertainty about what awaits us in the long-term future (in the post I compared myself to an Ancient Roman trying to predict AI alignment risks). But it seems that, since the universe is currently conducive to life, any large unknown change is more likely to end us than to help us. So the main practical suggestion I have is that we take things one step at a time and hold off on interstellar travel (which could plausibly begin in the next few decades) until we know more about galactic x-risks and galactic governance.
It’s not necessarily that these galactic-scale considerations will become relevant soon or are tractable now, but that we might begin a series of events (i.e., the creation of a self-propagating spacefaring civilisation) that interferes with the best possible strategy for avoiding them in the long-term future. I don’t claim to know what that strategy is, but I suggest some requirements a governance system may have to meet.
I’m not sure I agree that the Anthropic Principle applies here. It would if ALL alien civilizations were guaranteed to be hostile and expansionist (i.e. grabby aliens), but I think there’s room in the universe for many possible kinds of alien civilizations, and so if we allow that some but not all aliens are hostile expansionists, then there might be pockets of the universe where an advanced alien civilization quietly stewards its region. You could call them the “Gardeners”. It’s possible that even if we can’t exist in a region with Grabby Aliens, we could still exist either in an empty region with no aliens or in a region with Gardeners.
Also, realistically, if you assume that the reach of an alien civilization spreads at the speed of light, but that its effective expansion rate is much slower because it doesn’t need new space until the space it already has is filled up with population and megastructures, then it’s very possible that we are within the reach of advanced aliens who just haven’t expanded this far yet. Naturally occurring life might be rare enough that they would see value in not destroying or colonizing such planets, perhaps viewing us as a scientifically valuable natural experiment, like the Galapagos were to Darwin.
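A tiny sketch of that reach-versus-settlement gap. The civilization’s age, its effective settlement speed, and its distance from Earth are all made-up numbers for illustration; only the speed-of-light bound on reach comes from the paragraph above:

```python
# Toy illustration of "within reach, but not yet settled".
# The age, effective speed, and distance below are hypothetical assumptions;
# only the light-speed limit on reach is taken from the comment.

C_LY_PER_YR = 1.0             # light travels 1 light-year per year, by definition

civ_age_yr = 1.0e9            # assumed: the civilization emerged 1 billion years ago
effective_speed = 0.001       # assumed: the settled frontier advances at 0.1% of c
distance_to_earth_ly = 5.0e6  # assumed: they arose ~5 million light-years away

reach_ly = C_LY_PER_YR * civ_age_yr        # how far probes or signals could have travelled
settled_ly = effective_speed * civ_age_yr  # how far the populated frontier has advanced

print(f"Reach radius:   {reach_ly:.1e} ly")
print(f"Settled radius: {settled_ly:.1e} ly")
print("Earth within their reach: ", distance_to_earth_ly <= reach_ly)
print("Earth already settled:    ", distance_to_earth_ly <= settled_ly)
```

With numbers like these the two radii differ by a factor of a thousand, which is the sense in which we could sit inside an advanced civilization’s light cone without ever having been visited or colonized.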
So, I think there are reasons why advanced aliens aren’t necessarily mutually exclusive with our survival, which is what the Anthropic Principle argument would require them to be.
Granted, I don’t know which of empty space, Gardeners, or late expanders is most likely, and I would hesitate to assign probabilities to them.
I’m not sure I agree that the Anthropic Principle applies here. It would if ALL alien civilizations were guaranteed to be hostile and expansionist
I’d be interested to hear why you think this. I think that, based on the reasoning in my post, all it takes is one alien civilization to emerge that would initiate a galactic x-risk: maybe because they accidentally create astronomical suffering and want to end it, because they are hostile, because they would prefer different physics for some reason, or because they are just irresponsible.
Most of the galactic x-risks should be limited by the speed of light (because causality is limited by the speed of light), and would, if initiated, probably expand like a bubble from their source, propagating outward at the speed of light. Thus, assuming a reasonably random distribution of alien civilizations, there should be regions of the universe that are currently unaffected, even if one or more alien civilizations have already caused a galactic x-risk to occur. We are most probably in such a region; otherwise we would not exist. So, yes, the Anthropic Principle applies in the sense that we eliminate one possibility (x-risk-causing aliens nearby), but it doesn’t eliminate all the other possibilities (being alone in the region, or having non-x-risk-causing aliens nearby), which is what I meant. I should have explained that better.
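A minimal Monte Carlo sketch of that point: scatter hypothetical x-risk events at random positions and start times, let each one destroy a sphere that grows at light speed, and ask what fraction of random observer locations is still outside every bubble today. Every parameter here (box size, number of events, time span) is an arbitrary assumption for illustration; only the light-speed bubbles come from the reasoning above.

```python
# Minimal Monte Carlo sketch: random light-speed destruction bubbles,
# and the fraction of observer points that remain unaffected today.
# All parameters are arbitrary illustrative assumptions.

import random

random.seed(0)

BOX_LY = 2.0e10         # side length of the toy volume, in light-years (assumed)
TIME_SPAN_YR = 9.0e9    # window in which x-risk events could have started (assumed)
NUM_EVENTS = 10         # assumed number of x-risk initiations inside the box
NUM_OBSERVERS = 10_000  # random sample points standing in for possible observers

def random_point():
    return tuple(random.uniform(0.0, BOX_LY) for _ in range(3))

# Each event: (position, years since it started). Its bubble radius in light-years
# equals its age in years, since the front expands at 1 light-year per year.
events = [(random_point(), random.uniform(0.0, TIME_SPAN_YR)) for _ in range(NUM_EVENTS)]

def inside_any_bubble(p):
    for (ex, ey, ez), age_yr in events:
        dist_sq = (p[0] - ex) ** 2 + (p[1] - ey) ** 2 + (p[2] - ez) ** 2
        if dist_sq <= age_yr ** 2:
            return True
    return False

unaffected = sum(not inside_any_bubble(random_point()) for _ in range(NUM_OBSERVERS))
print(f"Fraction of observer points still unaffected: {unaffected / NUM_OBSERVERS:.0%}")
```

Anthropic selection then just says that observers like us can only find themselves among the unaffected points, which rules out a nearby x-risk bubble but says nothing about whether benign aliens share our region.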
Also, the reality is that our long-term future is limited by the eventual heat death of the universe anyway (we will eventually run out of usable energy), so there is no way for our civilization to last forever (short of some hypothetical time travel shenanigans). We can at best delay the inevitable, and maximize the flourishing that occurs over spacetime.