The validity of this hypothesis can be studied using models estimating the frequency of Space-Faring Civilizations (SFCs) in the universe (Sandberg 2018, Finnveden 2019, Olson 2020, Hanson 2021, Snyder-Beattie 2021, Cook 2022). Its validity will also depend on which decision theory we use and on our beliefs about these models.
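As a rough illustration of what such frequency models look like, here is a minimal Monte Carlo sketch in the spirit of propagating full parameter distributions rather than point estimates. The parameter ranges below are hypothetical placeholders, not values taken from any of the cited papers:

```python
import math
import random

# Minimal Drake-style Monte Carlo sketch of SFC frequency per star.
# All parameter ranges are hypothetical placeholders, not values from
# the cited papers; the point is that wide log-uniform uncertainty on
# each factor yields a very broad distribution over the final frequency.

def log_uniform(lo, hi):
    """Sample log-uniformly between lo and hi."""
    return 10 ** random.uniform(math.log10(lo), math.log10(hi))

def sfcs_per_star():
    f_life = log_uniform(1e-30, 1.0)   # P(life arises on a habitable planet)
    f_intel = log_uniform(1e-6, 1.0)   # P(life develops intelligence)
    f_sfc = log_uniform(1e-3, 1.0)     # P(intelligence becomes spacefaring)
    return f_life * f_intel * f_sfc

samples = sorted(sfcs_per_star() for _ in range(100_000))
median = samples[len(samples) // 2]
frac_alone = sum(s < 1e-22 for s in samples) / len(samples)
print(f"median SFCs per star: {median:.3g}")
print(f"fraction of samples implying <1 SFC per ~1e22 stars: {frac_alone:.2f}")
```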
I’m very hesitant about making moral decisions concerning the donation of potentially millions of dollars based on something so speculative. I think it’s too far down the EA crazy train to prioritise different causes based on the density of alien civilisations. It’s probably more speculative than the simulation hypothesis (which, if true, significantly increases the likelihood that you are the only sentient being in this universe), but we don’t make moral decisions based on that.
I get that there’s been a lot of work on this and that we can make progress on it (I know, I’m an astrobiologist), but I’m sure there are so many unknown unknowns associated with the origin of life, the development of sentience, and spacefaring civilisation that we just aren’t there yet. The universe is so enormous and bonkers and our brains are so small. We can make numerical estimates, sure, but producing a number doesn’t necessarily mean we have more certainty.
How much counterfactual value Humanity creates then depends entirely on the utility Humanity’s spacefaring civilisation creates relative to all spacefaring civilisations.
I’ve got a big moral circle (all sentient beings and their descendants), but it does not extend to aliens because of cluelessness.
I think you’re posing a question that can only be answered once we understand consciousness. Consciousness might be very special, or it might be an emergent property of anything that synthesises information; we just don’t know. But it’s possible to imagine aliens with complex behaviour similar to ours that never evolved consciousness, much as superintelligent AI probably will be. For now, the safe assumption is that we’re the only conscious life, and I think it’s very important that we act like it until proven otherwise.
So for now, I’m quite confident that if we’re thinking about the moral utility of spacefaring civilisation, we should at least limit our scope to our own civilisation, more specifically, our own sentience and its descendants (I personally prefer to limit that scope even further to the next few thousand years, or just our Solar System, to reduce the ambiguity a bit; longtermism still stands strong with this huge limitation). I think the main value in looking into the potential density of aliens in the universe is that it helps figure out what our own future might look like. Even if humans only colonise the Solar System because alien SFCs colonise the galaxy, that’s still 10^27 potential future lives (1.2 sextillion over the next 6000 years; future life equivalents based on the Solar System’s carrying capacity; as opposed to 100 trillion if we stay on Earth till its destruction). We can control and predict that to an extent, and there’s enough ambiguity and cluelessness already associated with how to make human civilisation’s future in space good in the context of AI, but we can at least make some concrete decisions (e.g. work by Simon Institute & CLR).

Very interesting post though! Lots to think about and I can see that this could be the most important moral consideration… maybe… I look forward to your series and I definitely think it’s worthwhile to try and figure out what that consideration might be.
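As an aside, the two Solar System figures in the carrying-capacity estimate above are mutually consistent under a simple lives ≈ capacity × years / lifespan model. Here is a minimal sketch; the capacity and lifespan values are hypothetical placeholders chosen to reproduce the quoted numbers, not necessarily the assumptions behind the original estimate:

```python
# Back-of-envelope check of the figures above. The capacity and lifespan
# are hypothetical placeholders chosen to reproduce the quoted numbers,
# not the commenter's actual assumptions.

capacity = 1.5e19   # hypothetical sustained Solar System population
lifespan = 75       # hypothetical years per life

def total_lives(years):
    """Lives lived if `capacity` people are sustained for `years`."""
    return capacity * years / lifespan

print(f"next 6000 years:          {total_lives(6e3):.2g} lives")  # ~1.2e21
print(f"Sun's remaining ~5e9 yrs: {total_lives(5e9):.2g} lives")  # ~1e27
```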
I’ve got a big moral circle (all sentient beings and their descendants), but it does not extend to aliens because of cluelessness.
...
I’m quite confident that if we’re thinking about the moral utility of spacefaring civilisation, we should at least limit our scope to our own civilisation
I agree that the particular guesses we make about aliens will be very speculative/arbitrary. But “we shouldn’t take the action recommended by our precise ‘best guess’ about XYZ” does not imply “we can set the expected contribution of XYZ to the value of our interventions to 0”. I think if you buy cluelessness — in particular, the indeterminate beliefs framing on cluelessness — the lesson you should take from Maxime’s post is that we simply aren’t justified in saying any intervention with effects on x-risk is net-positive or net-negative (w.r.t. total welfare of sentient beings).
I somewhat agree with your points. Here are some contributions and pushbacks:
I get that there’s been a lot of work on this and that we can make progress on it (I know, I’m an astrobiologist), but I’m sure there are so many unknown unknowns associated with the origin of life, the development of sentience, and spacefaring civilisation that we just aren’t there yet. The universe is so enormous and bonkers and our brains are so small. We can make numerical estimates, sure, but producing a number doesn’t necessarily mean we have more certainty.
Something interesting about these hypotheses and their implications is that they get stronger the more uncertain we are, as long as one uses some form of EDT (e.g., CDT + exact copies). The less we know about how conditioning on Humanity’s ancestry impacts utility production, the closer the Civ-Similarity Hypothesis is to correct. The broader our distribution over the density of SFCs in the universe, the closer the Civ-Saturation Hypothesis is to correct. This holds as long as you account for the impact of correlated agents (e.g., exact copies) and such agents exist. For the Civ-Similarity Hypothesis, this comes from the application of the Mediocrity Principle. For the Civ-Saturation Hypothesis, it comes from the fact that we have orders of magnitude more exact copies in saturated worlds than in empty worlds.
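To make the exact-copies point concrete, here is a toy calculation (my own sketch, not from the post): if the number of exact copies of a decision-maker scales with SFC density, then even a flat prior over densities puts almost all of the EDT-style decision weight on the most saturated worlds.

```python
# Toy illustration: under EDT-style reasoning, the decision weight on a
# world scales with prior probability times the number of exact copies of
# the decision-maker it contains. Assume (hypothetically) that copies
# scale linearly with SFC density. All numbers are illustrative only.

densities = [1, 10, 100, 1_000, 10_000]  # hypothetical SFC densities
prior = [0.2] * len(densities)           # broad, flat prior over worlds

weights = [p * d for p, d in zip(prior, densities)]
total = sum(weights)

for d, w in zip(densities, weights):
    print(f"density {d:>6}: decision weight {w / total:.4f}")
# The densest world carries ~0.90 of the weight, so betting that the
# universe is saturated is close to optimal under this framing.
```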
I think you’re posing a question that can only be answered once we understand consciousness. Consciousness might be very special, or it might be an emergent property of anything that synthesises information; we just don’t know. But it’s possible to imagine aliens with complex behaviour similar to ours that never evolved consciousness, much as superintelligent AI probably will be. For now, the safe assumption is that we’re the only conscious life, and I think it’s very important that we act like it until proven otherwise.
Consciousness is indeed one of the arguments pushing the Civ-Similarity Hypothesis toward lower values (humanity being more important), and I am eager to discuss its potential impact. Here are several reasons why the update from consciousness may not be that large:
Consciousness may not be binary. In that case, we don’t know whether humans are low, medium, or high on consciousness; I only know that I am not at zero. We should then likely assume we are average. The relevant comparison is then no longer between P(humanity is “conscious”) and P(aliens creating SFCs are “conscious”), but between P(humanity’s consciousness > 0) and P(aliens-creating-SFCs’ consciousness > 0).
If human consciousness is a random fluke with no impact on behavior, then we have no reason to think that aliens will create more or less conscious descendants than we do (if it did affect behavior, it could be selected in or out). Consciousness needs to have a significant impact on behavior to change the chance that (artificial) descendants are conscious. But the larger the effect of consciousness on behavior, the more likely consciousness is to be a result of evolution/selection.
We don’t understand much about how the consciousness of SFC creators would influence the consciousness of (artificial) SFC descendants. Even if Humans are abnormal in being conscious, it is very uncertain how much that changes how likely our (artificial) descendants are to be conscious.
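To put the first of these points slightly more formally, here is a sketch in my own notation (not the author’s):

```latex
% Sketch (my notation): C_H, C_A = consciousness levels of humanity's and
% an alien SFC's descendants. Comparing expected levels:
\[
\frac{\mathbb{E}[C_H]}{\mathbb{E}[C_A]}
  = \frac{P(C_H > 0)\,\mathbb{E}[C_H \mid C_H > 0]}
         {P(C_A > 0)\,\mathbb{E}[C_A \mid C_A > 0]}
  \approx \frac{P(C_H > 0)}{P(C_A > 0)},
\]
% where the approximation invokes the Mediocrity Principle: conditional on
% consciousness being above zero, we have no particular reason to expect
% our level to differ from the average level among SFC creators.
```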
I am very happy to get pushback and to debate the strength of the “consciousness argument” on Humanity’s expected utility.
Thanks for your reply, lots of interesting points :)
Consciousness may not be binary. In that case, we don’t know whether humans are low, medium, or high on consciousness; I only know that I am not at zero. We should then likely assume we are average. The relevant comparison is then no longer between P(humanity is “conscious”) and P(aliens creating SFCs are “conscious”), but between P(humanity’s consciousness > 0) and P(aliens-creating-SFCs’ consciousness > 0).
I particularly appreciate that reframing of consciousness. I think it’s probably both binary and continuous though. Binary in the sense that you need a “machinery” that’s capable of producing consciousness (neurons in a brain, for example, seem to work). And if you have that capable machinery, you then have the range from low to high consciousness, like we see on Earth. If intelligence is related to consciousness level, as it seems to be on Earth, then I would expect that any alien with “capable machinery” that’s intelligent enough to become spacefaring would have consciousness high enough to satisfy my worries (though not necessarily at the top of the range).
So then any alien civilisation would either be “conscious enough” or “not conscious at all”, conditional on (a) the machinery of life being binary in its ability to produce a scale of consciousness and (b) consciousness being correlated with intelligence.
So I’m not betting on it. The stakes are so high (a universe devoid of sentience) that I would have to meet and test the consciousness of aliens with a ‘perfect’ theory of consciousness before I updated any strategy towards reducing P(ancestral-human SFC), even if there’s an extremely high probability of the Civ-Similarity Hypothesis being true.