I somewhat agree with your points. Here are some contributions and pushbacks:
> I get that there’s been a lot of work on this and that we can make progress on it (I know, I’m an astrobiologist), but I’m sure there are so many unknown unknowns associated with the origin of life, the development of sentience, and spacefaring civilisation that we just aren’t there yet. The universe is so enormous and bonkers, and our brains are so small; we can make numerical estimates, sure, but producing a number doesn’t necessarily mean we have more certainty.
Something interesting about these hypotheses and their implications is that they get stronger the more uncertain we are, as long as one uses some form of EDT (e.g., CDT + exact copies). The less we know about how conditioning on Humanity's ancestry impacts utility production, the closer the Civ-Similarity Hypothesis is to being correct. The broader our distribution over the density of SFCs in the universe, the closer the Civ-Saturation Hypothesis is to being correct. This holds as long as you account for the impact of correlated agents (e.g., exact copies), assuming they exist. For the Civ-Similarity Hypothesis, this comes from the application of the Mediocrity Principle. For the Civ-Saturation Hypothesis, it comes from the fact that we have orders of magnitude more exact copies in saturated worlds than in empty worlds.
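To make the copy-counting step concrete, here is a toy sketch. The world names, priors, and copy counts are purely illustrative assumptions; the only point is that under EDT-style reasoning with exact copies, a world's decision-relevant weight is its prior times the number of copies of us it contains:

```python
# Toy sketch (illustrative numbers only): why a broad distribution over
# SFC density favors acting as if the Civ-Saturation Hypothesis holds,
# under EDT-style reasoning with exact copies.

worlds = {
    # name: (prior probability, relative number of exact copies of "us")
    "empty":     (0.5, 1),
    "sparse":    (0.3, 1e3),
    "saturated": (0.2, 1e9),
}

# Under EDT with exact copies, our decision is "made" once per copy, so
# the decision-relevant weight of a world is prior * copy count.
weights = {name: p * copies for name, (p, copies) in worlds.items()}
total = sum(weights.values())
posterior = {name: w / total for name, w in weights.items()}

for name, w in posterior.items():
    print(f"{name:10s} decision-relevant weight: {w:.6f}")
# The saturated world dominates even though its prior is the smallest:
# most copies of our decision are made in saturated worlds.
```

The broader the distribution (i.e., the more weight any high-density world gets), the more the saturated worlds dominate, because their copy counts are orders of magnitude larger.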
> I think you’re posing a post-understanding-of-consciousness question. Consciousness might be very special, or it might be an emergent property of anything that synthesises information; we just don’t know. But it’s possible to imagine aliens with complex behaviour similar to ours but without the consciousness aspect, as superintelligent AI probably will be. For now, the safe assumption is that we’re the only conscious life, and I think it’s very important that we act like it until proven otherwise.
Consciousness is indeed one of the arguments pushing the Civ-Similarity Hypothesis toward lower values (humanity being more important), and I am eager to discuss its potential impact. Here are several reasons why the update from consciousness may not be that large:
Consciousness may not be binary. In that case, we don’t know whether humans are low, medium, or high consciousness; I only know that I am not at zero. We should then likely assume we are average. Then, the relevant comparison is no longer between P(humanity is “conscious”) and P(aliens creating SFCs are “conscious”) but between P(humanity’s consciousness > 0) and P(aliens-creating-SFC’s consciousness > 0).
If human consciousness is a random fluke with no impact on behavior, then it could not have been selected for or against, and we have no reason to think that aliens will create more or less conscious descendants than us. Consciousness needs to have a significant impact on behavior to change the chance that (artificial) descendants are conscious. But the larger the effect of consciousness on behavior, the more likely consciousness is to be a result of evolution/selection.
We don’t understand much about how the consciousness of SFC creators would influence the consciousness of (artificial) SFC descendants. Even if Humans are abnormal in being conscious, it is very uncertain how much that changes how likely our (artificial) descendants are to be conscious.
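Here is a toy sketch of the first reason above. All the numbers (the 0.9, the lognormal distribution) are my illustrative assumptions; the point is only that if humanity's consciousness level is treated as an average draw from the same distribution as alien SFC-creators (per the Mediocrity Principle), the shared mean cancels and only the P(consciousness > 0) terms survive:

```python
# Toy sketch (illustrative assumptions only): under a continuous model
# of consciousness plus the Mediocrity Principle, the human/alien
# expected-utility comparison reduces to a ratio of P(consciousness > 0).
import random

random.seed(0)

def sample_consciousness():
    # Hypothetical shared distribution over consciousness levels, for
    # civilisations whose level is > 0 (units are arbitrary).
    return random.lognormvariate(0.0, 1.0)

p_nonzero_humans = 1.0   # we observe our own consciousness is not zero
p_nonzero_aliens = 0.9   # assumed P(alien SFC-creators' consciousness > 0)

n = 100_000
mean_level = sum(sample_consciousness() for _ in range(n)) / n

# Expected consciousness level, up to a common utility factor:
humans = p_nonzero_humans * mean_level
aliens = p_nonzero_aliens * mean_level

print(f"human/alien expected ratio: {humans / aliens:.2f}")
# The shared mean cancels, leaving P(humans > 0) / P(aliens > 0), so the
# binary question "are humans specially conscious?" matters less.
```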
I am very happy to get pushback and to debate the strength of the “consciousness argument” on Humanity’s expected utility.
Thanks for your reply, lots of interesting points :)
> Consciousness may not be binary. In that case, we don’t know whether humans are low, medium, or high consciousness; I only know that I am not at zero. We should then likely assume we are average. Then, the relevant comparison is no longer between P(humanity is “conscious”) and P(aliens creating SFCs are “conscious”) but between P(humanity’s consciousness > 0) and P(aliens-creating-SFC’s consciousness > 0).
I particularly appreciate that reframing of consciousness. I think it’s probably both binary and continuous, though: binary in the sense that you need “machinery” that’s capable of producing consciousness (e.g., neurons in a brain seem to work), and then, given that capable machinery, you have the range from low to high consciousness, like we see on Earth. If intelligence is related to consciousness level, as it seems to be on Earth, then I would expect that any alien with “capable machinery” that’s intelligent enough to become spacefaring would have consciousness high enough to satisfy my worries (though not necessarily at the top of the range).
So then any alien civilisation would either be “conscious enough” or “not conscious at all”, conditional on (a) the machinery of life being binary in its ability to produce a scale of consciousness and (b) consciousness being correlated with intelligence.
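The dichotomy above can be sketched as a toy model. The gate, the intelligence link, and both thresholds are illustrative assumptions, not a claim about how consciousness actually works:

```python
# Toy sketch of the "both binary and continuous" model: a binary gate
# (capable machinery) plus a level that scales with intelligence.
# All thresholds and the linear link are illustrative assumptions.

def consciousness_level(has_machinery: bool, intelligence: float) -> float:
    """Zero without capable machinery; otherwise scales with intelligence."""
    if not has_machinery:
        return 0.0
    return intelligence  # simplest monotone link; units are arbitrary

SPACEFARING_THRESHOLD = 10.0  # assumed intelligence needed to go spacefaring
CONSCIOUS_ENOUGH = 5.0        # assumed level that would satisfy the worry

# Conditional on (a) the binary gate and (b) the intelligence link, any
# spacefaring civilisation is either "conscious enough" or "not
# conscious at all": nothing in between is reachable.
for machinery in (True, False):
    level = consciousness_level(machinery, SPACEFARING_THRESHOLD)
    assert level >= CONSCIOUS_ENOUGH or level == 0.0
    print(f"machinery={machinery}: level={level}")
```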
So I’m not betting on it. The stakes are so high (a universe devoid of sentience) that I would have to meet aliens and test their consciousness with a ‘perfect’ theory of consciousness before updating any strategy towards reducing P(ancestral-human SFC), even if the Civ-Similarity Hypothesis has an extremely high probability of being true.