Thank you for reading and for your insightful reply!
I think you’ve correctly pointed out one of the cruxes of the argument: that humans have average “quality of sentience,” as you put it. In your analogous examples (except for the last one), we have a lot of evidence to compare things to. We can say with relative confidence where our genetic line or academic research stands in relation to what might replace it, because we can measure what average genes or average research look like.
So far, we don’t have this ability for alien life. If we start updating our estimate of the number of alien life forms in our galaxy, their “moral characteristics,” whatever that might mean, will be very important for the reasons you point out.
Maxwell—yep, that makes sense. Counterfactual comparisons are much easier when comparing relatively known options, e.g. ‘Here’s what humans are like, as sentient, sapient, moral beings’ vs. ‘Here’s what raccoons could evolve into, in 10 million years, as sentient, sapient, moral beings’.
In some ways it seems much, much harder to predict what ETIs might be like than to predict what we are like. However, the paper I linked (here) argues that some of the evolutionary principles might be similar enough that we can make some reasonable guesses.
However, that only applies to the base-level, naturally evolved ETIs. Once they start self-selecting, self-engineering, and building AIs, those might deviate quite dramatically from the naturally evolved instincts and abilities that we can predict just from evolutionary principles, game theory, signaling theory, foraging theory, etc.