Maxwell—this is an interesting and provocative argument that I haven’t seen before (although I’d be slightly surprised if nobody else has made similar arguments before). I think it’s worth taking seriously.
I think this is analogous to arguments like:
If other potential parents are going to have kids anyway, why should I have kids that pass along my particular genes and family traditions?
If other universities are going to keep teaching and doing research, why should I worry about whether my particular university lasts?
If dominant languages (such as English, Mandarin, and Hindi) persist, and allow easier global communication, why should we worry about whether less dominant languages like French, Arabic, or Bengali persist?
If other nation-states, cultures, and civilizations are likely to flourish anyway, why should I worry about the fate of my particular nation-state, culture, or civilization?
If other organic life-forms on Earth are likely to evolve high intelligence and technological civilizations, sooner or later, why should we worry about what happens to humanity?
We have strong gut-level intuitions that our particular legacy matters—whether genetic, cultural, linguistic, civilizational, or species-level. But does our particular legacy matter, or is this just an evolved self-deception to motivate parenting, tribalism, and speciesism?
At the level of overall quantity and quality of sentience, it might not matter that much whether humanity colonizes the galaxy (or light-cone), or whether some other (set of) intelligent species does.
However, I think one could make an argument that a diversity of genes, languages, civilizations, and space-faring species yields greater resilience, adaptability, complexity, and interest than a galactic monoculture would.
Also, it’s not obvious that ‘everything that rises must converge’, in terms of the specific details of how intelligent, sentient life experiences the universe. Maybe all intelligent life converges onto similar kinds of sentience. But maybe there’s actually some divergence of sentient experience, such that our descendants (whatever forms they take) will have some qualitatively unique kinds of experiences that other intelligent species would not, and vice versa. Given how evolution works, there may be a fair amount of overlap in psychology across extra-terrestrial intelligences (a point we’ve addressed in this paper), but the overlap may not be high enough that we should consider ourselves entirely replaceable.
I can imagine counter-arguments to this view. Maybe the galaxy would be better off colonized by a unified, efficient, low-conflict Borg civilization, or a sentient version of what Iain M. Banks called an ‘Aggressive Hegemonizing Swarm’. With a higher diversity of intelligent life-forms comes a higher likelihood of potentially catastrophic conflict.
I guess the key question is whether we think humanity, with all of our distinctive quirks, has something unique to contribute to the richness of a galactic meta-civilization—or whether whatever we evolve into would do more harm than good, if other smart ETIs already exist.
Thank you for reading and for your insightful reply!
I think you’ve correctly pointed out one of the cruxes of the argument: that humans have an average “quality of sentience,” as you put it. In your analogous examples (except for the last one), we have a lot of evidence to compare things to. We can say with relative confidence where our genetic line or academic research stands in relation to what might replace it, because we can measure what average genes or average research are like.
So far, we don’t have this ability for alien life. If we start updating our estimates of the number of alien life forms in our galaxy, their “moral characteristics,” whatever that might mean, will become very important for the reasons you point out.
Maxwell—yep, that makes sense. Counterfactual comparisons are much easier when comparing relatively known options, e.g. ‘Here’s what humans are like, as sentient, sapient, moral beings’ vs. ‘Here’s what raccoons could evolve into, in 10 million years, as sentient, sapient, moral beings’.
In some ways it seems much, much harder to predict what ETIs might be like, compared to us. However, the paper I linked (here) argues that some of the evolutionary principles might be similar enough that we can make some reasonable guesses.
However, that only applies to the base-level, naturally evolved ETIs. Once they start self-selecting, self-engineering, and building AIs, those might deviate quite dramatically from the naturally evolved instincts and abilities that we can predict just from evolutionary principles, game theory, signaling theory, foraging theory, etc.