Maxwell—yep, that makes sense. Counterfactual comparisons are much easier when comparing relatively known options, e.g. ‘Here’s what humans are like, as sentient, sapient, moral beings’ vs. ‘Here’s what raccoons could evolve into, in 10 million years, as sentient, sapient, moral beings’.
In some ways it seems much, much harder to predict what ETIs might be like, compared to us. However, the paper I linked (here) argues that some of the evolutionary principles might be similar enough that we can make some reasonable guesses.
That said, this only applies to the base-level, naturally evolved ETIs. Once they start self-selecting, self-engineering, and building AIs, they might deviate quite dramatically from the naturally evolved instincts and abilities that we can predict just from evolutionary principles, game theory, signaling theory, foraging theory, etc.