Hi Michael, thank you very much for your comment.
I was not aware of some of these posts and will definitely look into them, thanks for sharing! I also eagerly await the compilation of crucial questions for longtermists, which sounds very interesting and useful.
You're right that I have not given consideration to what moral views re-evolved life would have; that is indeed a big question. One assumption I may have implicitly used but not discussed is that:
“While the probability of intelligent life re-evolving may be somewhat soluble and differ between different existential scenarios, the probability of it being morally aligned with humanity is not likely to differ in a soluble way between scenarios.”
Therefore it should not affect how we compare different x-risks. For example, if we assumed re-evolved life had a 10% chance of being morally aligned with humanity, this would apply in all existential scenarios and so would not affect how we compare them. What being “morally aligned” with humanity means, and whether that is what we want, is also a big question, I appreciate. I avoided discussing the moral philosophy because I’m uncertain how to approach it, but I agree it is a crucial question.
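To make the common-factor argument concrete, here is a minimal Python sketch. All the numbers and scenario names are hypothetical, chosen purely for illustration; the point is only that a scenario-independent alignment probability scales every scenario equally and so cannot change their ranking.

```python
# Toy model: if humanity goes extinct in scenario S, the residual expected
# value from re-evolved intelligent life is
#   P(re-evolution | S) * P(morally aligned) * V.
# P(morally aligned) is assumed identical across scenarios, so it multiplies
# every scenario by the same factor and cannot reorder them.

def residual_value(p_reevolution, p_aligned, value_if_aligned=1.0):
    """Expected value recovered by re-evolved life after an extinction scenario."""
    return p_reevolution * p_aligned * value_if_aligned

# Hypothetical per-scenario probabilities that intelligent life re-evolves.
scenarios = {"biorisk": 0.6, "nuclear war": 0.3, "runaway warming": 0.1}

def ranking(p_aligned):
    # Order scenarios by how much value re-evolved life would recover.
    return sorted(scenarios,
                  key=lambda s: residual_value(scenarios[s], p_aligned),
                  reverse=True)

# Whether re-evolved life is 10% or 90% likely to be morally aligned,
# the comparative ranking of scenarios is unchanged.
print(ranking(0.1) == ranking(0.9))  # True
```

The alignment probability cancels out of any pairwise comparison, which is why it drops out of the analysis under this assumption.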
I also completely agree that considerations of ETI could inform how we think about the probabilities of future evolution. Investigating this is my planned first avenue of research for getting a better grasp of the probabilities involved.
Thanks again for your comment and for the useful links!
Glad the “crucial questions for longtermists” series sounds useful! We should hopefully publish the first post this month.
“While the probability of intelligent life re-evolving may be somewhat soluble and differ between different existential scenarios, the probability of it being morally aligned with humanity is not likely to differ in a soluble way between scenarios.”
This seems a reasonable assumption. And I think it would indeed mean that it’s not worth paying much attention to differences between existential risks in how aligned with humanity any later intelligent life would be. But I was responding to claims from your original post like this:
It seems that such arguments could cause us to weigh more heavily x-risks that threaten more life on earth than just humans. This could increase how much we care about risks such as global warming and nuclear war compared to biorisk.
I do think that that is true, but how big that factor is might be decreased by the possibility that a future influenced by (existentially secure) independently evolved intelligent life would be “less valuable” than a future influenced by (existentially secure) humans. For example, if Alice thinks those independently evolving lifeforms would do things 100% as valuable as what humans would do, but Bob thinks they’d do things only 10% as valuable, then Alice and Bob will differ on how much worse it is to wipe out all possible future intelligent life vs “just” wiping out humanity. And in the extreme, someone could even think that such intelligent life would do completely valueless things, or things we would/should actively disvalue.
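The Alice/Bob disagreement can be made numeric with a small sketch. The re-evolution probability and value multipliers below are hypothetical, picked only to show the shape of the divergence.

```python
# Toy illustration of the Alice/Bob disagreement. Take a human-influenced
# future to be worth 1. If "just" humanity is wiped out, intelligent life
# re-evolves with probability p_re and realises a future worth `multiplier`
# times the human one (Alice: 1.0, Bob: 0.1). All numbers are hypothetical.

def relative_badness(p_re, multiplier):
    """How many times worse wiping out all possible future intelligent life
    is than wiping out humanity alone, under this toy model."""
    loss_humans_only = 1.0 - p_re * multiplier  # re-evolved life recovers some value
    loss_everything = 1.0                       # nothing remains to recover any value
    return loss_everything / loss_humans_only

alice = relative_badness(p_re=0.5, multiplier=1.0)  # 2.0: twice as bad for Alice
bob = relative_badness(p_re=0.5, multiplier=0.1)    # ~1.05: barely worse for Bob
```

On these numbers, Alice sees wiping out all future intelligent life as twice as bad as wiping out humanity alone, while for Bob the difference almost vanishes, which is exactly how the multiplier modulates the weight given to risks that threaten non-human life.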
(To be clear, I don’t think that this undercuts your post, but I think it can influence precisely how important the consideration you raise is.)
That makes a lot of sense. If the probability of intelligent life re-evolving is low, or if the probability of it doing morally valuable things is low, then this reduces the importance of considering the effect on other species.