Glad the “crucial questions for longtermists” series sounds useful! We’re hoping to publish the first post this month.
“While the probability of intelligent life re-evolving may be somewhat soluble and differ between different existential scenarios, the probability of it being morally aligned with humanity is not likely to differ in a soluble way between scenarios.”
This seems a reasonable assumption. And I think it would indeed mean that it’s not worth paying much attention to differences between existential risks in how aligned with humanity any later intelligent life would be. But I was responding to claims from your original post like this:
“It seems that such arguments could cause us to weigh more heavily x-risks that threaten more life on earth than just humans. This could increase how much we care about risks such as global warming and nuclear war compared to biorisk.”
I do think that’s true, but how big that factor is might be reduced by the possibility that a future influenced by (existentially secure) independently evolved intelligent life would be “less valuable” than a future influenced by (existentially secure) humans. For example, if Alice thinks those independently evolved lifeforms would do things 100% as valuable as what humans would do, while Bob thinks they’d do things only 10% as valuable, then Alice and Bob will differ on how much worse it is to wipe out all possible future intelligent life vs “just” wiping out humanity. And in the extreme, someone could even think that such intelligent life would do completely valueless things, or things we would/should actively disvalue.
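To put illustrative numbers on that (these are stand-ins, not figures from either post): let V be the value of an existentially secure human-influenced future, p the probability that intelligent life re-evolves and itself reaches existential security, and r the value of its future relative to ours. Then the loss from wiping out all possible future intelligent life is V, while the loss from wiping out “just” humanity is (1 − pr)·V, so full extinction is worse by a factor of 1/(1 − pr). With p = 0.5, Alice (r = 1) would see full extinction as twice as bad as human-only extinction, whereas Bob (r = 0.1) would see it as only about 1.05 times as bad, and so would put much less extra weight on risks that also threaten non-human life.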
(To be clear, I don’t think that this undercuts your post, but I think it can influence precisely how important the consideration you raise is.)
That makes a lot of sense. If the probability of intelligent life re-evolving is low, or if the probability of it doing morally valuable things is low, then this reduces the importance of considering the effect on other species.