Welcome to the Forum (or to posting)! I found this post interesting, and think it makes important points.
There’s been a bit of prior discussion of these sorts of ideas (though I think the sort of model you propose is interesting and hasn’t been proposed before). For example:
Existential risks are not just about humanity
The expected value of extinction risk reduction is positive (particularly the section “Whether (post-)humans colonizing space is good or bad, space colonization by other agents seems worse”)
How Would Catastrophic Risks Affect Prospects for Compromise? (particularly the section “Might humans be replaced by other species?”)
These comments
I’ve collected these sources because I’m working on a post/series about “crucial questions for longtermists”. One of the questions I have in there is “What’s the counterfactual to a human-influenced future?” I break this down into:
How likely is future evolution of moral agents or patients on Earth, conditional on existential catastrophe? How valuable would that future be?
How likely is it that our observable universe contains extraterrestrial intelligence (ETI)? How valuable would a future influenced by them rather than us be?
I think your post does a good job highlighting the importance of the question “How likely is future evolution of moral agents or patients on Earth, conditional on existential catastrophe?”
But it seemed like you were implicitly assuming that what other moral agents would ultimately do with the future would be, in expectation, just as valuable as what humanity would do. That seems like a big open question to me, and probably depends somewhat on metaethics (e.g., moral realism vs antirealism). From memory, there’s some good discussion of this in “The expected value of extinction risk reduction is positive”. (This is also related to Azure’s comment.)
And I feel like these questions might be best addressed alongside the question about ETI. One reason for that is that discussions of the Fermi Paradox, Drake Equation, and Great Filter (see, e.g., this paper) could perhaps inform our beliefs about the likelihood of both ETI and future evolution of moral agents on Earth.
Hi Michael, thank you very much for your comment.
I was not aware of some of these posts and will definitely look into them, thanks for sharing! I also eagerly await the compilation of crucial questions for longtermists, which sounds very interesting and useful.
I agree that I have not given consideration to what moral views re-evolved life would have; this is indeed a big question. One assumption I may have implicitly used, but not discussed, is that:
“While the probability of intelligent life re-evolving may be somewhat soluble and differ between different existential scenarios, the probability of it being morally aligned with humanity is not likely to differ in a soluble way between scenarios.”
Therefore it should not affect how we compare different x-risks. For example, if we assumed re-evolved life had a 10% chance of being morally aligned with humanity, this would apply in all existential scenarios and so would not affect how we compare them. The question of what being “morally aligned” with humanity means, and whether that is what we want, is also a big question, which I appreciate you raising. I avoided discussing the moral philosophy because I’m uncertain how to approach it, but I agree it is a crucial question.
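As a toy sketch of why a constant alignment probability drops out of the comparison (all the probabilities below are made-up placeholders, not estimates):

```python
# Toy model: expected value of a post-catastrophe future from re-evolved life.
# All numbers are invented purely for illustration.

P_ALIGNED = 0.10  # assumed constant across scenarios, per the quoted assumption

# Hypothetical P(intelligent life re-evolves | catastrophe), per scenario
p_reevolve = {"nuclear war": 0.4, "climate change": 0.5, "biorisk": 0.6}

def expected_future_value(p_re, p_aligned=P_ALIGNED, value_if_aligned=1.0):
    # A constant p_aligned rescales every scenario by the same factor
    return p_re * p_aligned * value_if_aligned

evs = {k: expected_future_value(p) for k, p in p_reevolve.items()}

# The ranking of scenarios is identical to the ranking by p_reevolve alone,
# so the (assumed-constant) alignment probability doesn't affect comparisons.
rank_by_ev = sorted(evs, key=evs.get, reverse=True)
rank_by_p = sorted(p_reevolve, key=p_reevolve.get, reverse=True)
assert rank_by_ev == rank_by_p
```

The point is just that multiplying every scenario by the same factor cannot change their ordering.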
I also completely agree that considerations of ETI could inform our estimates of the probability of future evolution. This is my planned first avenue of research for getting a better grasp of the probabilities involved.
Thanks again for your comment and for the useful links!
Glad the “crucial questions for longtermists” series sounds useful! We should hopefully publish the first post this month.
This seems a reasonable assumption. And I think it would indeed mean that it’s not worth paying much attention to differences between existential risks in how aligned with humanity any later intelligent life would be. But I was responding to claims from your original post like this:
“It seems that such arguments could cause us to weigh more heavily x-risks that threaten more life on earth than just humans. This could increase how much we care about risks such as global warming and nuclear war compared to biorisk.”
I do think that that is true, but I think how big that factor is might be decreased by the possibility that a future influenced by (existentially secure) independently evolved intelligent life would be “less valuable” than a future influenced by (existentially secure) humans. For example, if Alice thinks that those independently evolved lifeforms would do things 100% as valuable as what humans would do, but Bob thinks they’d do things only 10% as valuable, then Alice and Bob will differ on how much worse it is to wipe out all possible future intelligent life vs “just” wiping out humanity. And in the extreme, someone could even think that such intelligent life would do completely valueless things, or things we would/should actively disvalue.
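A minimal numeric sketch of the Alice/Bob comparison (the re-evolution probability and relative values are hypothetical placeholders, not claims):

```python
# How much worse is wiping out all possible future intelligent life than
# "just" wiping out humanity? Normalise a secure human future to value 1.
# p_re: chance intelligent life re-evolves after human extinction (made up)
# r:    value of that re-evolved future relative to a human-influenced one

def value_lost(p_re, r, all_life_destroyed):
    if all_life_destroyed:
        return 1.0            # no chance of any future value remains
    return 1.0 - p_re * r     # re-evolved life may recover some value

p_re = 0.5
# Alice: re-evolved life is 100% as valuable; Bob: only 10% as valuable
alice_gap = value_lost(p_re, 1.0, True) - value_lost(p_re, 1.0, False)  # 0.5
bob_gap = value_lost(p_re, 0.1, True) - value_lost(p_re, 0.1, False)    # 0.05
# Bob's gap is ten times smaller, so for him the distinction between risks
# that do and don't threaten all life on Earth carries much less weight.
```

So how much extra weight to give risks that threaten all life scales with both the re-evolution probability and the relative value one assigns to re-evolved life.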
(To be clear, I don’t think that this undercuts your post, but I think it can influence precisely how important the consideration you raise is.)
That makes a lot of sense. If the probability of intelligent life re-evolving is low, or the probability of it doing morally valuable things is low, then this reduces the importance of considering the effect on other species.