Welcome to the Forum (or to posting)! I found this post interesting, and think it makes important points.
There's been a bit of prior discussion of these sorts of ideas (though I think the sort of model you propose is interesting and hasn't been proposed before). For example:
Existential risks are not just about humanity
The expected value of extinction risk reduction is positive (particularly the section "Whether (post-)humans colonizing space is good or bad, space colonization by other agents seems worse")
How Would Catastrophic Risks Affect Prospects for Compromise? (particularly the section "Might humans be replaced by other species?")
These comments
I've collected these sources because I'm working on a post/series about "crucial questions for longtermists". One of the questions I have in there is "What's the counterfactual to a human-influenced future?" I break this down into:
How likely is future evolution of moral agents or patients on Earth, conditional on existential catastrophe? How valuable would that future be?
How likely is it that our observable universe contains extraterrestrial intelligence (ETI)? How valuable would a future influenced by them rather than us be?
I think your post does a good job highlighting the importance of the question "How likely is future evolution of moral agents or patients on Earth, conditional on existential catastrophe?"
But it seemed like you were implicitly assuming that what other moral agents would ultimately do with the future would be equally valuable in expectation to what humanity would do? This seems a big question to me, and probably depends somewhat on metaethics (e.g., moral realism vs antirealism). From memory, there's some good discussion of this in "The expected value of extinction risk reduction is positive". (This is also related to Azure's comment.)
And I feel like these questions might be best addressed alongside the question about ETI. One reason for that is that discussions of the Fermi Paradox, Drake Equation, and Great Filter (see, e.g., this paper) could perhaps inform our beliefs about the likelihood of both ETI and future evolution of moral agents on Earth.
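As a very rough illustration of how Drake-style reasoning can structure these estimates, here's a minimal sketch; every factor value in it is a placeholder I've made up, not a defensible estimate:

```python
# Classic Drake equation: N = R* x fp x ne x fl x fi x fc x L
# All factor values below are placeholders purely for illustration.

drake_factors = {
    "R_star": 1.5,   # star formation rate in the galaxy (stars/year)
    "f_p":    0.9,   # fraction of stars with planets
    "n_e":    0.5,   # habitable planets per star that has planets
    "f_l":    0.1,   # fraction of habitable planets that develop life
    "f_i":    0.01,  # fraction of those that develop intelligent life
    "f_c":    0.1,   # fraction of those that become detectable
    "L":      1e4,   # years a detectable civilisation persists
}

# Multiply the factors together to get the expected number of
# currently detectable civilisations under these placeholder values.
N = 1.0
for value in drake_factors.values():
    N *= value

print(f"Expected number of detectable civilisations: {N:.3f}")
```

An analogous chain of factors (probability that complex life survives a given catastrophe, that intelligence re-evolves, that the resulting agents are moral agents or patients, etc.) could be written down for re-evolution on Earth, which is part of why I think the two questions are worth tackling together.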
Hi Michael, thank you very much for your comment.
I was not aware of some of these posts and will definitely look into them, thanks for sharing! I also eagerly await a compilation of crucial questions for longtermists, which sounds very interesting and useful.
I agree that I have not given consideration to what moral views re-evolved life would have; this is definitely a big question. One assumption I may have implicitly used but not discussed is that:
"While the probability of intelligent life re-evolving may be somewhat soluble and differ between different existential scenarios, the probability of it being morally aligned with humanity is not likely to differ in a soluble way between scenarios."
Therefore it should not affect how we compare different x-risks. For example, if we assumed re-evolved life had a 10% chance of being morally aligned with humanity, this would apply in all existential scenarios and so would not affect how we compare them. I appreciate that the question of what being "morally aligned" with humanity means, and whether that is what we want, is also a big one. I avoided discussing the moral philosophy as I'm uncertain how to consider it, but I agree it is a crucial question.
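To put that assumption in rough numbers, here's a minimal sketch; all the probabilities and values in it are hypothetical placeholders, not estimates from my post:

```python
# Hypothetical placeholder numbers, purely for illustration.
P_ALIGNED = 0.10   # assumed chance re-evolved life is morally aligned with humanity
V_ALIGNED = 1.0    # value of an aligned re-evolved future (human-influenced future = 1.0)

# Hypothetical probabilities that moral agents re-evolve, conditional on each catastrophe.
p_reevolution = {
    "biorisk (threatens humans only)": 0.8,
    "nuclear war": 0.5,
    "global warming": 0.3,
}

# Because P_ALIGNED and V_ALIGNED are the same constants in every scenario,
# they rescale each expected value equally and leave the ranking unchanged.
expected_value = {
    scenario: p * P_ALIGNED * V_ALIGNED
    for scenario, p in p_reevolution.items()
}

for scenario, ev in sorted(expected_value.items(), key=lambda kv: -kv[1]):
    print(f"{scenario}: expected value of the post-catastrophe future = {ev:.3f}")
```

Because the alignment probability multiplies every scenario by the same constant, the comparison between x-risks is driven only by how likely re-evolution is under each one.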
I also completely agree that considerations of ETI could inform how we consider probabilities of future evolution. It is the first avenue of research I plan to pursue to get a better grasp of the probabilities involved.
Thanks again for your comment and for the useful links!
Glad the "crucial questions for longtermists" series sounds useful! We should hopefully publish the first post this month.
"While the probability of intelligent life re-evolving may be somewhat soluble and differ between different existential scenarios, the probability of it being morally aligned with humanity is not likely to differ in a soluble way between scenarios."
This seems a reasonable assumption. And I think it would indeed mean that it's not worth paying much attention to differences between existential risks in how aligned with humanity any later intelligent life would be. But I was responding to claims from your original post like this:
It seems that such arguments could cause us to weigh more heavily x-risks that threaten more life on earth than just humans. This could increase how much we care about risks such as global warming and nuclear war compared to biorisk.
I do think that's true, but how big that factor is might be reduced by the possibility that a future influenced by (existentially secure) independently evolved intelligent life would be "less valuable" than a future influenced by (existentially secure) humans. For example, if Alice thinks that those independently evolving lifeforms would do things 100% as valuable as what humans would do, but Bob thinks they'd do things only 10% as valuable, then Alice and Bob will differ on how much worse it is to wipe out all possible future intelligent life vs "just" wiping out humanity. And in the extreme, someone could even think that intelligent life would do completely valueless things, or things we would/should actively disvalue.
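To put rough numbers on the Alice/Bob comparison, here's a minimal sketch; the re-evolution probability in it is a hypothetical placeholder:

```python
# Hypothetical placeholder number, purely for illustration.
P_REEVOLVE = 0.5   # assumed chance moral agents re-evolve if only humans are wiped out

def relative_badness(value_weight: float, p_reevolve: float = P_REEVOLVE) -> float:
    """How much worse 'wipe out all future intelligent life' is than
    'wipe out just humanity', given the relative value (0 to 1) one assigns
    to a future shaped by independently (re-)evolved intelligent life."""
    value_lost_humans_only = 1.0 - p_reevolve * value_weight  # human-influenced future = 1.0
    value_lost_everything = 1.0
    return value_lost_everything / value_lost_humans_only

print(f"Alice (value weight 1.0): {relative_badness(1.0):.2f}x worse")
print(f"Bob   (value weight 0.1): {relative_badness(0.1):.2f}x worse")
```

With these placeholder numbers, Alice sees wiping out all future intelligent life as about twice as bad as wiping out "just" humanity, whereas Bob sees the two outcomes as nearly equally bad.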
(To be clear, I don't think that this undercuts your post, but I think it can influence precisely how important the consideration you raise is.)
That makes a lot of sense. If the probability of intelligent life re-evolving is low, or if the probability of it doing morally valuable things is low, then this reduces the importance of considering the effect on other species.