Even if you assign only a 20% probability that utilitarianism is contingently human, this is all-else-equal enough to favor a human future, or the future of our endorsed descendants.
This seems much too strong a claim if it’s supposed to be action-relevant to whether we support SETI or prioritize existential risk. There are countless factors that might persuade EAs to support or oppose such programs: a belief in moral convergence should update you somewhat towards support (moral realism isn’t necessary), and a belief in non-convergence should correspondingly update you against; and the proportionate credences are going to matter.
I agree that convergence-in-the-sense-of-all-spacefaring-aliens-will-converge is more relevant here than realism.
“and the proportionate credences are going to matter.” I don’t think they do across realistic ranges of credences, in the same way that I don’t think many actions are decision-guidingly different whether you put the risk of AI doom this century at 20% or 80%. I agree that if you have very high confidence in a specific proposition (say 100:1 or 1000:1 odds, or higher, maybe?), that might be enough to confidently swing you to a particular position.
I don’t have a model on hand though, just fairly simple numerical intuitions.
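For concreteness, here is a toy sketch of the kind of simple numerical intuition I mean; it is not a model from this thread, and every number in it is invented purely for illustration. The point is just that when the value at stake dwarfs the alternatives, the ranking of actions comes out the same at a 20% credence as at an 80% one.

```python
# Toy sketch of the "credences don't matter across realistic ranges" intuition.
# All numbers here are made up purely for illustration.

def expected_value(p_doom, value_if_doom_averted, relative_risk_reduction, baseline_value):
    """EV of an intervention that cuts the doom probability by a relative
    fraction, plus whatever baseline good it does regardless of doom."""
    return p_doom * relative_risk_reduction * value_if_doom_averted + baseline_value

# Two hypothetical interventions: one targets doom risk, one does broad near-term good.
for p in (0.2, 0.8):
    risk_focused = expected_value(p, value_if_doom_averted=1e6,
                                  relative_risk_reduction=1e-3, baseline_value=0.0)
    near_term = expected_value(p, value_if_doom_averted=1e6,
                               relative_risk_reduction=0.0, baseline_value=10.0)
    print(f"p_doom={p}: risk-focused EV={risk_focused:.0f}, near-term EV={near_term:.0f}")

# At p=0.2 the risk-focused option scores 200 vs 10; at p=0.8 it scores 800 vs 10.
# The magnitudes change fourfold, but which action wins does not.
```

Under these made-up stakes, moving from 20% to 80% changes the expected numbers but not the choice; only at much more extreme credences (roughly the 100:1 or 1000:1 range mentioned above) would the ranking plausibly flip.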