I agree that convergence-in-the-sense-of-all-spacefaring-aliens-will-converge is more relevant here than realism.
“and the proportionate credences are going to matter.” I don’t think they do under realistic ranges of credences, in the same way that I don’t think many actions are decision-guidingly different whether you put the risk of AI doom this century at 20% or at 80%. I agree that if you have very high confidence in a specific proposition (odds of 100:1 or 1000:1 or higher, maybe?), that might be enough to confidently swing you to a particular position.
I don’t have a model on hand though, just fairly simple numerical intuitions.
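To make those “simple numerical intuitions” slightly more concrete, here’s a toy expected-value sketch in Python. The payoffs are hypothetical numbers I’ve picked so that the break-even point lands near 100:1; they aren’t drawn from any actual estimate:

```python
# Toy expected-value model. All payoffs are made-up numbers chosen purely
# for illustration; nothing in the comment above pins them down.

def act_is_best(credence: float,
                payoff_if_true: float = 99.0,
                cost_if_false: float = -1.0) -> bool:
    """Act iff the expected value of acting beats doing nothing (EV = 0)."""
    ev_act = credence * payoff_if_true + (1 - credence) * cost_if_false
    return ev_act > 0

# Across the "realistic range" of credences, the recommendation is stable:
for p in (0.2, 0.5, 0.8):
    print(p, act_is_best(p))      # True for all three

# With this payoff asymmetry, the break-even credence is exactly 1/100,
# so only odds of roughly 100:1 against (or longer) flip the decision:
print(0.005, act_is_best(0.005))  # False
```

The flip point obviously depends entirely on the payoff asymmetry you assume; the sketch is just meant to show why a decision can be insensitive across a wide middle band of credences and only move at extreme odds.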