I don’t get this intuition. If you have significant sympathies towards (e.g.) hedonic utilitarianism as part of your moral parliament (which I currently do), you probably should still think that humans and our endorsed successors are more likely to converge to hedonic utilitarianism than arbitrarily evolved aliens are.
(I might be missing something obvious, however.)
It depends why you have those sympathies. If you think they just formed because you find them aesthetically pleasing, then sure. If you think there’s some underlying logic to them (which I do, and I would venture a decent fraction of utilitarians do) then why wouldn’t you expect intelligent aliens to uncover the same logic?
I think I have those sympathies because I’m an evolved being, and that this is a contingent fact of at least a) my being evolved and b) my being socially evolved. I think it’s also possible that there are details very specific to being a primate/human/WEIRD human that are relevant to utilitarianism, though I currently don’t think this is the most likely hypothesis[1].
If you think there’s some underlying logic to them (which I do, and I would venture a decent fraction of utilitarians do) then why wouldn’t you expect intelligent aliens to uncover the same logic?
I think I understand this argument. The claim is that if moral realism is true, and utilitarianism is correct under moral realism, then aliens will independently converge to utilitarianism.
If I understand the argument correctly, it’s the type of argument that makes sense syllogistically, but quickly falls apart probabilistically. Even if you assign only a 20% probability that utilitarianism is contingently human, this is all-else-equal enough to favor a human future, or the future of our endorsed descendants.
Now “all-else-equal” may not be true. But to argue that, you’d probably need to advance the position that aliens are somehow more likely than humans to discover the moral truths of utilitarianism (assuming moral realism is true), or that aliens are at least as likely as humans to contingently favor your preferred branch of consequentialist morality.
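A minimal numerical sketch of that point, using entirely made-up probabilities of my own (nothing here is established in the thread): even if you grant aliens an equal chance of uncovering the underlying logic on the convergence branch, a 20% credence that utilitarianism is contingently human is enough to tilt the comparison toward a human-descended future.

```python
# Illustrative only: every probability below is a made-up placeholder.
p_contingent = 0.20              # credence that utilitarianism is contingently human
p_convergent = 1 - p_contingent  # credence in the "discoverable underlying logic" hypothesis

# Hypothetical conditional chances of a future endorsing (hedonic) utilitarianism:
p_humans_if_convergent = 0.5  # humans/descendants uncover the underlying logic
p_aliens_if_convergent = 0.5  # grant aliens an equal chance on this branch
p_humans_if_contingent = 0.5  # humans may still favor it if it's contingently human
p_aliens_if_contingent = 0.0  # arbitrarily evolved aliens don't, by hypothesis

p_human_future = (p_convergent * p_humans_if_convergent
                  + p_contingent * p_humans_if_contingent)
p_alien_future = (p_convergent * p_aliens_if_convergent
                  + p_contingent * p_aliens_if_contingent)

print(f"P(utilitarian-ish values | human-descended future) = {p_human_future:.2f}")  # 0.50
print(f"P(utilitarian-ish values | alien future)           = {p_alien_future:.2f}")  # 0.40
```

The gap closes only if aliens are sufficiently better than humans on the convergence branch, which is exactly the position the previous paragraph says you’d need to argue for.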
[1] E.g. I’d think it’s more likely than not that sufficiently smart rats or elephants would identify with something akin to utilitarianism. Obviously not something I could have any significant confidence in.
Even if you assign only a 20% probability that utilitarianism is contingently human, this is all-else-equal enough to favor a human future, or the future of our endorsed descendants.
This seems much too strong a claim if it’s supposed to be action-relevantly significant to whether we support SETI or decide whether to focus on existential risk. There are countless factors that might persuade EAs to support or oppose such programs—a belief in moral convergence should update you somewhat towards support (moral realism isn’t necessary); a belief in nonconvergence would therefore do the opposite—and the proportionate credences are going to matter.
I agree that convergence-in-the-sense-of-all-spacefaring-aliens-will-converge is more relevant here than realism.
“and the proportionate credences are going to matter.” I don’t think they do under realistic ranges of credences, in the same way that I don’t think many actions are decision-guidingly different at a 20% vs. an 80% risk of AI doom this century. I agree that if you have very high confidence in a specific proposition (odds of say 100:1 or 1000:1 or higher, maybe?) this might be enough to confidently swing you to a particular position.
I don’t have a model on hand though, just fairly simple numerical intuitions.
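A toy version of that kind of numerical intuition, with entirely made-up payoffs (a sketch, not a model anyone in the thread endorsed): when the value of being right about a risk is much larger than the cost of acting on it, the recommended action is the same anywhere in a broad band of credences, and only flips at extreme odds.

```python
# Toy numbers only: the asymmetry between the stakes and the cost is the point.
VALUE_IF_RISK_AVERTED = 100.0  # hypothetical value of successfully mitigating the risk
COST_OF_PRIORITIZING = 1.0     # hypothetical opportunity cost of focusing on it

def net_value(credence_in_risk: float) -> float:
    """Expected net value of prioritizing mitigation at a given credence."""
    return credence_in_risk * VALUE_IF_RISK_AVERTED - COST_OF_PRIORITIZING

for p in (0.80, 0.20, 0.01, 0.001):
    action = "prioritize" if net_value(p) > 0 else "deprioritize"
    print(f"credence {p:>5}: net value {net_value(p):7.2f} -> {action}")

# 0.8 and 0.2 point to the same action; the recommendation only flips around
# 1-in-100 or 1-in-1000 odds, matching the 100:1 / 1000:1 threshold mentioned above.
```

The exact crossover depends entirely on the assumed payoff ratio, which is why only very lopsided odds would be decision-relevant under this kind of framing.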