As I mentioned earlier, I am uncertain about meta-ethics, so I was trying to craft a sentence that would be true under a number of different meta-ethical theories. I wrote “should” instead of “it is rational to” because under moral realism that “should” could be interpreted as a “moral should” while under anti-realism it could be interpreted as an “epistemic should”. (I also do think there may be something in common between moral and epistemic normativity but that’s not my main motivation.) Your suggestion “Utilitarianism endorses replacing existing humans with these new beings.” would avoid this issue, but the main reason I wrote my original comment was to create a thought experiment where concerns about moral uncertainty and contractarianism clearly do not apply, and “Utilitarianism endorses replacing existing humans with these new beings.” doesn’t really convey that since you could say that even in scenarios where moral uncertainty and contractarianism do apply.
Using those two different types of “should” makes your proposed sentence (“It seems that (at least) the humans who are utilitarians should commit mass suicide in order to bring the new beings into existence, because that’s what utilitarianism implies is the right action in that situation.”) unnecessarily confusing, for a couple of reasons.
1. Most moral anti-realists don’t use “epistemic should” when talking about morality. Instead, I claim, they use my definition of moral should: “X should do Y means that I endorse/prefer some moral theory T and T endorses X doing Y”. (We can test this by asking anti-realists who don’t subscribe to negative utilitarianism whether a negative utilitarian should destroy the universe—I predict they will either say “no” or argue that the question is ambiguous.) And so introducing “epistemic should” makes moral talk more difficult.
2. Moral realists who are utilitarians and use “moral should” would agree with your proposed sentence, and moral anti-realists who aren’t utilitarians and use “epistemic should” would also agree with your sentence, but for two totally different reasons. This makes follow-up discussions much more difficult.
How about “Utilitarianism endorses humans voluntarily replacing themselves with these new beings”? That gets rid of (most of) the contractarianism. I don’t think there’s any clean, elegant phrasing which then rules out the moral uncertainty in a way that’s satisfactory to both realists and anti-realists, unfortunately, because realists and anti-realists disagree on whether preferring/endorsing a theory makes it rational for you to act on that theory. (In other words, I don’t know whether moral realists have terminology which distinguishes between people who act on false theories they currently endorse, versus people who act on false theories they currently don’t endorse.)