Using those two different types of “should” makes your proposed sentence (“It seems that (at least) the humans who are utilitarians should commit mass suicide in order to bring the new beings into existence, because that’s what utilitarianism implies is the right action in that situation.”) unnecessarily confusing, for a couple of reasons.
1. Most moral anti-realists don’t use “epistemic should” when talking about morality. Instead, I claim, they use my definition of moral should: “X should do Y” means that I endorse/prefer some moral theory T and T endorses X doing Y. (We can test this by asking anti-realists who don’t subscribe to negative utilitarianism whether a negative utilitarian should destroy the universe; I predict they will either say “no” or argue that the question is ambiguous.) And so introducing “epistemic should” makes moral talk more difficult.
2. Moral realists who are utilitarians and use “moral should” would agree with your proposed sentence, and moral anti-realists who aren’t utilitarians and use “epistemic should” would also agree with your sentence, but for two totally different reasons. This makes follow-up discussions much more difficult.
How about “Utilitarianism endorses humans voluntarily replacing themselves with these new beings”? That gets rid of (most of) the contractarianism. I don’t think there’s any clean, elegant phrasing which then rules out the moral uncertainty in a way that’s satisfactory to both realists and anti-realists, unfortunately, because realists and anti-realists disagree on whether, if you prefer/endorse a theory, that makes it rational for you to act on that theory. (In other words, I don’t know whether moral realists have terminology which distinguishes between people who act on false theories they currently endorse versus people who act on false theories they currently don’t endorse.)