You seem to be assuming moral realism, so that “Utilitarianism seems to imply …” gets interpreted as “If utilitarianism is the true moral theory for everyone, then everyone should …”, whereas I’m uncertain about that. In particular, if moral relativism, subjectivism, or anti-realism is true, then “Utilitarianism seems to imply …” has to be interpreted as “If utilitarianism is the right moral theory for someone or represents their moral preferences, then that person should …” So I think that, given my meta-ethical uncertainty, the way I phrased my statement actually does make sense. (Maybe it’s skewed a bit towards anti-realism by implicature, but it is at least correct in a literal sense even if realism is true.)
I think that “utilitarianism seems to imply that humans who are utilitarians should...” is a type error regardless of whether you’re a realist or an anti-realist, in the same way that “the ZFC axioms imply that humans who accept those axioms should believe 1+1=2” is. That’s not what the ZFC axioms imply; actually, they just imply that 1+1=2, and it’s our meta-theory of mathematics which determines how we respond to this fact. Similarly, utilitarianism is a theory which, given some actions (or maybe states of the world, or maybe policies), returns a metric for how “right” or “good” they are. And then how we relate to that theory depends on our meta-ethics.
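To make the separation concrete, here is a minimal sketch (the names `Action`, `wellbeing_delta`, and `utilitarian_rightness` are my own illustrative choices, not anyone’s canonical formalisation): the object-level theory is just a function that classifies actions, and everything about what an agent ought to do with those classifications lives outside that function.

```python
# A minimal illustrative sketch, not a canonical formalisation of utilitarianism.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    wellbeing_delta: float  # net change in total well-being if this action is taken

def utilitarian_rightness(action: Action, available: list[Action]) -> bool:
    """Object-level theory: an action is 'right' iff no available action
    yields more total well-being. This function only classifies actions;
    it says nothing about what anyone should do with that classification."""
    return all(action.wellbeing_delta >= other.wellbeing_delta
               for other in available)

# The meta-ethical questions -- what it means for an agent to 'accept' this
# theory, and whether accepting it rationally commits them to acting on it --
# sit outside the function above, just as one's meta-theory of mathematics
# sits outside the ZFC axioms.
```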
Given how confusing talking about morality is, I think it’s important to be able to separate the object-level moral theories from meta-ethical theories in this way. (For more along these lines, see my post here).
I’d have to think more about this, but my initial reaction is that this makes sense. How would you suggest changing my sentence in light of this? The following is the best I can come up with. Do you see anything still wrong with it, or any further improvements that can be made?
“It seems that (at least) the humans who are utilitarians should commit mass suicide in order to bring the new beings into existence, because that’s what utilitarianism implies is the right action in that situation.”
My first objection is that you’re using a different form of “should” than what is standard. My preferred interpretation of “X should do Y” is that it’s equivalent to “I endorse some moral theory T and T endorses X doing Y”. (Or “according to utilitarianism, X should do Y” is more simply equivalent to “utilitarianism endorses X doing Y”). In this case, “should” feels like it’s saying something morally normative.
Whereas you seem to be using “should” as in “a person who has a preference X should act on X”. In this case, “should” feels like it’s saying something epistemically normative. You may think these are the same thing, but I don’t, and either way it’s confusing to build that assumption into our language. I’d prefer to replace this latter meaning of “should” with “it is rational to”. So then we get:
“It is rational for humans who are utilitarians to commit mass suicide in order to bring the new beings into existence, because that’s what utilitarianism implies is the right action.”
My second objection is that this is only the case if “being a utilitarian” is equivalent to “having only one preference, which is to follow utilitarianism”. In practice people have both moral preferences and personal preferences. I’d still count someone as a utilitarian if they follow their personal preferences instead of their moral preferences some (or even most) of the time. So then it’s not clear whether it’s rational for a human who is a utilitarian to commit suicide in this case; it depends on the contents of their personal preferences.
I think we avoid all of this mess just by saying “Utilitarianism endorses replacing existing humans with these new beings.” This is, as I mentioned earlier, a claim similar to “ZFC implies that 1+1=2”, and it allows people to have fruitful discussions without agreeing on whether they should endorse utilitarianism. I’d also be happy with Simon’s version above, “Utilitarianism seems to imply that humans should...”, although I think it’s slightly less precise than mine, because it introduces an unnecessary “should” that some people might take to be a meta-level claim rather than merely a claim about the content of the theory of utilitarianism (this is a minor quibble, though; the analogous claim would be “ZFC implies that 1+1=2 is true”).
Anyway, we have pretty different meta-ethical views, and I’m not sure how much we’re going to converge, but I will say that from my perspective, your conflation of epistemic and moral normativity (as I described earlier) is a key component of why your position seems confusing to me.
I originally wrote a different response to Wei’s comment, but it wasn’t direct enough. I’m copying the first part here since it may be helpful in explaining what I mean by “moral preferences” vs “personal preferences”:
Each person has a range of preferences, which it’s often convenient to break down into “moral preferences” and “personal preferences”. This isn’t always a clear distinction, but here are the main differences:
1. Moral preferences are much more universalisable and less person-specific (e.g. “I prefer that people aren’t killed” vs “I prefer that I’m not killed”).
2. Moral preferences are associated with a meta-preference that everyone has the same moral preferences. This is why we feel so strongly that we need to find a shared moral “truth”. Fortunately, most people in our societies agree on the most basic moral questions.
3. Moral preferences are associated with a meta-preference that they are consistent, simple, and actionable. This is why we feel so strongly that we need to find coherent moral theories rather than just following our intuitions.
4. Moral preferences are usually phrased as “X is right/wrong” and “people should do right and not do wrong” rather than “I prefer X”. This often misleads people into thinking that their moral preferences are just pointers to some aspect of reality, the “objective moral truth”, which is what people “objectively should do”.
When we reflect on our moral preferences and try to make them more consistent and actionable, we often end up condensing our initial moral preferences (aka moral intuitions) into moral theories like utilitarianism. Note that we could do this for other preferences as well (e.g. “my theory of food is that I prefer things which have more salt than sugar”) but because I don’t have strong meta-preferences about my food preferences, I don’t bother doing so.
The relationship between moral preferences and personal preferences can be quite complicated. People act on both, but often have a meta-preference to pay more attention to their moral preferences than they currently do. I’d count someone as a utilitarian if they have moral preferences that favour utilitarianism, and these are a non-negligible component of their overall preferences.
I found this very helpful.
As I mentioned earlier, I am uncertain about meta-ethics, so I was trying to craft a sentence that would be true under a number of different meta-ethical theories. I wrote “should” instead of “it is rational to” because under moral realism that “should” could be interpreted as a “moral should”, while under anti-realism it could be interpreted as an “epistemic should”. (I also do think there may be something in common between moral and epistemic normativity, but that’s not my main motivation.) Your suggestion “Utilitarianism endorses replacing existing humans with these new beings.” would avoid this issue, but the main reason I wrote my original comment was to create a thought experiment where concerns about moral uncertainty and contractarianism clearly do not apply, and your phrasing doesn’t really convey that, since you could say it even in scenarios where moral uncertainty and contractarianism do apply.
Using those two different types of “should” makes your proposed sentence (“It seems that (at least) the humans who are utilitarians should commit mass suicide in order to bring the new beings into existence, because that’s what utilitarianism implies is the right action in that situation.”) unnecessarily confusing, for a couple of reasons.
1. Most moral anti-realists don’t use “epistemic should” when talking about morality. Instead, I claim, they use my definition of moral “should”: “X should do Y” means “I endorse/prefer some moral theory T, and T endorses X doing Y”. (We can test this by asking anti-realists who don’t subscribe to negative utilitarianism whether a negative utilitarian should destroy the universe; I predict they will either say “no” or argue that the question is ambiguous.) And so introducing “epistemic should” makes moral talk more difficult.
2. Moral realists who are utilitarians and use “moral should” would agree with your proposed sentence, and moral anti-realists who aren’t utilitarians and use “epistemic should” would also agree with your sentence, but for two totally different reasons. This makes follow-up discussions much more difficult.
How about “Utilitarianism endorses humans voluntarily replacing themselves with these new beings.”? That gets rid of (most of) the contractarianism. I don’t think there’s any clean, elegant phrasing which then rules out the moral uncertainty in a way that’s satisfactory to both realists and anti-realists, unfortunately, because realists and anti-realists disagree on whether, if you prefer/endorse a theory, that makes it rational for you to act on that theory. (In other words, I don’t know whether moral realists have terminology which distinguishes between people who act on false theories they currently endorse and people who act on false theories they currently don’t endorse.)
Very interesting :) I don’t mean to be assuming moral realism, and I don’t think of myself as a realist. Suppose I am an antirealist and I state some consequentialist criterion of rightness: ‘An act is right if and only if…’. When stating that, I do not mean or claim that it is true in a realist sense. I may be expressing my feelings, I may be encouraging others to act according to the criterion of rightness, or whatever. At least I would not merely be talking about how I prefer to act; I would mean or express roughly ‘everyone, your actions and mine are right if and only if …’. But regardless of whether I would be speaking about myself or everyone, we can still talk about what the criterion of rightness (the theory) implies, in the sense that one can check which actions satisfy the criteria. So we can say: according to the theory formulated as ‘an act is right if and only if…’, this act X would be right (simply because it satisfies the criteria). A simpler example is if we understand the principle ‘lying is wrong’ from an antirealist perspective. Assuming we specify what counts as lying, we can still talk about whether an act is a case of lying, and hence wrong, according to this principle. And then one can discuss whether the theory or principle is appealing, given which acts it classifies as right and wrong. If some repugnant action X is classified as right, or if some obviously admirable act is classified as wrong, we may want to reject the theory/criterion, regardless of realism or antirealism.
Maybe all I’m saying is obvious and compatible with what you are saying.
I think there is at least one plausible meta-ethical position under which, when I say “I think utilitarianism is right”, I just mean something like “I think that after I reach reflective equilibrium my preferences will be well-described by utilitarianism”, and it is not intended to mean that I think utilitarianism is right for anyone else, applies to anyone else, or should apply to anyone else (except insofar as they are sufficiently similar to myself in the relevant ways and are therefore likely to reach a similar reflective equilibrium). Do you agree this is a plausible meta-ethical position? If yes, does my sentence (or the new version that I gave in the parallel thread) make more sense in light of this? In either case, how would you suggest that I rephrase my sentence to make it better?
Sure. I’ll use traditional total act-utilitarianism, defined as follows, as the example here so that it’s clear what we are talking about:
Traditional total act-utilitarianism: An act is right if and only if it results in a sum of well-being (positive well-being minus negative well-being) that is at least as great as that resulting from any other available act.
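In symbols (just a sketch, with notation of my own choosing: $A$ is the set of acts available in the situation and $W(a)$ is the net sum of well-being, positive minus negative, resulting from act $a$):

$$\text{Right}(a) \iff \forall a' \in A:\; W(a) \ge W(a')$$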
I gather the metaethical position you describe is something like one of the following three:
(1) When I say ‘I think utilitarianism is right’ I mean ‘I think that after I reach reflective equilibrium I will think that any act I perform is right if and only if it results in a sum of well-being (positive well-being minus negative well-being) that is at least as great as that resulting from any other available act.’
This (1) was about which of your actions will be right. Alternatively, the metaethical position could be as follows:
(2) When I say ‘I think utilitarianism is right’ I mean ‘I think that after I reach reflective equilibrium I will think that any act anyone performs is right if and only if it results in a sum of well-being (positive well-being minus negative well-being) that is at least as great as that resulting from any other available act.’
Or perhaps formulating it in terms of want or preference instead of rightness, like the following, better describes your metaethical position (using utilitarianism as just an example):
(3) When I say ‘I think utilitarianism is right’ I mean ‘I think that after I reach reflective equilibrium I will want, or have a preference, that everyone act in a way that results in a sum of well-being (positive well-being minus negative well-being) that is at least as great as that resulting from any other available act.’
My impression is that in the academic literature, metaethical theories/positions are usually, always, or almost always formulated as general claims about what, for example, statements such as ‘one ought to be honest’ mean; the metaethical theories/positions do not have the form ‘when I say “one ought to be honest” I mean …’. But, sure, talking, as you do, about what you mean when you say ‘I think utilitarianism is right’ sounds fine.
The new version of your thought experiment, which I gather would go something like the following, sounds fine:
Suppose almost all humans adopt utilitarianism as their moral philosophy and fully colonize the universe, and then someone invents the technology to kill humans and replace them with beings of greater well-being. (Assume it would be optimal, all things considered, to kill and replace humans.) Utilitarianism seems to imply that at least the humans who are utilitarians should commit mass suicide (or accept being killed) in order to bring the new beings into existence, because that’s what utilitarianism implies is the optimal and hence morally right action in that situation.