From a quick read, your view seems to be similar to that of Gilbert Harman. See his paper “Moral Relativism Defended” and his part of the book Moral Relativism and Moral Objectivity.
Comment 2:
My one criticism after reading this concerns the way you choose to answer “Yes” to the question of whether people “have obligations” (which I put in quotes because the phrase can be interpreted in different ways, such that the correct answer could be either yes or no depending on the interpretation):
While I can see how this way of defining what it means to have an obligation can be useful when discussing moral philosophy, and can bring clarity to such discussions, I think it’s worth pointing out that it could be quite confusing when talking with people who aren’t familiar with your specific definition / the specific meaning you use.
For example, if you ask most people, “Am I obligated to not commit murder?” they would say, “Yes, of course.” And if you ask them, “Am I obligated to commit murder?” they would say, “No, of course not.”
You would answer yes to both, saying that you are obligated to not commit murder by (or according to) some moral standards/theories and are obligated to commit murder by some others.
To most people (who are not familiar with how you are using the language), this would appear contradictory (again: to say that you are obligated both to do and not to do X).
My second note is that when laypeople say, “No, I am not obligated to commit murder,” you wouldn’t be inclined to say that they are wrong (because you don’t interpret what they are trying to say so uncharitably), but rather would see that they clearly meant something other than the meaning you explained in the article above.
My interpretation of their statement that they are not obligated to commit murder would be (put one way) that they do not care about any of the moral standards that obligate them to commit murder. Said differently, they are saying that in order to fulfill or achieve their values, people shouldn’t murder others (at least in general), because murdering people would actually be a counter-productive way to bring about what they desire.
I hold the same view as yours described here (assuming of course that I understand you correctly, which I believe I do).
FWIW I would label this view “moral anti-realist” rather than “moral realist,” although of course whether it actually qualifies as “anti-realism” or “realism” depends on what one means by those phrases, as you pointed out.
Here are two revealing statements of yours that would have led me to strongly update my view towards you being a moral anti-realist without having to read your whole article (emphasis added):
(1) “that firm conviction is the “expressive assertivism” we talked about earlier, not a magic force of morality.”
(2) “I disagree that there is One True Moral Standard.” “I disagree that these obligations have some sort of compelling force independent of desire.”
I like this view. I think it agrees with my intuition that morality is just a function of whatever a society has decided it cares about. This can make practical sense in the case of ‘murder’, and perhaps not so much in the case of ‘hands on table’. Of course, one might then wonder whether we ought to care about things making practical sense, which in turn depends on whether that’s a thing we care about, or whatever underlying (meta-)goal applies; turtles all the way down.
I like that this perspective takes some of the mysticism out of morality by explicitly noting the associated goals/desires. Also, generalising obligations to things you should do according to at least one self-consistent moral framework is pretty neat. The latter is mainly playing with definitions, obviously, but it makes sense as far as definitions go (whatever that means).
I wonder if there exists some set of obligations such that caring about/following them maximises CEV (coherent extrapolated volition). Assuming we care about achieving CEV (by definition we might?), this seems like a strong candidate moral framework for everyone to agree on, if such a thing is at all possible.
Possible problems: 1) current volition is not extrapolated volition, so we may not want to care about what we would want to care about, 2) extrapolated volition of different sentients may not converge (then maybe look for the best approximation?).
I’m not that well read in these issues, so please do tell me if I’m clearly missing something/making an obvious mistake.
Regardless of whether or not moral realism is true, I feel like we should act as though it is (and I would argue many Effective Altruists already do to some extent). Consider the doctor who proclaims that they just don’t value people being healthy, and doesn’t see why they should. All the other doctors would rightly call them crazy and ignore them, because the medical system assumes that we value health.

In the same way, the field of ethics came about to (I would argue) try to find the most right thing to do. If an ethicist comes out and says that the most right thing to do is to kill whomever you like without justification (ignoring flow-on effects, of course), we should be able to say they are just crazy. One, because wellbeing is what we have decided to value, and two, because wellbeing is associated with positive brain states, and why value something if it has no link to conscious experience? What would the world be like if we accepted that these people just have different values and ‘who are we to say they are wrong’?
Imagine the Worst Possible World of Sam Harris, full of near-infinite suffering for a near-infinite number of mind states. This is bad, if the word bad means anything at all. If you think this is not bad, then we probably mean different things by ‘bad’. Any step to move away from this is therefore good. There are right and wrong ways to move from the Worst Possible World to the Best Possible World, and to an extent we can determine what these are.
I haven’t fully formed this idea yet, but I’m writing a submission to Essays in Philosophy about this with Robert Farquharson. An older version of our take on this is here: http://www.michaeldello.com/?p=741