Why should we care whether or not moral realism is true?
I plan to address this more in a future post, but the short answer is that for some ways in which moral realism has been defined, it really doesn’t matter (much). But there are some versions of moral realism that would “change the game” for those people who currently reject them. And vice versa: if one currently endorses a view that corresponds to the two versions of “strong moral realism” described in the last section of my post, one’s priorities could change noticeably if one changes one’s mind towards anti-realism.
What do you think are the implications of moral anti-realism for choosing altruistic activities?
It’s hard to summarize this succinctly, because for most of the things that are straightforwardly important under moral realism (such as moral uncertainty, or deferring judgment to future people who are more knowledgeable about morality), you can also make good arguments for them starting from anti-realist premises. Some quick thoughts:
– The main difference is that things become more “messy” with anti-realism.
– I think anti-realists should, all else equal, be more reluctant to engage in “bullet biting,” where you abandon some of your moral intuitions in order to make your moral view “simpler” or “more elegant.” The appeal of simplicity/elegance is this: if you have a view with many parameters that are fine-tuned to your personal intuitions, it seems extremely unlikely that other people would arrive at the same parameters if they merely thought about morality more. Moral realists may think that the correct morality is one that everyone who is knowledgeable enough would endorse, whereas anti-realists may consider this a potentially impossible demand and therefore place more weight on finding something that feels intuitively compelling at the individual level. Having said that, I think there are a number of arguments for why even an anti-realist might want to adopt moral views that are “simple and elegant.” For instance, people may care about doing something meaningful that is “greater than their own petty little intuitions” – I think this is an intuition we can try to cash out somehow even if moral realism turns out to be false (it’s just that it can be cashed out in different ways).
– “Moral uncertainty” works differently under anti-realism, because you have to say what you are uncertain about (it cannot be the one true morality, because anti-realism says there is no such thing). One can instead be uncertain about what one would value after moral reflection under ideal conditions. This kind of “valuing moral reflection” seems like a very useful anti-realist alternative to moral uncertainty. One difference is that “valuing reflection” may be underdefined, so anti-realists have to think about how to distinguish having underdefined values from being uncertain about their values. This part can get tricky.
– There was recently a discussion about “goal drift” on the EA Forum. I think it’s a bigger problem under anti-realism, all else equal (unless one’s anti-realist moral view is egoism-related). But again, there are considerations that point in both directions. :)