hmm, well, this is a good question because it’s something i hadn’t thought out properly.
first a clarification: i don’t necessarily think a moral theory is something that can be true (the way moral realists do). i guess i lean more towards some form of constructivism. but realism is the majority view among philosophers (and i think among utilitarians), so i figured it made sense to write under that assumption.
but to answer your question, i think the main thing i can learn or observe about a moral theory that is relevant in determining how plausible it is is the reasons given in support of it—the evidence it’s based on and the inferences made from that evidence. because evaluating those things is based on norms of reasoning, not on moral norms, it doesn’t seem circular to me. if that makes sense?
Not really. Can you give me an example of a moral theory + evidence / reasons for it, where you can evaluate the evidence / reasons using norms of reasoning and not moral norms?
here are some examples, sticking with utilitarianism:
evidence:
people tend to avoid pain, and when asked say that it’s really bad
people tend to seek out pleasure, and when asked say it’s really great
many other goods can be explained in terms of pain and pleasure
inferences:
pleasure is the fundamental good, and pain is the fundamental bad
we should maximise pleasure and minimise pain
i think the evidence here is observations about human (and animal) behaviour, which can be evaluated according to how well they fit reality. the inferences we can evaluate based on whether we think they really follow from the evidence (using, i dunno, logic and reason and that stuff). i don’t think you need to presuppose a moral philosophy in order to evaluate these items.
If your standard is “explains human (and animal) behavior”, I think you again can’t make moral progress, because you no longer have any reason to deviate from past human behavior. For example, “we should maximize pleasure and minimize pain” seems terrible at explaining observations like slavery, war, torture, etc.
(For more on this point, see this post, particularly the “Mistakes are fundamental” section.)
thanks—i read christiano’s post.
If your standard is “explains human (and animal) behavior”, I think you again can’t make moral progress, because you no longer have any reason to deviate from past human behavior. For example, “we should maximize pleasure and minimize pain” seems terrible at explaining observations like slavery, war, torture, etc.
“humans seek out pleasure and avoid pain” is universal, so it seems like a good reason to say that pleasure and the avoidance of pain have absolute value. “humans seek to enslave, wage war and torture” is not universal, and so does not seem like a good reason to say that these things have absolute value; and even if it is some weak evidence that these things have value to some people, it is dwarfed by the very strong evidence that their consequences have significant negative value, since nearly everyone tries to avoid being enslaved, tortured, etc.
(caveat: i happen to think value is necessarily relational, but that is perhaps getting too sidetracked.)