I don’t think there’s a particularly noteworthy consensus about it being bad for other people to be in pain
Sorry, I should’ve been more clear about what I’m referring to. When you say “People routinely seem to think” and “People sometimes try to argue”, I suspect we’re talking past each other. I am not concerned with such learned behaviors, but rather with our innate, neurologically shared emotional response to seeing someone suffering. If you see someone dismembered, it must be viscerally unpleasant. If, as a toddler, you see someone strike your mother, it must be shocking and will make you cry. (To reiterate, I focus on these innate tendencies because they are what let us establish common reference. Downstream uses of moral and other language are then determined by our shared and personal inductive biases.)
you would be wrong not to give me $10 and would be apt for disapproval if you did not
Exciting, perhaps we’ve gotten to the crux of our disagreement here! How do we learn what cases have “aptness for disapproval”? This is only possible if we share some initial consensus over what aptness for disapproval involves. I suggest that this initial consensus is the abovementioned shared aversion to physical suffering. Of course, when you learn language from your parents they need not and cannot point at your aversions, but you implicitly use these aversions as the best-fitting explanation to generalize your parents’ language. In effect, your task as a toddler is to figure out why your parents sometimes say “that was wrong, don’t do that” instead of “I didn’t like what you did, don’t do that”. I suggest the “that was wrong” cases more often involve a shared reaction on your part—prototypically when your parents are referring to something that caused pain. Compare to a child whose parents’ notion of bad includes burning your fingers, but only on weekends: she will have more difficulty learning their uses of moral language, because this use does not match our genetic/neurological biases.
Another way of seeing why the core cases of agreement (aka the ostensive basis) for moral language are so important is to look at what happens when someone disagrees with this basis: Consider a madman who believes hurting people is good and letting them go about their life is wrong. I suspect that most people believe we cannot meaningfully argue with him. He may utter moral words but always with entirely different meaning (extension). In slogan form, “There’s no arguing with a madman”. Or take another sort of madman: someone who agrees with you that hurting people is usually wrong, but then remorselessly goes berserk when he sees anyone with a nose of a certain shape. He simply has a different inductive bias (mental condition). If you deny the significance of the consensus I described in the first paragraph, how do you distinguish between these two madmen and more sensible cases of moral disagreement?
In a world filled with people whose innate biases varied randomly, and who had arbitrary aversions, one could still meaningfully single out a subset of an individual’s preferences which had a universalisable character—i.e. those preferences which she would prefer everyone to hold. However, people’s universalisable preferences would hold no special significance to others, and would function in conversation just as all other preferences do. In contrast, in our world, many of our universalisable preferences are shared and so it makes sense to remind others of them. The fact that these universalisable preferences are shared makes them “apt for disapproval” across the whole community, and this is why we use moral language.
One can sensibly say “I like/don’t like this pleasant/painful sensation” without thereby saying “It is morally right that you act to promote/alleviate my experience”
Yes, naturally. The reason why the painful sensations matter is that they help us arrive at a shared understanding of the “aptness for disapproval” you describe.
[From DM’s other comment]
Conversely it seems to me that moral discourse is characterised by widespread disagreement i.e. we can sensibly disagree about whether it’s right or wrong to torture
Yes, I agree work has to be done to explain why utilitarianism parallels arithmetic despite apparent differences. I will likely disagree with you in many places, so hopefully I’ll find time to re-read Kripke. I would enjoy talking about it then.
When you say “People routinely seem to think” and “People sometimes try to argue”, I suspect we’re talking past each other. I am not concerned with such learned behaviors, but rather with our innate, neurologically shared emotional response to seeing someone suffering. If you see someone dismembered, it must be viscerally unpleasant. If, as a toddler, you see someone strike your mother, it must be shocking and will make you cry
Thanks for clarifying. This doesn’t change my response though, since I don’t think there’s a particularly notable convergence in emotional reactions to observing others in pain which would serve to make valenced emotional reactions a particularly central part of the meaning of moral terms. For example, it seems to me like children (and adults) often think that seeing others in pain is funny (cf. Punch and Judy shows or lots of other comedy), fun to inflict and often well-deserved. And that’s just among modern WEIRD children, who tend to be more Harm-focused than non-WEIRD people.
Plenty of other things seem equally if not more central to morality (though I am not arguing that these are central, or part of the meaning of moral terms). For example, I think there’s a good case that people (and primates for that matter) have innate moral reactions to (un)fairness: if a child is given some ice cream and is happy, but then their sibling is given slightly more ice cream and is happy, they will react with moral outrage and will often demand either levelling down their sibling (at a cost to their pleasure) or even just directly inflicting suffering on their sibling. Indeed, children and primates (as well as adults) often prefer that no-one get anything than that an unjust allocation be made, which seems to count somewhat against any simple account of pleasant experience. I think innate reactions to do with obedience/disobedience and deference to authority, loyalty/betrayal, honesty/dishonesty etc. are equally central to morality and equally if not more prominent in the cases through which we actually learn morality. So it seems a bunch of other innate reactions may be central to morality and often morally mandate others’ suffering, so it doesn’t seem likely to me that the very meaning of moral terms can be distinctively tied to the goodness/badness of valenced experience. Notably, until very recently in advanced industrial societies anyway, children’s initial training in morality very commonly involved parents or others directly inflicting pain on children when they did something wrong, and often the thing they did wrong seems to have little or nothing to do with valenced experience, nor is it explained in these terms. This seems hard to square with the meaning of moral terms being rooted in the goodness/badness of valenced experience.
Exciting, perhaps we’ve gotten to the crux of our disagreement here! How do we learn what cases have “aptness for disapproval”? This is only possible if we share some initial consensus over what aptness for disapproval involves. I suggest that this initial consensus is the abovementioned shared aversion to physical suffering.
Just to clarify one thing: when I said that “It is morally right that you give me $10” might communicate (among other things) that you are apt for disapproval if you don’t give me $10 (which is not implied by saying “I desire that you give me $10”), I had in mind something like the following: when I say “It is morally right that you give me $10” this communicates inter alia that I will disapprove of you if you don’t give me $10, that I think it’s appropriate for me to so disapprove, that I think others should disapprove of you and I would disapprove of them if they don’t, etc. Maybe it involves a bunch of other attitudes and practical implications as well. That’s in contrast to me just saying “I desire that you give me $10”, which needn’t imply any of the above. That’s what I had in mind by saying that moral terms may communicate that I think you are apt for disapproval if you do something. I’m not sure how you interpreted “apt[ness] for disapproval”, but it sounds from your subsequent comments like you think it means something other than what I mean.
I think the fundamental disagreement here is that I don’t think we need to learn what specific kinds of cases are (considered) morally wrong in order to learn what “morally wrong” means. We could learn, for example, that “That’s wrong!” expresses disapproval without knowing what specific things people disapprove of, and even if literally everyone entirely disagrees about what things are to be disapproved of. I guess I don’t really understand why you think that there needs to be any degree of consensus about these first-order moral issues (or about what makes things morally wrong) in order for people to learn the meaning of moral terms, or to distinguish moral terms from terms merely expressing desires.
In effect, your task as a toddler is to figure out why your parents sometimes say “that was wrong, don’t do that” instead of “I didn’t like what you did, don’t do that”. I suggest the “that was wrong” cases more often involve a shared reaction on your part—prototypically when your parents are referring to something that caused pain
I agree that learning what things my parents think are morally wrong (or what things they think are morally wrong vs which things they merely dislike) requires generalizing from specific things they say are morally wrong to other things. It doesn’t seem to me that learning what it means for them to say that such and such is morally wrong vs what it means for them to say that they dislike something requires that we learn what specific things people (specifically or in general) think morally wrong / dislike.
To approach this from another angle: perhaps the reason why you think that it is essential to learning the meaning of moral terms (vs the meaning of liking/desiring terms) that we learn what concrete things people think are morally wrong and generalise from that, is because you think that we learn the meaning of moral terms primarily from simple ostension, i.e. we learn that “wrong” refers to kicking people, stealing things, not putting our toys away etc. (whereas we learn that “I like this” refers to flowers, candy, television etc.), and we infer what the terms mean primarily just from working out what general category unites the “wrong” things and what unites the “liked” things, and reference to these concrete categories plays a central role in fixing the meaning of the terms.
But I don’t think we need to assume that language learning operates in this way (which sounds reminiscent of the Augustinian picture of language described at the beginning of PI). I think we can learn the meaning of terms by learning their practical role: e.g. that “that’s morally wrong” implies various practical things about disapproval (including that you will be punished if you do a morally bad thing, and that you yourself will be considered morally bad and so face general disapproving attitudes and social censure from others) whereas “I don’t like that” doesn’t carry those implications. I think we find the same thing for various terms, where we find their meaning consists in different practical implications rather than fixed referents or fixed views about what kinds of things warrant their application (hence people can agree about the meaning of the terms but disagree about which cases they should be applied to: which seems particularly common in morality).
Also, I recognise that you might say “I don’t think that the meaning is necessarily set by specific things being agreed to be wrong, but it is set by a specific attitude which people take/reaction which people have, namely a negative attitude towards people experiencing negatively valenced emotions” (or some such). But I don’t think this changes my response, since I don’t think a shared reaction that is specifically about suffering need be involved to set the meaning of moral terms. I think the meaning of moral terms could consist in distinctive practical implications (e.g. you’ll be punished and I would disapprove of others who don’t disapprove of you, although of course I think the meaning of moral terms is more complex than this) which aren’t implied by mere expressions of desire or distaste etc.
Another way of seeing why the core cases of agreement (aka the ostensive basis) for moral language are so important is to look at what happens when someone disagrees with this basis: Consider a madman who believes hurting people is good and letting them go about their life is wrong. I suspect that most people believe we cannot meaningfully argue with him.
I agree that it might seem impossible to have a reasoned moral argument with someone who shares none of our moral presuppositions. But I don’t think this tells us anything about the meaning of moral language. Even if we took for granted that the meaning of “That’s wrong!” was to simply to express disapproval, I think it would still likely be impossible to reason with someone who didn’t share any moral beliefs with us. I think it may simply be impossible in general to conduct reasoned argumentation with someone who we share no agreement about reasons at all.
What seems to matter to me, as a test of the meaning of moral terms, is whether we can understand someone who says “Hurting people is good” as uttering a coherent moral sentence and, as I mentioned before, in this purely linguistic sense I think we can. There’s an important difference between a madman and someone who’s not competent in the use of language.
how do you distinguish between these two madmen and more sensible cases of moral disagreement?
I don’t think there’s any difference, necessarily, between these cases in terms of how they are using moral language. The only difference consists in how many of our moral beliefs we share (or don’t share). The question is whether, when faced with someone who asserts that it’s good for someone to suffer, or that it’s morally irrelevant whether some other person is having valenced experience and that what matters is whether one is acting nobly, we should diagnose these people as misspeaking or as evincing normal moral disagreement. Fwiw I think plenty of people, from early childhood training to advanced philosophy, use moral language in a way which is inconsistent with the analysis that “good”/“bad” centrally refer to valenced experience (in fact, I think the vast majority of people, outside of EAs and utilitarians, don’t use morality in this way).
In a world filled with people whose innate biases varied randomly, and who had arbitrary aversions, one could still meaningfully single out a subset of an individual’s preferences which had a universalisable character—i.e. those preferences which she would prefer everyone to hold. However, people’s universalisable preferences would hold no special significance to others, and would function in conversation just as all other preferences do. In contrast, in our world, many of our universalisable preferences are shared and so it makes sense to remind others of them. The fact that these universalisable preferences are shared makes them “apt for disapproval” across the whole community, and this is why we use moral language.
I actually agree that if no-one shared (and could not be persuaded to share) any moral values then the use of moral language could not function in quite the same way it does in practice and likely would not have arisen in the same way it does now, because a large part of the purpose of moral talk (co-ordinating action) would be vitiated. Still, I think that moral utterances (with their current meaning) would still make perfect sense linguistically, just as moral utterances made in cases of discourse between parties who fundamentally disagree (e.g. people who think we should do what God X says we should do vs people who think we should do what God Y says we should do) still make perfect sense.
Crucially, I don’t think that, absent moral consensus, moral utterances would reduce to “function[ing] in conversation just as all other preferences do.” Saying “I think it is morally required for you to give me $10” would still perform a different function than saying “I prefer that you give me $10”, for the same reasons I outlined above. The moral statement is still communicating things other than just that I have an individual preference (e.g. that I’ll disapprove of you for not doing so, endorse this disapproval, think that others should disapprove etc.). The fact that, in this hypothetical world, no-one shares any consensus about moral views nor could be persuaded to agree on any, and that this would severely undermine the point of expressing moral views, doesn’t imply that the meaning of moral terms depends on reference to the objects of concrete agreement. (Note that it wouldn’t entirely undermine the point of expressing moral views either: it seems like there would still be some practical purpose to communicating that I disapprove and endorse this disapproval vs merely that I have a preference etc.)
I also agree that moral language is often used to persuade people who share some of our moral views, or to persuade people to share our moral views, but I don’t think this requires that the meaning of moral terms depends on or involves consensus about the rightness or wrongness of specific moral things. For moral talk to be capable of serving this practical purpose we just need some degree of people being inclined to respond to the same kinds of things or to be persuaded to share the same attitudes. But this doesn’t require any particularly strong, near-universal consensus, or consensus on a particular single thing being morally good/bad. It also need not require that there are some specific things that people are inclined to agree on; it could, rather, be that people are inclined to defer to the moral views of authorities/their group, and this ensures some degree of consensus regardless. This seems compatible with very, very widespread disagreement in fact: it might be that people are disposed to think that some varying combinations of “fraternity, blood revenge, family pride, filial piety, gavelkind, primogeniture, friendship, patriotism, tribute, diplomacy, common ownership, honour, confession, turn taking, restitution, modesty, mercy, munificence, arbitration, mendicancy, and queuing” (list ripped off from Oliver Scott Curry) are good, and yet disagree with each other to a large extent about which of these are valuable, to what extent, and how they should be applied in particular cases. Moral language could still serve a function as people use it simply to express which of these things they approve or disapprove of and expect others to likewise promote or punish, without there being general consensus about what things are wrong and without the meaning of moral terms definitionally being fixed with reference to people’s concrete (and contested and changing) moral views.
Thanks for the long reply. I feel like our conversation becomes more meaningful as it goes on.
Thanks for clarifying. This doesn’t change my response though since I don’t think there’s a particularly notable convergence in emotional reactions to observing others in pain which would serve to make valenced emotional reactions a particularly central part of the meaning of moral terms. For example, it seems to me like children (and adults) often think that seeing others in pain is funny (c.f. punch and judy shows or lots of other comedy), fun to inflict and often well-deserved
Yes, it’s hard to point to exactly what I’m talking about, and perhaps even somewhat speculative since the modern world doesn’t have too much suffering. Let me highlight cases that could change my mind: Soldiers often have PTSD, and I suspect some of this is due to the horrifying nature of what they see. If soldiers’ PTSD was found to be entirely caused by lost friends and had nothing to do with visual experience, I would reduce my credence on this point. When I watched Land of Hope and Glory I found seeing the suffering of animals disturbing, and this would obviously be worse if the documentary had people suffering in similar conditions to the animals. I am confident that most people have similar reactions, but if they don’t I would change my view of the above. The most relevant childhood experiences are likely those which involve prolonged pain: a skinned knee, a fever, a burn etc. I think what I’m trying to point at could be described as ‘pointless suffering’. Pain in the context of humor, cheap thrills, couch-viewing etc. is not what I’m referring to.
there’s a good case that people (and primates for that matter) have innate moral reactions to (un)fairness
This seems plausible to me, and I don’t claim that pleasure/pain serve as the only ostensive root grounding moral language. Perhaps (un)fairness is even more prominent, but nevertheless I claim that this group of ostensive bases (pain, unfairness, etc.) is necessary to understand some of moral language’s distinctive features cf. my original post:
When confronted with such suffering we react sympathetically, experiencing sadness within ourselves. This sadness may be attributable either to a conscious process of building empathy by imagining the other’s experience, or to an involuntary immediate reaction resulting from our neural wiring.
Perhaps some of these “involuntary immediate reactions” are best described as reactions to unfairness. For brevity, let me refer below to this whole family of ostensive bases as the Shared Moral Base (SMB).
Notably, until very recently in advanced industrial societies anyway, children’s initial training in morality very commonly involved parents or others directly inflicting pain on children when they did something wrong, and often
Let me take this opportunity to emphasize that I agree: The subsequent tendency to disapprove following use of moral language is an important feature of moral language.
that I think others should disapprove of you and I would disapprove of them if they don’t
This is the key point. Why do we express disapproval of others when they don’t disapprove of the person who did the immoral act? I claim it’s because we expect them to share certain common, basic reactions, e.g. to pain, unfairness, etc., and when these basic reactions are not salient enough in their actions and their mind, we express disapproval to remind them of SMB. Here’s a prototypical example: an aunt chastises a mother for failing to stop her husband from striking their child in anger. The aunt does so because she knows the mother cares about her children, and more generally doesn’t want people to be hurt unreasonably. If the mother were one of our madmen from above, then the aunt would find it futile to chastise her. To return to my example of “a world filled with people whose innate biases varied randomly”, in that world we would not find it fruitful to disapprove of others when they didn’t disapprove of the wrongdoer. Do you not agree that disapproval would have less significance in that world?
It doesn’t seem to me that learning what it means for them to say that such and such is morally wrong vs what it means for them to say that they dislike something requires that we learn what specific things people (specifically or in general) think morally wrong / dislike.
True, the learner merely has to learn that they have within themselves some particular disposition towards the morally wrong cases. These dispositions may be various: aversion to pain, aversion to unfairness, guilt, etc. The learner later finds it useful to continue to use moral language, because others outside of her home share these dispositions towards morally wrong cases. To hyperbolize this point: moral language would have a different role if SMB were similar to eye color, i.e. usually shared within the family but diverse outside of it.
What seems to matter to me, as a test of the meaning of moral terms, is whether we can understand someone who says “Hurting people is good” as uttering a coherent moral sentence and, as I mentioned before, in this purely linguistic sense I think we can.
I agree that it would be natural to call “Hurting people is good” a use of moral language on the part of the madman. I only claim that we can have a different, more substantial, kind of disagreement within our community of people who share SMB than we can with the madman. E.g. the kind of disagreement I describe in the family with the aunt above.
I also agree that moral language is often used to persuade people who share some of our moral views or to persuade people to share our moral views, but don’t think this requires that the meaning of the moral terms depends on or involves consensus about the rightness or wrongness of specific moral things. For moral talk to be capable of serving this practical purpose we just need some degree of people being inclined to respond to the same kinds of things or to be persuaded to share the same attitudes. But this doesn’t require any particularly strong, near-universal consensus or consensus on a particular single thing being morally good/bad.
Yes, I agree. However, cases in which our conversations are founded on SMB have a distinctive character which is of great importance. I agree that the view described in my original post likely becomes less relevant when applied to disagreements across moral cultures i.e. between groups with very different SMB. I’m not particularly bothered by this caveat since most discussion of object-level ethics seems to occur within communities of shared SMB e.g. medical ethics, population ethics, etc.
Yes, it’s hard to point to exactly what I’m talking about, and perhaps even somewhat speculative since the modern world doesn’t have too much suffering. Let me highlight cases that could change my mind: Soldiers often have PTSD, and I suspect some of this is due to the horrifying nature of what they see. If soldiers’ PTSD was found to be entirely caused by lost friends and had nothing to do with visual experience, I would reduce my credence on this point.
Let me note that I agree (and think it’s uncontroversial) that people often have extreme emotional reactions (including moral reactions) to seeing things like people blown to bits in front of them. So this doesn’t seem like a crux in our disagreement (I think everyone, whatever their metaethical position, endorses this point).
This seems plausible to me, and I don’t claim that pleasure/pain serve as the only ostensive root grounding moral language. Perhaps (un)fairness is even more prominent, but nevertheless I claim that this group of ostensive bases (pain, unfairness, etc.) is necessary to understand some of moral language’s distinctive features… Perhaps some of these “involuntary immediate reactions” are best described as reactions to unfairness. For brevity, let me refer below to this whole family of ostensive bases as the Shared Moral Base (SMB).
OK, so we also agree that people may have a host of innate emotional reactions to things (including, but not limited to valenced emotions).
This is the key point. Why do we express disapproval of others when they don’t disapprove of the person who did the immoral act? I claim it’s because we expect them to share certain common, basic reactions, e.g. to pain, unfairness, etc., and when these basic reactions are not salient enough in their actions and their mind, we express disapproval to remind them of SMB… To return to my example of “a world filled with people whose innate biases varied randomly”, in that world we would not find it fruitful to disapprove of others when they didn’t disapprove of the wrongdoer. Do you not agree that disapproval would have less significance in that world?
I think I responded to this point directly in the last paragraph of my reply. In brief: if no-one could ever be brought to share any moral views, this would indeed vitiate a large part (though not all) of the function of moral language. But this doesn’t mean “that the meaning of the moral terms depends on or involves consensus about the rightness or wrongness of specific moral things.” All that is required is “some degree of people being inclined to respond to the same kinds of things or to be persuaded to share the same attitudes. But this doesn’t require any particularly strong, near-universal consensus or consensus on a particular single thing being morally good/bad.”
To approach this from another angle: suppose people are somewhat capable of being persuaded to share others views and maybe even, in fact, do tend to share some moral views (which I think is obviously actually true), although they may radically disagree to some extent. Now suppose that the meaning of moral language is just something like what I sketched out above (i.e. I disapprove of people who x, I disapprove of those who don’t disapprove of those who x etc.).* In this scenario it seems completely possible for moral language to function even though the meaning of moral terms themselves is (ex hypothesi) not tied up in any way with agreement that certain specific things are morally good/bad.
*As I argued above, I also think that such a language could easily be learned without consensus on certain things being good or bad.
I agree that it would be natural to call “Hurting people is good” a use of moral language on the part of the madman. I only claim that we can have a different, more substantial, kind of disagreement within our community of people who share SMB than we can with the madman
cases in which our conversations are founded on SMB have a distinctive character which is of great importance.
Hmm, it sounds like maybe you don’t think that the meaning of moral terms is tied to certain specific things being judged morally good/bad at all, in which case there may be little disagreement regarding this thread of the discussion.
I agree that moral disagreement between people who share some moral presuppositions has something of a distinctive character from discourse between people who don’t share any moral presuppositions. In the real world, of course, there are always some shared background presuppositions (broadly speaking) even if these are not always at all salient to disagreement.
That said, I don’t know whether I endorse your view about the role of the Shared Moral Base. As I noted above, I do think that there are a host of moral reactions which are innate (Moral Foundations, if you will). But I don’t think these or applications of these play an ‘ostensive’ role (I think we have innate dispositions to respond in certain ways intuitively, but our actual judgements and moral theories and concepts get formed in a pretty environmentally and socially contingent way, leading to a lot of fuzziness and indeterminacy). And I don’t privilege these intuitive views as particularly foundational in the philosophical sense (despite the name).
This leads us back into the practical conclusions in your OP. Suppose that a moral aversion to impure, disgusting things is innate (and arguably one of the most basic moral dispositions). It still seems possible that people routinely overcome and override this basic disposition and just decide that impurity doesn’t matter morally and disgusting things aren’t morally bad (perhaps especially when, as in modern industrialised countries, impure things typically don’t really pose much of a threat). It doesn’t seem to me like we have any particular reason to privilege these basic intuitive responses as foundational, in cases where they conflict with our more abstruse reasoning.
For moral talk to be capable of serving this practical purpose we just need some degree of people being inclined to respond to the same kinds of things or to be persuaded to share the same attitudes. But this doesn’t require any particularly strong, near-universal consensus or consensus on a particular single thing being morally good/bad. [...] This seems compatible with very, very widespread disagreement in fact: it might be that people are disposed to think that some varying combinations of “fraternity, blood revenge, family pride, filial piety, gavelkind, primogeniture, friendship, patriotism, tribute, diplomacy, common ownership, honour, confession, turn taking, restitution, modesty, mercy, munificence, arbitration, mendicancy, and queuing”
Sorry, I should’ve addressed this directly. The SMB-community picture is somewhat misleading. In reality, you likely have partial overlap in SMB with each friend, and the intersection across your whole community of friends is smaller still (though it does include pain aversion). Moral disagreement attains a particular level of meaningfulness when both speakers share SMB relevant to their topic of debate. I now realize that my use of ‘ostensive’ was mistaken. I meant to say, as perhaps has already become clear, that SMB lends substance to moral disagreement. SMB plays a role in defining moral disagreement, but, as you say, it likely plays a lesser role when it comes to using moral language outside of disagreement.
It doesn’t seem to me like we have any particular reason to privilege these basic intuitive responses as foundational, in cases where they conflict with our more abstruse reasoning.
If we agree that SMB plays a crucial role in lending meaning to moral disagreement, then we can understand the nature of moral disagreement without appeal to any ‘abstruse reasoning’. I argue that what we do when disagreeing is emphasizing various parts of SMB to the other. In this picture of moral language = universalizable preferences + elicit disapproval + SMB subset, where does abstruse reasoning enter the picture? It only enters when a philosopher sees a family resemblance between moral disagreement and other sorts of epistemological disagreement and thus feels the urge to bring in talk of abstruse reasoning. As described in the OP, for non-philosophers abstruse reasoning only matters as mediated by meta-reactions. In effect, reasoning constraints enter the picture as a subset of our universalizable preferences, but as such there’s no basis for them to override our other object-level universalizable preferences. Of course, I use talk of preferences here loosely; I do believe that these preferences have vague intensities which may sometimes be compared. E.g. someone may feel their meta-reactions particularly strongly and so these preferences may carry more weight than other preferences because of this intensity of feeling.
This leads us back into the practical conclusions in your OP. Suppose that a moral aversion to impure, disgusting things is innate (and arguably one of the most basic moral dispositions). It still seems possible that people routinely overcome and override this basic disposition and just decide that impurity doesn’t matter morally and disgusting things aren’t morally bad.
I’m not sure if I know what you’re talking about by ‘impure things’. Sewage perhaps? I’m not sure what it means to have a moral aversion to sewage. Maybe you mean something like the aversion to the untouchable caste? I do not know enough about that to comment.
Independently of the meaning of ‘impure’, let me respond to “people routinely overcome and override this basic disposition”: certainly people’s moral beliefs often come into conflict e.g. trolley problems. I would describe most of these cases as having multiple conflicting universalizable preferences in play. Sometimes one of those preferences is a meta-reaction, e.g. ‘call to universality’, and if the meta-reaction is more salient or intense then perhaps it carries more weight than a ‘basic disposition’. Let me stress again that I do not make a distinction between universalizable preferences which are ‘basic dispositions’ and those which I refer to as meta-reactions. These should be treated on an equal footing.
I’m afraid now the working week has begun again I’m not going to have so much time to continue responding, but thanks for the discussion.
I’m not sure if I know what you’re talking about by ‘impure things’. Sewage perhaps? I’m not sure what it means to have a moral aversion to sewage. Maybe you mean something like the aversion to the untouchable caste? I do not know enough about that to comment.
I’m thinking of the various things which fall under the Purity/Disgust (or Sanctity/Degradation) foundation in Haidt’s Moral Foundations Theory. This includes a lot of things related to not eating or otherwise exposing yourself to things which elicit disgust, as well as a lot of sexual morality. Rereading the law books of the Bible gives a lot of examples. The sheer prevalence of these concerns in ancient morality, especially as opposed to modern concerns like promoting positive feeling, is also quite telling IMO. For more on the distinctive role of disgust in morality see here or here.
Let me stress again that I do not make a distinction between universalizable preferences which are ‘basic dispositions’ and those which I refer to as meta-reactions. These should be treated on an equal footing.
I’m not sure how broadly you’re construing ‘meta-reactions’, i.e. would this include basically any moral view which a person might reach based on the ordinary operation of their intuitions and reason and would all of these be placed on an equal footing? If so then I’m inclined to agree, but then I don’t think this account implies anything much at the practical level (e.g. how we should think about animals, population ethics etc.).
I argue that what we do when disagreeing is emphasizing various parts of SMB to the other.
I may agree with this if, per my previous comment, SMB is construed very broadly i.e. to mean roughly emphasising or making salient shared moral views (of any kind) to each other and persuading people to adopt new moral views. (See Wittgenstein on conversion for discussion of the latter).
If we agree that SMB plays a crucial role in lending meaning to moral disagreement, then we can understand the nature of moral disagreement without appeal to any ‘abstruse reasoning’… In this picture of moral language = universalizable preferences + elicit disapproval + SMB subset, where does abstruse reasoning enter the picture? It only enters when a philosopher sees a family resemblance between moral disagreement and other sorts of epistemological disagreement and thus feels the urge to bring in talk of abstruse reasoning.
I think this may be misconstruing my reference to “abstruse reasoning” in the claim that “It doesn’t seem to me like we have any particular reason to privilege these basic intuitive responses as foundational, in cases where they conflict with our more abstruse reasoning.” Note that I don’t say anything about abstruse reasoning being “necessary to understand the nature of moral disagreement.”
I have in mind cases of moral thinking, such as the example I gave where we override disgust responses based on reflecting that they aren’t actually morally valuable (I think this would include cases like population ethics, and judging that whether animals matter depends on whether they have the right kinds of capacities).
It now sounds like you might think that such reflections are on an “equal footing” with judgments that are more immediately related to basic intuitive responses, in which case there may be little or no remaining disagreement. There may be some residual disagreement if you think that such relatively rarefied reflections can’t count as meta-reactions/legitimate moral reasoning, but I don’t think that is the view which you are defending now. My sense is that more or less any moral argument could result from a process of people reflecting on their views and the views of others and seeking consistency, in which case it doesn’t seem to me like any line of moral argument is ruled out or called into question by your metaethical account. That is fine in my view since I think that it’s appropriate that philosophical reflections should ‘leave everything as it is.’
Thanks for the lively discussion! We’ve covered a lot of ground, so I plan to try to condense what was said into a follow-up blog post making similar points as the OP but taking into account all of your clarifications.
I’m not sure how broadly you’re construing ‘meta-reactions’, i.e. would this include basically any moral view which a person might reach based on the ordinary operation of their intuitions and reason and would all of these be placed on an equal footing?
‘Meta-reactions’ are the subset of our universalizable preferences which express preferences over other preferences (and/or their relation). What it means to be ‘placed on equal footing’ is that all of these preferences are comparable. Which of them will take precedence in a certain judgement depends on the relative intensity of feeling for each preference. This stands in contrast to views such as total utilitarianism in which certain preferences are considered irrational and are thus overruled independently of the force with which we feel them.
more or less any moral argument could result from a process of people reflecting on their views and the views of others and seeking consistency
The key point here is ‘seeking consistency’: my view is that the extent to which consistency constraints are morally relevant is contingent on the individual. Any sort of consistency only carries force insofar as it is one of the given individual’s universalizable preferences. In a way, this view does ‘leave everything as it is’ for non-philosophers’ moral debates. I also have no problem with a population ethicist who sees eir task as finding functions which satisfy certain population ethics intuitions. My view only conflicts with population ethics and animal welfare ethics insofar as ey take eir conclusions as a basis for language policing, e.g. when an ethicist claims that eir preferred population axiology has implications for understanding everyday uses of moral language.
I have in mind cases of moral thinking, such as the example I gave where we override disgust responses based on reflecting that they aren’t actually morally valuable.
Within my framework we may override disgust responses by e.g. observing that they are less strong than our other responses, or by observing that—unlike our other responses—they have multiple meta-reactions stacked against them (fairness, ‘call to universality’, etc.) and we feel those meta-reactions more strongly. I do not endorse coming up with a theory about moral value and then overriding our disgust responses because of the theoretical elegance or epistemological appeal of that theory. I’m not sure whether you have in mind the former or the latter case?
Another way of seeing why the core cases of agreement (aka the ostensive basis) for moral language are so important is to look at what happens when someone disagrees with this basis: consider a madman who believes hurting people is good and letting them go about their life is wrong. I suspect that most people believe we cannot meaningfully argue with him. He may utter moral words, but always with an entirely different meaning (extension). In slogan form, “There’s no arguing with a madman”. Or take another sort of madman: someone who agrees with you that hurting people is usually wrong, but then remorselessly goes berserk when he sees anyone with a nose of a certain shape. He simply has a different inductive bias (mental condition). If you deny the significance of the consensus I described in the first paragraph, how do you distinguish between these two madmen and more sensible cases of moral disagreement?
In a world filled with people whose innate biases varied randomly, and who had arbitrary aversions, one could still meaningfully single out a subset of an individual’s preferences which had a universalisable character—i.e. those preferences which she would prefer everyone to hold. However, people’s universalisable preferences would hold no special significance to others, and would function in conversation just as all other preferences do. In contrast, in our world, many of our universalisable preferences are shared and so it makes sense to remind others of them. The fact that these universalisable preferences are shared makes them “apt for disapproval” across the whole community, and this is why we use moral language.
Yes, naturally. The reason why the painful sensations matter is that they help us arrive at a shared understanding of the “aptness for disapproval” you describe.
[From DM’s other comment]
Yes, I agree work has to be done to explain why utilitarianism parallels arithmetic despite apparent differences. I will likely disagree with you in many places, so hopefully I’ll find time to re-read Kripke. I would enjoy talking about it then.
Apologies in advance for the long reply.
Thanks for clarifying. This doesn’t change my response though, since I don’t think there’s a particularly notable convergence in emotional reactions to observing others in pain which would serve to make valenced emotional reactions a particularly central part of the meaning of moral terms. For example, it seems to me like children (and adults) often think that seeing others in pain is funny (cf. Punch and Judy shows or lots of other comedy), fun to inflict, and often well-deserved. And that’s just among modern WEIRD children, who tend to be more Harm-focused than non-WEIRD people.
Plenty of other things seem equally if not more central to morality (though I am not arguing that these are central, or part of the meaning of moral terms). For example, I think there’s a good case that people (and primates for that matter) have innate moral reactions to (un)fairness: if a child is given some ice cream and is happy, but then their sibling is given slightly more ice cream and is happy, they will react with moral outrage and will often demand either levelling down their sibling (at a cost to their pleasure) or even directly inflicting suffering on their sibling. Indeed, children and primates (as well as adults) often prefer that no-one get anything than that an unjust allocation be made, which seems to count somewhat against any simple account of pleasant experience.

I think innate reactions to do with obedience/disobedience and deference to authority, loyalty/betrayal, honesty/dishonesty etc. are equally central to morality, and equally if not more prominent in the cases through which we actually learn morality. So a bunch of other innate reactions may be central to morality and often morally mandate others’ suffering, and it doesn’t seem likely to me that the very meaning of moral terms can be distinctively tied to the goodness/badness of valenced experience. Notably, a very common feature of children’s initial training in morality (until very recently in advanced industrial societies, anyway) involved parents or others directly inflicting pain on children when they did something wrong, and often the thing they did wrong seems to have little or nothing to do with valenced experience, nor is it explained in these terms. This seems hard to square with the meaning of moral terms being rooted in the goodness/badness of valenced experience.
Just to clarify one thing: when I said that “It is morally right that you give me $10” might communicate (among other things) that you are apt for disapproval if you don’t give me $10 (which is not implied by saying “I desire that you give me $10”), I had in mind something like the following: when I say “It is morally right that you give me $10” this communicates inter alia that I will disapprove of you if you don’t give me $10, that I think it’s appropriate for me to so disapprove, that I think others should disapprove of you and I would disapprove of them if they don’t, etc. Maybe it involves a bunch of other attitudes and practical implications as well. That’s in contrast to me just saying “I desire that you give me $10”, which needn’t imply any of the above. That’s what I had in mind by saying that moral terms may communicate that I think you are apt for disapproval if you do something. I’m not sure how you interpreted “apt[ness] for disapproval”, but it sounds from your subsequent comments like you think it means something other than what I mean.
I think the fundamental disagreement here is that I don’t think we need to learn what specific kinds of cases are (considered) morally wrong in order to learn what “morally wrong” means. We could learn, for example, that “That’s wrong!” expresses disapproval without knowing what specific things people disapprove of, and even if literally everyone entirely disagrees about what things are to be disapproved of. I guess I don’t really understand why you think that there needs to be any degree of consensus about these first-order moral issues (or about what makes things morally wrong) in order for people to learn the meaning of moral terms, or to distinguish moral terms from terms merely expressing desires.
I agree that learning what things my parents think are morally wrong (or what things they think are morally wrong vs which things they merely dislike) requires generalizing from specific things they say are morally wrong to other things. It doesn’t seem to me that learning what it means for them to say that such and such is morally wrong vs what it means for them to say that they dislike something requires that we learn what specific things people (specifically or in general) think morally wrong / dislike.
To approach this from another angle: perhaps the reason why you think that it is essential to learning the meaning of moral terms (vs the meaning of liking/desiring terms) that we learn what concrete things people think are morally wrong and generalise from that, is because you think that we learn the meaning of moral terms primarily from simple ostension. i.e. we learn that “wrong” refers to kicking people, stealing things, not putting our toys away etc. (whereas we learn that “I like this” refers to flowers, candy, television etc.), and we infer what the terms mean primarily just from working out what general category unites the “wrong” things and what unites the “liked” things and reference to these concrete categories play a central role in fixing the meaning of the terms.
But I don’t think we need to assume that language learning operates in this way (which sounds reminiscent of the Augustinian picture of language described at the beginning of PI). I think we can learn the meaning of terms by learning their practical role: e.g. that “that’s morally wrong” implies various practical things about disapproval (including that you will be punished if you do a morally bad thing, and that you yourself will be considered morally bad and so face general disapproving attitudes and social censure from others) whereas “I don’t like that” doesn’t carry those implications. I think we find the same thing for various terms, where their meaning consists in different practical implications rather than fixed referents or fixed views about what kinds of things warrant their application (hence people can agree about the meaning of the terms but disagree about which cases they should be applied to, which seems particularly common in morality).
Also, I recognise that you might say “I don’t think that the meaning is necessarily set by specific things being agreed to be wrong- but it is set by a specific attitude which people take/reaction which people have, namely a negative attitude towards people experiencing negatively valenced emotions” (or some such). But I don’t think this changes my response, since I don’t think a shared reaction that is specifically about suffering need be involved to set the meaning of moral terms. I think the meaning of moral terms could consist in distinctive practical implications (e.g. you’ll be punished and I would disapprove of others who don’t disapprove of you- although of course I think the meaning of moral terms is more complex than this) which aren’t implied by mere expressions of desire or distaste etc.
I agree that it might seem impossible to have a reasoned moral argument with someone who shares none of our moral presuppositions. But I don’t think this tells us anything about the meaning of moral language. Even if we took for granted that the meaning of “That’s wrong!” was to simply to express disapproval, I think it would still likely be impossible to reason with someone who didn’t share any moral beliefs with us. I think it may simply be impossible in general to conduct reasoned argumentation with someone who we share no agreement about reasons at all.
What seems to matter to me, as a test of the meaning of moral terms, is whether we can understand someone who says “Hurting people is good” as uttering a coherent moral sentence and, as I mentioned before, in this purely linguistic sense I think we can. There’s an important difference between a madman and someone who’s not competent in the use of language.
I don’t think there’s any difference, necessarily, between these cases in terms of how they are using moral language. The only difference consists in how many of our moral beliefs we share (or don’t share). The question is whether, when we are faced with someone who asserts that it’s good for someone to suffer, or that it is morally irrelevant whether some other person is having valenced experience and that what matters is whether one is acting nobly, we should diagnose these people as misspeaking or as evincing normal moral disagreement. Fwiw I think plenty of people, from early childhood training to advanced philosophy, use moral language in a way which is inconsistent with the analysis that “good”/“bad” centrally refer to valenced experience (in fact, I think the vast majority of people, outside of EAs and utilitarians, don’t use morality in this way).
I actually agree that if no-one shared (and could not be persuaded to share) any moral values then the use of moral language could not function in quite the same way it does in practice and likely would not have arisen in the same way it does now, because a large part of the purpose of moral talk (co-ordinating action) would be vitiated. Still, I think that moral utterances (with their current meaning) would still make perfect sense linguistically, just as moral utterances made in cases of discourse between parties who fundamentally disagree (e.g. people who think we should do what God X says we should do vs people who think we should do what God Y says we should do) still make perfect sense.
Crucially, I don’t think that, absent moral consensus, moral utterances would reduce to “function[ing] in conversation just as all other preferences do.” Saying “I think it is morally required for you to give me $10” would still perform a different function than saying “I prefer that you give me $10”, for the same reasons I outlined above. The moral statement is still communicating things other than just that I have an individual preference (e.g. that I’ll disapprove of you for not doing so, endorse this disapproval, think that others should disapprove etc.). The fact that, in this hypothetical world, no-one shares any consensus about moral views, nor could be persuaded to agree on any, and that this would severely undermine the point of expressing moral views, doesn’t imply that the meaning of moral terms depends on reference to the objects of concrete agreement. (Note that it wouldn’t entirely undermine the point of expressing moral views either: it seems like there would still be some practical purpose to communicating that I disapprove and endorse this disapproval vs merely that I have a preference etc.)
I also agree that moral language is often used to persuade people who share some of our moral views, or to persuade people to share our moral views, but I don’t think this requires that the meaning of moral terms depends on or involves consensus about the rightness or wrongness of specific moral things. For moral talk to be capable of serving this practical purpose we just need some degree of people being inclined to respond to the same kinds of things or to be persuaded to share the same attitudes. But this doesn’t require any particularly strong, near-universal consensus, or consensus on a particular single thing being morally good/bad. It also need not require that there are some specific things that people are inclined to agree on (it could, rather, be that people are inclined to defer to the moral views of authorities/their group, and this ensures some degree of consensus regardless). This seems compatible with very, very widespread disagreement in fact: it might be that people are disposed to think that some varying combinations of “fraternity, blood revenge, family pride, filial piety, gavelkind, primogeniture, friendship, patriotism, tribute, diplomacy, common ownership, honour, confession, turn taking, restitution, modesty, mercy, munificence, arbitration, mendicancy, and queuing” (list ripped off from Oliver Scott Curry) are good, and yet disagree with each other to a large extent about which of these are valuable, to what extent, and how they should be applied in particular cases. Moral language could still serve a function as people use it simply to express which of these things they approve or disapprove of and expect others to likewise promote or punish, without there being general consensus about what things are wrong, and without the meaning of moral terms definitionally being fixed with reference to people’s concrete (and contested and changing) moral views.
Thanks for the long reply. I feel like our conversation becomes more meaningful as it goes on.
Yes, it’s hard to point to exactly what I’m talking about, and perhaps even somewhat speculative since the modern world doesn’t have too much suffering. Let me highlight cases that could change my mind: soldiers often have PTSD, and I suspect some of this is due to the horrifying nature of what they see. If soldiers’ PTSD was found to be entirely caused by lost friends and had nothing to do with visual experience, I would reduce my credence on this point. When I watched Land of Hope and Glory I found seeing the suffering of animals disturbing, and this would obviously be worse if the documentary had shown people suffering in similar conditions to the animals. I am confident that most people have similar reactions, but if they don’t I would change my view of the above. The most relevant childhood experiences are likely those which involve prolonged pain: a skinned knee, a fever, a burn etc. I think what I’m trying to point at could be described as ‘pointless suffering’. Pain in the context of humor, cheap thrills, couch-viewing etc. is not what I’m referring to.
This seems plausible to me, and I don’t claim that pleasure/pain serve as the only ostensive root grounding moral language. Perhaps (un)fairness is even more prominent, but nevertheless I claim that this group of ostensive bases (pain, unfairness, etc.) is necessary to understand some of moral language’s distinctive features cf. my original post:
Perhaps some of these ‘involuntary immediate reactions’ are best described as reactions to unfairness. For brevity, let me refer below to this whole family of ostensive bases as the Shared Moral Base (SMB).
Let me take this opportunity to emphasize that I agree: The subsequent tendency to disapprove following use of moral language is an important feature of moral language.
This is the key point. Why do we express disapproval of others when they don’t disapprove of the person who did the immoral act? I claim it’s because we expect them to share certain common, basic reactions, e.g. to pain, unfairness, etc., and when these basic reactions are not salient enough in their actions and their mind, we express disapproval to remind them of SMB. Here’s a prototypical example: an aunt chastises a mother for failing to stop her husband from striking their child in anger. The aunt does so because she knows the mother cares about her children, and more generally doesn’t want people to be hurt unreasonably. If the mother were one of our madmen from above, then the aunt would find it futile to chastise her. To return to my example of “a world filled with people whose innate biases varied randomly”, in that world we would not find it fruitful to disapprove of others when they failed to disapprove of a wrongdoer. Do you not agree that disapproval would have less significance in that world?
True, the learner merely has to learn that they have within themselves some particular disposition towards the morally wrong cases. These dispositions may be various: aversion to pain, aversion to unfairness, guilt, etc. The learner later finds it useful to continue using moral language, because others outside of her home share these dispositions towards morally wrong cases. To hyperbolize this point: moral language would have a different role if SMB were similar to eye color, i.e. usually shared within the family, but diverse outside of it.
I agree that it would be natural to call “Hurting people is good” a use of moral language on the part of the madman. I only claim that we can have a different, more substantial, kind of disagreement within our community of people who share SMB than we can with the madman. E.g. the kind of disagreement I describe in the family with the aunt above.
Yes, I agree. However, cases in which our conversations are founded on SMB have a distinctive character which is of great importance. I agree that the view described in my original post likely becomes less relevant when applied to disagreements across moral cultures i.e. between groups with very different SMB. I’m not particularly bothered by this caveat since most discussion of object-level ethics seems to occur within communities of shared SMB e.g. medical ethics, population ethics, etc.
Let me note that I agree (and think it’s uncontroversial) that people often have extreme emotional reactions (including moral reactions) to seeing things like people blown to bits in front of them. So this doesn’t seem like a crux in our disagreement (I think everyone, whatever their metaethical position, endorses this point).
OK, so we also agree that people may have a host of innate emotional reactions to things (including, but not limited to valenced emotions).
I think I responded to this point directly in the last paragraph of my reply. In brief: if no-one could ever be brought to share any moral views, this would indeed vitiate a large part (though not all) of the function of moral language. But this doesn’t mean “that the meaning of the moral terms depends on or involves consensus about the rightness or wrongness of specific moral things.” All that is required is “some degree of people being inclined to respond to the same kinds of things or to be persuaded to share the same attitudes. But this doesn’t require any particularly strong, near-universal consensus or consensus on a particular single thing being morally good/bad.”
To approach this from another angle: suppose people are somewhat capable of being persuaded to share others’ views and maybe even, in fact, do tend to share some moral views (which I think is obviously actually true), though they may still radically disagree in some respects. Now suppose that the meaning of moral language is just something like what I sketched out above (i.e. I disapprove of people who x, I disapprove of those who don’t disapprove of those who x, etc.).* In this scenario it seems completely possible for moral language to function even though the meaning of moral terms themselves is (ex hypothesi) not tied up in any way with agreement that certain specific things are morally good/bad.
*As I argued above, I also think that such a language could easily be learned without consensus on certain things being good or bad.
Hmm, it sounds like maybe you don’t think that the meaning of moral terms is tied to certain specific things being judged morally good/bad at all, in which case there may be little disagreement regarding this thread of the discussion.
I agree that moral disagreement between people who share some moral presuppositions has something of a distinctive character from discourse between people who don’t share any moral presuppositions. In the real world, of course, there are always some shared background presuppositions (broadly speaking) even if these are not always at all salient to disagreement.
That said, I don’t know whether I endorse your view about the role of the Shared Moral Base. As I noted above, I do think that there are a host of moral reactions which are innate (Moral Foundations, if you will). But I don’t think these or applications of these play an ‘ostensive’ role (I think we have innate dispositions to respond in certain ways intuitively, but our actual judgements and moral theories and concepts get formed in a pretty environmentally and socially contingent way, leading to a lot of fuzziness and indeterminacy). And I don’t privilege these intuitive views as particularly foundational in the philosophical sense (despite the name).
This leads us back into the practical conclusions in your OP. Suppose that a moral aversion to impure, disgusting things is innate (and arguably one of the most basic moral dispositions). It still seems possible that people routinely overcome and override this basic disposition and just decide that impurity doesn’t matter morally and disgusting things aren’t morally bad (perhaps especially when, as in modern industrialised countries, impure things typically don’t really pose much of a threat). It doesn’t seem to me like we have any particular reason to privilege these basic intuitive responses as foundational, in cases where they conflict with our more abstruse reasoning.
[From a previous DM comment]
Sorry, I should’ve addressed this directly. The SMB-community picture is somewhat misleading. In reality, you likely have only partial overlap in SMB with any given person, and the intersection across your whole community of friends is smaller still (though it does include pain aversion). Moral disagreement attains a particular level of meaningfulness when both speakers share the SMB relevant to their topic of debate. I now realize that my use of ‘ostensive’ was mistaken. I meant to say, as perhaps has already become clear, that SMB lends substance to moral disagreement. SMB plays a role in defining moral disagreement, but, as you say, SMB likely plays a lesser role when it comes to using moral language outside of disagreement.
If we agree that SMB plays a crucial role in lending meaning to moral disagreement, then we can understand the nature of moral disagreement without appeal to any ‘abstruse reasoning’. I argue that what we do when disagreeing is emphasize various parts of SMB to the other. In this picture, where moral language = universalizable preferences + eliciting disapproval + a shared SMB subset, where does abstruse reasoning enter? It only enters when a philosopher sees a family resemblance between moral disagreement and other sorts of epistemological disagreement and thus feels the urge to bring in talk of abstruse reasoning. As described in the OP, for non-philosophers abstruse reasoning only matters as mediated by meta-reactions. In effect, reasoning constraints enter the picture as a subset of our universalizable preferences, but as such there’s no basis for them to override our other object-level universalizable preferences. Of course, I use talk of preferences here loosely; I do believe that these preferences have vague intensities which may sometimes be compared. E.g. someone may feel their meta-reactions particularly strongly, and so these preferences may carry more weight than other preferences because of this intensity of feeling.
I’m not sure if I know what you’re talking about by ‘impure things’. Sewage perhaps? I’m not sure what it means to have a moral aversion to sewage. Maybe you mean something like the aversion to the untouchable caste? I do not know enough about that to comment.
Independently of the meaning of ‘impure’, let me respond to “people routinely overcome and override this basic disposition”: certainly people’s moral beliefs often come into conflict, e.g. in trolley problems. I would describe most of these cases as having multiple conflicting universalizable preferences in play. Sometimes one of those preferences is a meta-reaction, e.g. the ‘call to universality’, and if the meta-reaction is more salient or intense then perhaps it carries more weight than a ‘basic disposition’. Let me stress again that I do not make a distinction between universalizable preferences which are ‘basic dispositions’ and those which I refer to as meta-reactions. These should be treated on an equal footing.
I’m afraid now the working week has begun again I’m not going to have so much time to continue responding, but thanks for the discussion.
I’m thinking of the various things which fall under the Purity/Disgust (or Sanctity/Degradation) foundation in Haidt’s Moral Foundations Theory. This includes a lot of things related to not eating or otherwise exposing yourself to things which elicit disgust, as well as a lot of sexual morality. Rereading the law books of the Bible gives a lot of examples. The sheer prevalence of these concerns in ancient morality, especially as opposed to modern concerns like promoting positive feeling, is also quite telling IMO. For more on the distinctive role of disgust in morality see here or here.
I’m not sure how broadly you’re construing ‘meta-reactions’, i.e. would this include basically any moral view which a person might reach based on the ordinary operation of their intuitions and reason and would all of these be placed on an equal footing? If so then I’m inclined to agree, but then I don’t think this account implies anything much at the practical level (e.g. how we should think about animals, population ethics etc.).
I may agree with this if, per my previous comment, SMB is construed very broadly i.e. to mean roughly emphasising or making salient shared moral views (of any kind) to each other and persuading people to adopt new moral views. (See Wittgenstein on conversion for discussion of the latter).
I think this may be misconstruing my reference to “abstruse reasoning” in the claim that “It doesn’t seem to me like we have any particular reason to privilege these basic intuitive responses as foundational, in cases where they conflict with our more abstruse reasoning.” Note that I don’t say anything about abstruse reasoning being “necessary to understand the nature of moral disagreement.”
I have in mind cases of moral thinking, such as the example I gave where we override disgust responses based on reflecting that they aren’t actually morally valuable, (I think this would include cases like population ethics and judging that whether animals matter depends on whether they have the right kinds of capacities).
It now sounds like you might think that such reflections are on an “equal footing” with judgments that are more immediately related to basic intuitive responses, in which case there may be little or no remaining disagreement. There may be some residual disagreement if you think that such relatively rarefied reflections can’t count as meta-reactions/legitimate moral reasoning, but I don’t think that is the view which you are defending now. My sense is that more or less any moral argument could result from a process of people reflecting on their views and the views of others and seeking consistency, in which case it doesn’t seem to me like any line of moral argument is ruled out or called into question by your metaethical account. That is fine in my view, since I think it’s appropriate that philosophical reflections should ‘leave everything as it is.’
Thanks for the lively discussion! We’ve covered a lot of ground, so I plan to try to condense what was said into a follow-up blog post making similar points as the OP but taking into account all of your clarifications.
‘Meta-reactions’ are the subset of our universalizable preferences which express preferences over other preferences (and/or their relation). What it means to be ‘placed on equal footing’ is that all of these preferences are comparable. Which of them will take precedence in a certain judgement depends on the relative intensity of feeling for each preference. This stands in contrast to views such as total utilitarianism in which certain preferences are considered irrational and are thus overruled independently of the force with which we feel them.
The key point here is ‘seeking consistency’: my view is that the extent to which consistency constraints are morally relevant is contingent on the individual. Any sort of consistency only carries force insofar as it is one of the given individual’s universalizable preferences. In a way, this view does ‘leave everything as it is’ for non-philosophers’ moral debates. I also have no problem with a population ethicist who sees eir task as finding functions which satisfy certain population ethics intuitions. My view only conflicts with population ethics and animal welfare ethics insofar as ey take eir conclusions as a basis for language policing, e.g. when an ethicist claims eir preferred population axiology has implications for understanding everyday uses of moral language.
Within my framework we may override disgust responses by, e.g., observing that they are less strong than our other responses, or by observing that—unlike our other responses—they have multiple meta-reactions stacked against them (fairness, the ‘call to universality’, etc.) and we feel those meta-reactions more strongly. I do not endorse coming up with a theory about moral value and then overriding our disgust responses because of the theoretical elegance or epistemological appeal of that theory. I’m not sure whether you have in mind the former or the latter case?