The learned meaning of moral language refers to our recollection/reaction to experiences. These reactions include approval, preferences and beliefs… Preferences enter the picture when we try to extend our use of moral language beyond the simple cases learned as a child. When we try to compare two things that are apparently both bad we might arrive at a preference for one over the other, and in that case the preference precedes the statement of approval/disapproval.
Thanks for the reply. I guess I’m still confused about what specific attitudes you see as involved in moral judgments, whether approval, preferences, beliefs or some more complex combination of these etc. It sounds like you see the genealogy of moral terms as involving a melange of all of these, which seems to leave the door quite open as to what moral terms actually mean.
It does sound though, from your reply, that you do think that moral language exclusively concerns experiences (and our evaluations of experiences). If so, that doesn’t seem right to me. For one, it seems that the vast majority of people (outside of welfarist EA circles) don’t exclusively or even primarily make moral judgements or utterances which are about the goodness or badness of experiences (even indirectly). It also doesn’t seem to me like the kind of simple moral utterances which ex hypothesi train people in the use of moral language at an early age primarily concern experiences and their badness (or preferences for that matter). It seems equally if not more plausible to speculate that such utterances typically involve injunctions (with the threat of punishment and so on).
Thanks for bringing up the X,Y,Z point; I initially had some discussion of this point, but I wasn’t happy with my exposition, so I removed it. Let me try again: In cases when there are multiple moral actors and patients there are two sets of considerations. First, the inside view: how would you react as X and Y? Second, the outside view: how would you react as person W who observes X and Y? It seems to me that we learn moral language as a fuzzy mixture of these two, with the first usually being primary.
Thanks for addressing this. This still isn’t quite clear to me, i.e. what exactly is meant by ‘how would you react as person W who observes X and Y’? What conditions of W observing X and Y are required? For example, does it refer only to how I would react if I were directly observing an act of torture in the room, or does it permit broader ‘observations’, e.g. observing that there is such-and-such level of inequality in the distribution of income in a society? The more restrictive definitions don’t seem adequate to me to capture how we actually use moral language, but the more permissive ones, which are more adequate, don’t seem to suffice to rule out me making judgements about the repugnant conclusion and so on.
>Much as with population ethics, I suspect this endeavor should be seen as… beyond the boundary of where our use of language remains well-defined.
I agree that answers to population ethics aren’t directly entailed by the definition of moral terms. But I’m not sure why we should expect any substantive normative answers to be implied by the meaning of moral language. Moral terms might mean “I endorse x”, but any number of different considerations (including population ethics, facts about neurobiology) might be relevant to whether I endorse x (especially so if you allow that I might have all kinds of meta-reactions about whether my reactions are based on appropriate considerations etc.).
Here’s another way of explaining where I’m coming from. The meaning of our words is set by ostensive definition plus our inductive bias. E.g. when defining red and purple we agree upon some prototypical cases of red and purple by perhaps pointing at red and saying ‘red’. Then upon seeing maroon for the first time, we call it red because our brains process maroon in a similar way to how they process red. (Incidentally, the first part—pointing at red—is also only meaningful because we share inductive biases around pointing and object boundaries.) Of course in some lucky cases, e.g. ‘water’, ‘one’, etc., a scientific or formal definition appears coextensive with the ostensive definition and so is preferred for some purposes.
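If it helps, here is a toy sketch of the ‘ostensive definition plus inductive bias’ picture (purely illustrative: the RGB triples and squared-distance metric are my own stand-ins for whatever similarity structure our brains actually use):

```python
# Toy sketch: ostensive definition as labelled prototypes, inductive bias
# as the similarity metric used to generalize to never-pointed-at cases.
def nearest_label(sample, prototypes):
    """Return the label whose prototype is closest to `sample`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda label: dist(sample, prototypes[label]))

# Ostension: we point at prototypical red and purple (here, crude RGB triples).
prototypes = {"red": (255, 0, 0), "purple": (128, 0, 128)}

# Generalization: maroon was never pointed at, but the shared metric
# (standing in for our inductive bias) classifies it as red.
print(nearest_label((128, 0, 0), prototypes))  # -> red
```

The durian example below is then the failure mode where the prototypes themselves are not shared.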
As another example take durian. Imagine you are trying to explain what the word ‘tasty’ means, and so you feed someone some things that are tasty to you, e.g. candy and durian. Unfortunately people have very different reactions to durian, so it would not be a good idea to use durian to try to define ‘tasty’. In fact, if durian were the only food the human race ate, we could not use the word ‘tasty’ in the same way. In a world with only one food and in which people randomly liked or disliked that food, a word similar to ‘tasty’ would describe people (and their reactions), not the food itself.
Returning to moral language, we almost uniformly agree about the experience of tripping and skinning one’s knee. This lets moral language get off the ground, and puts us in our world as opposed to the ‘durian-only moral world’. There are some examples of phenomena over which we disagree: perhaps inegalitarian processes are one. Imagine a wealthy individual decides to donate her money to the townspeople, but distributes her wealth based on an apparently arbitrary 10 second interview with each townsperson. Perhaps some people react negatively, feeling displeasure and disgust when hearing about this behavior, whereas others see this behavior as just as good as if she had uniformly distributed the wealth. This connects with what I was saying above:
Sometimes there remains disagreement, and I think you could explain this by saying our use of moral language has two levels: the individual and the community. In enough cases to achieve shared reference, the community agrees (because their simulations match up adequately) but in many, perhaps most, cases there is no consensus.
I privilege uses of moral language as applied to experiences and in particular pain/pleasure because these are the central cases over which there is agreement, and from which the other uses of moral language flow. There’s considerable variance in our inductive biases, and so perhaps for some people the most natural way to extend uses of moral language from its ostensive childhood basis includes inegalitarian processes. Nevertheless inegalitarian processes cannot be seen as the basis for moral language. That would be like claiming the experience of eating durian can be used to define ‘tasty’. I do agree that injunctions may perhaps be the first use we learn of ‘bad’, but the use of ‘bad’ as part of moral language necessarily connects with its use in referring to pain and pleasure, otherwise it would be indistinguishable from expressions of desire/threats on the part of the speaker.
>I privilege uses of moral language as applied to experiences and in particular pain/pleasure because these are the central cases over which there is agreement, and from which the other uses of moral language flow… I do agree that injunctions may perhaps be the first use we learn of ‘bad’, but the use of ‘bad’ as part of moral language necessarily connects with its use in referring to pain and pleasure, otherwise it would be indistinguishable from expressions of desire/threats on the part of the speaker.
OK, on a concrete level, I think we clearly just disagree about how central references to pleasure and pain are in moral language or how necessary they are. I don’t think they are particularly central, or even that there is much more consensus about the moral badness of pain/goodness of pleasure than about other issues (e.g. stealing others’ property, lying, loyalty/betrayal).
It also sounds like you think that for us to learn the meaning of moral language there needs to be broad consensus about the goodness/badness of specific things (e.g. pleasure/pain). I don’t think this is so. Take the tastiness example: we don’t need people to agree even slightly about whether chocolate/durian are tasty or yucky to learn the meanings of terms. We can observe that when people say chocolate/durian is tasty they go “mmm”, display characteristic facial expressions, eat more of it and seek to acquire more in the future, whereas when they say chocolate/durian is yucky they say “eugh”, display other characteristic facial expressions, stop eating it and show disinterest in acquiring more in the future. We don’t need any agreement at all, as far as I can tell, about which specific things are tasty or yucky to learn the meaning of the terms. Likewise with moral language, I don’t think we need widespread agreement about whether specific things are good/bad to learn that if someone says something is “bad” this means they don’t want us to do it, they disapprove of it and we will be punished if we do it etc. Generally I don’t think there’s much connection between the meaning of moral terms and specific things being good or bad: this is what I meant when I said “But I’m not sure why we should expect any substantive normative answers [i.e. specific things being good or bad on the first order level] to be implied by the meaning of moral language”: nothing to do with a particular conception of “normativity.”
Thanks for the clarification, this certainly helps us get more concrete.
>We don’t need people to agree even slightly about whether chocolate/durian are tasty or yucky to learn the meanings of terms.
I agree that I was exaggerating my case. In durian-type-food-only worlds we would merely no longer expect ‘X is tasty’ to convey information to the listener about whether she/he should eat it. This difference does the work in the analogy with morality. Moral language is distinct from expressions of other preferences in that we expect morality-based talk to be somehow more universal, instead of merely expressing our personal preference.
>even that there is [not] much more consensus about the moral badness of pain/goodness of pleasure than about other issues
I believe that we have much greater overlap in our emotional reaction to experiencing certain events e.g. being hit, and we have much greater overlap in our emotional reaction to witnessing certain painful events e.g. seeing someone lose their child to an explosion. Perhaps you don’t want to use the word consensus to describe this phenomenon? Or else you think these sorts of universally shared reactions are unimportant to how we learn moral language?
>Likewise with moral language, I don’t think we need widespread agreement about whether specific things are good/bad to learn that if someone says something is “bad” this means they don’t want us to do it, they disapprove of it and we will be punished if we do it etc.
The way you seem to be describing moral language, I’m not clear on how it is distinct from desire and other preferences? If we did not have shared aversions to pain, and a shared aversion to seeing someone in pain, then moral language would no longer be distinguishable from talk of desire. I suspect you again disagree here, so perhaps you could clarify how, on your account, we learn to distinguish moral injunctions from personal preference based injunctions?
JP: >I believe that we have much greater overlap in our emotional reaction to experiencing certain events e.g. being hit, and we have much greater overlap in our emotional reaction to witnessing certain painful events e.g. seeing someone lose their child to an explosion.
I agree individuals tend to share an aversion to themselves being in pain. I don’t think there’s a particularly noteworthy consensus about it being bad for other people to be in pain or about it being good for other people to have more pleasure. People routinely seem to think that it’s good for others to suffer, and to be indifferent about others experiencing more pleasure. People sometimes try to argue that people really only want others to suffer in order to reduce suffering, for example, but this doesn’t strike me as particularly plausible or as how people characterise their own views when asked. So valenced experience doesn’t strike me as having a particularly central place in ordinary moral psychology IMO.
>I’m not clear on how it is distinct from desire and other preferences? If we did not have shared aversions to pain, and a shared aversion to seeing someone in pain, then moral language would no longer be distinguishable from talk of desire. I suspect you again disagree here, so perhaps you could clarify how, on your account, we learn to distinguish moral injunctions from personal preference based injunctions?
Sure, I just think that moral language differs from desire-talk in various ways unrelated to the specific objects under discussion, i.e. they express different attitudes and perform different functions. For example, saying “I desire that you give me $10” merely communicates that I would like you to give me $10; there’s no implication that you would be apt for disapproval if you didn’t. But if I say “It is morally right that you give me $10”, this communicates that you would be wrong not to give me $10 and would be apt for disapproval if you did not. (I’m not committed to this particular analysis of the meaning of moral terms of course, this is just an example). I think this applies even if we’re referring to pleasure/pain. One can sensibly say “I like/don’t like this pleasant/painful sensation” without thereby saying “It is morally right that you act to promote/alleviate my experience”, or one could say “It is/is not morally right that you act to promote/alleviate my experience.”
>I don’t think there’s a particularly noteworthy consensus about it being bad for other people to be in pain
Sorry, I should’ve been clearer about what I’m referring to. When you say “People routinely seem to think” and “People sometimes try to argue”, I suspect we’re talking past each other. I am not concerned with such learned behaviors, but rather with our innate, neurologically shared emotional response to seeing someone suffering. If you see someone dismembered, it must be viscerally unpleasant. If, as a toddler, you see someone strike your mother, it must be shocking and will make you cry. (To reiterate, I focus on these innate tendencies, because they are what let us establish common reference. Downstream uses of moral and other language are then determined by our shared and personal inductive biases.)
>you would be wrong not to give me $10 and would be apt for disapproval if you did not
Exciting, perhaps we’ve gotten to the crux of our disagreement here! How do we learn which cases have “aptness for disapproval”? This is only possible if we share some initial consensus over what aptness for disapproval involves. I suggest that this initial consensus is the abovementioned shared aversion to physical suffering. Of course, when you learn language from your parents they need not and cannot point at your aversions, but you implicitly use these aversions as the best fitting explanation to generalize your parents’ language. In effect, your task as a toddler is to figure out why your parents sometimes say “that was wrong, don’t do that” instead of “I didn’t like what you did, don’t do that”. I suggest the “that was wrong” cases more often involve a shared reaction on your part—prototypically when your parents are referring to something that caused pain. Compare a child whose parents’ notion of ‘bad’ includes burning your fingers, but only on weekends: she will have more difficulty learning their uses of moral language, because this use does not match our genetic/neurological biases.
Another way of seeing why the core cases of agreement (aka the ostensive basis) for moral language are so important is to look at what happens when someone disagrees with this basis: Consider a madman who believes hurting people is good and letting them go about their life is wrong. I suspect that most people believe we cannot meaningfully argue with him. He may utter moral words but always with an entirely different meaning (extension). In slogan form, “There’s no arguing with a madman”. Or take another sort of madman: someone who agrees with you that usually hurting people is wrong, but then remorselessly goes berserk when he sees anyone with a nose of a certain shape. He simply has a different inductive bias (mental condition). If you deny the significance of the consensus I described in the first paragraph, how do you distinguish between these two madmen and more sensible cases of moral disagreement?
In a world filled with people whose innate biases varied randomly, and who had arbitrary aversions, one could still meaningfully single out a subset of an individual’s preferences which had a universalisable character—i.e. those preferences which she would prefer everyone to hold. However, people’s universalisable preferences would hold no special significance to others, and would function in conversation just as all other preferences do. In contrast, in our world, many of our universalisable preferences are shared and so it makes sense to remind others of them. The fact that these universalisable preferences are shared makes them “apt for disapproval” across the whole community, and this is why we use moral language.
>One can sensibly say “I like/don’t like this pleasant/painful sensation” without thereby saying “It is morally right that you act to promote/alleviate my experience”
Yes, naturally. The reason why the painful sensations matter is that they help us arrive at a shared understanding of the “aptness for disapproval” you describe.
[From DM’s other comment]
>Conversely it seems to me that moral discourse is characterised by widespread disagreement i.e. we can sensibly disagree about whether it’s right or wrong to torture
Yes, I agree work has to be done to explain why utilitarianism parallels arithmetic despite apparent differences. I will likely disagree with you in many places, so hopefully I’ll find time to re-read Kripke. I would enjoy talking about it then.
>When you say “People routinely seem to think” and “People sometimes try to argue”, I suspect we’re talking past each other. I am not concerned with such learned behaviors, but rather with our innate, neurologically shared emotional response to seeing someone suffering. If you see someone dismembered, it must be viscerally unpleasant. If, as a toddler, you see someone strike your mother, it must be shocking and will make you cry
Thanks for clarifying. This doesn’t change my response though, since I don’t think there’s a particularly notable convergence in emotional reactions to observing others in pain which would serve to make valenced emotional reactions a particularly central part of the meaning of moral terms. For example, it seems to me like children (and adults) often think that seeing others in pain is funny (cf. Punch and Judy shows or lots of other comedy), fun to inflict and often well-deserved. And that’s just among modern WEIRD children, who tend to be more Harm focused than non-WEIRD people.
Plenty of other things seem equally if not more central to morality (though I am not arguing that these are central, or part of the meaning of moral terms). For example, I think there’s a good case that people (and primates for that matter) have innate moral reactions to (un)fairness: if a child is given some ice cream and is happy, but then their sibling is given slightly more ice cream and is happy, they will react with moral outrage and will often demand either levelling down their sibling (at a cost to their pleasure) or even just directly inflicting suffering on their sibling. Indeed, children and primates (as well as adults) often prefer that no-one get anything than that an unjust allocation be made, which seems to count somewhat against any simple account centred on pleasant experience. I think innate reactions to do with obedience/disobedience and deference to authority, loyalty/betrayal, honesty/dishonesty etc. are equally central to morality and equally if not more prominent in the cases through which we actually learn morality. So it seems a bunch of other innate reactions may be central to morality and may often morally mandate others’ suffering, so it doesn’t seem likely to me that the very meaning of moral terms can be distinctively tied to the goodness/badness of valenced experience.

Notably, it seems like a very common feature of children’s initial training in morality (until very recently in advanced industrial societies anyway) was parents or others directly inflicting pain on children when they did something wrong, and often the thing they did wrong seems to have had little or nothing to do with valenced experience, nor was it explained in these terms. This seems hard to square with the meaning of moral terms being rooted in the goodness/badness of valenced experience.
>Exciting, perhaps we’ve gotten to the crux of our disagreement here! How do we learn which cases have “aptness for disapproval”? This is only possible if we share some initial consensus over what aptness for disapproval involves. I suggest that this initial consensus is the abovementioned shared aversion to physical suffering.
Just to clarify one thing: when I said that “It is morally right that you give me $10” might communicate (among other things) that you are apt for disapproval if you don’t give me $10 (which is not implied by saying “I desire that you give me $10”), I had in mind something like the following: when I say “It is morally right that you give me $10” this communicates inter alia that I will disapprove of you if you don’t give me $10, that I think it’s appropriate for me to so disapprove, that I think others should disapprove of you and I would disapprove of them if they don’t etc. Maybe it involves a bunch of other attitudes and practical implications as well. That’s in contrast to me just saying “I desire that you give me $10”, which needn’t imply any of the above. That’s what I had in mind by saying that moral terms may communicate that I think you are apt for disapproval if you do something. I’m not sure how you interpreted “apt[ness] for disapproval”, but it sounds from your subsequent comments like you think it means something other than what I mean.
I think the fundamental disagreement here is that I don’t think we need to learn what specific kinds of cases are (considered) morally wrong in order to learn what “morally wrong” means. We could learn, for example, that “That’s wrong!” expresses disapproval without knowing what specific things people disapprove of, and even if literally everyone entirely disagrees about what things are to be disapproved of. I guess I don’t really understand why you think that there needs to be any degree of consensus about these first order moral issues (or about what makes things morally wrong) in order for people to learn the meaning of moral terms, or to distinguish moral terms from terms merely expressing desires.
>In effect, your task as a toddler is to figure out why your parents sometimes say “that was wrong, don’t do that” instead of “I didn’t like what you did, don’t do that”. I suggest the “that was wrong” cases more often involve a shared reaction on your part—prototypically when your parents are referring to something that caused pain.
I agree that learning what things my parents think are morally wrong (or what things they think are morally wrong vs which things they merely dislike) requires generalizing from specific things they say are morally wrong to other things. It doesn’t seem to me that learning what it means for them to say that such and such is morally wrong vs what it means for them to say that they dislike something requires that we learn what specific things people (specifically or in general) think morally wrong / dislike.
To approach this from another angle: perhaps the reason why you think that it is essential to learning the meaning of moral terms (vs the meaning of liking/desiring terms) that we learn what concrete things people think are morally wrong and generalise from that, is because you think that we learn the meaning of moral terms primarily from simple ostension, i.e. we learn that “wrong” refers to kicking people, stealing things, not putting our toys away etc. (whereas we learn that “I like this” refers to flowers, candy, television etc.), and we infer what the terms mean primarily just from working out what general category unites the “wrong” things and what unites the “liked” things, and reference to these concrete categories plays a central role in fixing the meaning of the terms.
But I don’t think we need to assume that language learning operates in this way (which sounds reminiscent of the Augustinian picture of language described at the beginning of PI). I think we can learn the meaning of terms by learning their practical role: e.g. that “that’s morally wrong” implies various practical things about disapproval (including that you will be punished if you do a morally bad thing, that you yourself will be considered morally bad and so face general disapproving attitudes and social censure from others) whereas “I don’t like that” doesn’t carry those implications. I think we find the same thing for various terms, whose meaning consists in different practical implications rather than fixed referents or fixed views about what kinds of things warrant their application (hence people can agree about the meaning of the terms but disagree about which cases they should be applied to: which seems particularly common in morality).
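To make this concrete, here is a toy sketch (entirely illustrative; the implication labels are invented) of how two utterance types can differ purely in their practical roles, without any fixed referents:

```python
# Toy sketch: the meaning of each utterance type modelled purely as its
# practical implications, with no reference to which acts it applies to.
PRACTICAL_ROLE = {
    "I don't like that": {
        "speaker disapproves",
    },
    "That's morally wrong": {
        "speaker disapproves",
        "doer liable to punishment",
        "doer faces general social censure",
        "others expected to disapprove as well",
    },
}

def implications(utterance):
    """What a competent listener learns from the utterance form alone."""
    return PRACTICAL_ROLE[utterance]

# Two speakers can share these roles (agree on meaning) while disagreeing
# entirely about which concrete acts each utterance should be applied to.
print(implications("That's morally wrong") - implications("I don't like that"))
```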
Also, I recognise that you might say “I don’t think that the meaning is necessarily set by specific things being agreed to be wrong, but it is set by a specific attitude which people take/reaction which people have, namely a negative attitude towards people experiencing negatively valenced emotions” (or some such). But I don’t think this changes my response, since I don’t think a shared reaction that is specifically about suffering need be involved to set the meaning of moral terms. I think the meaning of moral terms could consist in distinctive practical implications (e.g. you’ll be punished and I would disapprove of others who don’t disapprove of you, although of course I think the meaning of moral terms is more complex than this) which aren’t implied by mere expressions of desire or distaste etc.
>Another way of seeing why the core cases of agreement (aka the ostensive basis) for moral language are so important is to look at what happens when someone disagrees with this basis: Consider a madman who believes hurting people is good and letting them go about their life is wrong. I suspect that most people believe we cannot meaningfully argue with him.
I agree that it might seem impossible to have a reasoned moral argument with someone who shares none of our moral presuppositions. But I don’t think this tells us anything about the meaning of moral language. Even if we took for granted that the meaning of “That’s wrong!” was simply to express disapproval, I think it would still likely be impossible to reason with someone who didn’t share any moral beliefs with us. I think it may simply be impossible in general to conduct reasoned argumentation with someone with whom we share no agreement about reasons at all.
What seems to matter to me, as a test of the meaning of moral terms, is whether we can understand someone who says “Hurting people is good” as uttering a coherent moral sentence and, as I mentioned before, in this purely linguistic sense I think we can. There’s an important difference between a madman and someone who’s not competent in the use of language.
>how do you distinguish between these two madmen and more sensible cases of moral disagreement?
I don’t think there’s any difference, necessarily, between these cases in terms of how they are using moral language. The only difference consists in how many of our moral beliefs we share (or don’t share). The question is, when we are faced with someone who asserts that it’s good for someone to suffer, or that it’s morally irrelevant whether some other person is having valenced experience and that what matters is whether one is acting nobly, whether we should diagnose these people as misspeaking or as evincing normal moral disagreement. Fwiw I think plenty of people, from early childhood training to advanced philosophy, use moral language in a way which is inconsistent with the analysis that “good”/“bad” centrally refer to valenced experience (in fact, I think the vast majority of people, outside of EAs and utilitarians, don’t use moral language in this way).
>In a world filled with people whose innate biases varied randomly, and who had arbitrary aversions, one could still meaningfully single out a subset of an individual’s preferences which had a universalisable character—i.e. those preferences which she would prefer everyone to hold. However, people’s universalisable preferences would hold no special significance to others, and would function in conversation just as all other preferences do. In contrast, in our world, many of our universalisable preferences are shared and so it makes sense to remind others of them. The fact that these universalisable preferences are shared makes them “apt for disapproval” across the whole community, and this is why we use moral language.
I actually agree that if no-one shared (and could not be persuaded to share) any moral values then the use of moral language could not function in quite the same way it does in practice and likely would not have arisen in the same way it does now, because a large part of the purpose of moral talk (co-ordinating action) would be vitiated. Still, I think that moral utterances (with their current meaning) would still make perfect sense linguistically, just as moral utterances made in cases of discourse between parties who fundamentally disagree (e.g. people who think we should do what God X says we should do vs people who think we should do what God Y says we should do) still make perfect sense.
Crucially, I don’t think that, absent moral consensus, moral utterances would reduce to “function[ing] in conversation just as all other preferences do.” Saying “I think it is morally required for you to give me $10” would still perform a different function than saying “I prefer that you give me $10”, for the same reasons I outlined above. The moral statement is still communicating things other than just that I have an individual preference (e.g. that I’ll disapprove of you for not doing so, endorse this disapproval, think that others should disapprove etc.). The fact that, in this hypothetical world, no-one shares any moral views nor could be persuaded to agree on any, and that this would severely undermine the point of expressing moral views, doesn’t imply that the meaning of moral terms depends on reference to the objects of concrete agreement. (Note that it wouldn’t entirely undermine the point of expressing moral views either: it seems like there would still be some practical purpose to communicating that I disapprove and endorse this disapproval vs merely that I have a preference etc.)
I also agree that moral language is often used to persuade people who share some of our moral views or to persuade people to share our moral views, but don’t think this requires that the meaning of the moral terms depends on or involves consensus about the rightness or wrongness of specific moral things. For moral talk to be capable of serving this practical purpose we just need some degree of people being inclined to respond to the same kinds of things or to be persuaded to share the same attitudes. But this doesn’t require any particularly strong, near-universal consensus or consensus on a particular single thing being morally good/bad. It also need not require that there are some specific things that people are inclined to agree on: it could rather be that people are inclined to defer to the moral views of authorities/their group, and this ensures some degree of consensus regardless. This seems compatible with very, very widespread disagreement in fact: it might be that people are disposed to think that some varying combinations of “fraternity, blood revenge, family pride, filial piety, gavelkind, primogeniture, friendship, patriotism, tribute, diplomacy, common ownership, honour, confession, turn taking, restitution, modesty, mercy, munificence, arbitration, mendicancy, and queuing” (list ripped off from Oliver Scott Curry) are good, and yet disagree with each other to a large extent about which of these are valuable and to what extent and how they should be applied in particular cases. Moral language could still serve a function as people use it simply to express which of these things they approve or disapprove of and expect others to likewise promote or punish, without there being general consensus about what things are wrong and without the meaning of moral terms definitionally being fixed with reference to people’s concrete (and contested and changing) moral views.
Thanks for the long reply. I feel like our conversation becomes more meaningful as it goes on.
>Thanks for clarifying. This doesn’t change my response though, since I don’t think there’s a particularly notable convergence in emotional reactions to observing others in pain which would serve to make valenced emotional reactions a particularly central part of the meaning of moral terms. For example, it seems to me like children (and adults) often think that seeing others in pain is funny (cf. Punch and Judy shows or lots of other comedy), fun to inflict and often well-deserved
Yes, it’s hard to point to exactly what I’m talking about, and perhaps even somewhat speculative since the modern world doesn’t have too much suffering. Let me highlight cases that could change my mind: Soldiers often have PTSD, and I suspect some of this is due to the horrifying nature of what they see. If soldiers’ PTSD were found to be entirely caused by lost friends and had nothing to do with visual experience, I would reduce my credence on this point. When I watched Land of Hope and Glory, I found seeing the suffering of animals disturbing, and this would obviously be worse if the documentary had people suffering in similar conditions to the animals. I am confident that most people have similar reactions, but if they don’t I would change my view of the above. The most relevant childhood experiences are likely those which involve prolonged pain: a skinned knee, a fever, a burn etc. I think what I’m trying to point at could be described as ‘pointless suffering’. Pain in the context of humor, cheap thrills, couch-viewing etc. is not what I’m referring to.
>there’s a good case that people (and primates for that matter) have innate moral reactions to (un)fairness
This seems plausible to me, and I don’t claim that pleasure/pain serve as the only ostensive root grounding moral language. Perhaps (un)fairness is even more prominent, but nevertheless I claim that this group of ostensive bases (pain, unfairness, etc.) is necessary to understand some of moral language’s distinctive features, cf. my original post:
>When confronted with such suffering we react sympathetically, experiencing sadness within ourselves. This sadness may be attributable either to a conscious process of building empathy by imagining the other’s experience, or perhaps to an involuntary immediate reaction resulting from our neural wiring.
Perhaps some of these “involuntary immediate reactions” are best described as reactions to unfairness. For brevity let me refer below to this whole family of ostensive bases as the Shared Moral Base (SMB).
>Notably, it seems like a very common feature of children’s initial training in morality (until very recently in advanced industrial societies anyway) was parents or others directly inflicting pain on children when they did something wrong, and often…
Let me take this opportunity to emphasize that I agree: the tendency for disapproval to follow uses of moral language is an important feature of moral language.
>that I think others should disapprove of you and I would disapprove of them if they don’t
This is the key point. Why do we express disapproval of others when they don’t disapprove of the person who did the immoral act? I claim it’s because we expect them to share certain common, basic reactions, e.g. to pain, unfairness, etc., and when these basic reactions are not salient enough in their actions and their minds, we express disapproval to remind them of SMB. Here’s a prototypical example: an aunt chastises a mother for failing to stop her husband from striking their child in anger. The aunt does so because she knows the mother cares about her children, and more generally doesn’t want people to be hurt unreasonably. If the mother were one of our madmen from above, then the aunt would find it futile to chastise her. To return to my example of “a world filled with people whose innate biases varied randomly”, in that world we would not find it fruitful to disapprove of others when they didn’t disapprove of the wrongdoer. Do you not agree that disapproval would have less significance in that world?
>It doesn’t seem to me that learning what it means for them to say that such and such is morally wrong vs what it means for them to say that they dislike something requires that we learn what specific things people (specifically or in general) think morally wrong / dislike.
True, the learner merely has to learn that they have within themselves some particular disposition towards the morally wrong cases. These dispositions may be various: aversion to pain, aversion to unfairness, guilt, etc. The learner later finds it useful to continue to use moral language, because others outside of her home share these dispositions towards morally wrong cases. To hyperbolize this point: moral language would have a different role if SMB were similar to eye color, i.e. usually shared within the family but diverse outside of it.
>What seems to matter to me, as a test of the meaning of moral terms, is whether we can understand someone who says “Hurting people is good” as uttering a coherent moral sentence and, as I mentioned before, in this purely linguistic sense I think we can.
I agree that it would be natural to call “Hurting people is good” a use of moral language on the part of the madman. I only claim that we can have a different, more substantial, kind of disagreement within our community of people who share SMB than we can with the madman. E.g. the kind of disagreement I describe in the family with the aunt above.
>I also agree that moral language is often used to persuade people who share some of our moral views or to persuade people to share our moral views, but don’t think this requires that the meaning of the moral terms depends on or involves consensus about the rightness or wrongness of specific moral things. For moral talk to be capable of serving this practical purpose we just need some degree of people being inclined to respond to the same kinds of things or to be persuaded to share the same attitudes. But this doesn’t require any particularly strong, near-universal consensus or consensus on a particular single thing being morally good/bad.
Yes, I agree. However, cases in which our conversations are founded on SMB have a distinctive character which is of great importance. I agree that the view described in my original post likely becomes less relevant when applied to disagreements across moral cultures, i.e. between groups with very different SMB. I’m not particularly bothered by this caveat since most discussion of object-level ethics seems to occur within communities of shared SMB, e.g. medical ethics, population ethics, etc.
>Yes, it’s hard to point to exactly what I’m talking about, and perhaps even somewhat speculative since the modern world doesn’t have too much suffering. Let me highlight cases that could change my mind: Soldiers often have PTSD, and I suspect some of this is due to the horrifying nature of what they see. If soldiers’ PTSD were found to be entirely caused by lost friends and had nothing to do with visual experience, I would reduce my credence on this point.
Let me note that I agree (and think it’s uncontroversial) that people often have extreme emotional reactions (including moral reactions) to seeing things like people blown to bits in front of them. So this doesn’t seem like a crux in our disagreement (I think everyone, whatever their metaethical position, endorses this point).
>This seems plausible to me, and I don’t claim that pleasure/pain serve as the only ostensive root grounding moral language. Perhaps (un)fairness is even more prominent, but nevertheless I claim that this group of ostensive bases (pain, unfairness, etc.) is necessary to understand some of moral language’s distinctive features… Perhaps some of these “involuntary immediate reactions” are best described as reactions to unfairness. For brevity let me refer below to this whole family of ostensive bases as the Shared Moral Base (SMB).
OK, so we also agree that people may have a host of innate emotional reactions to things (including, but not limited to valenced emotions).
>This is the key point. Why do we express disapproval of others when they don’t disapprove of the person who did the immoral act? I claim it’s because we expect them to share certain common, basic reactions, e.g. to pain, unfairness, etc., and when these basic reactions are not salient enough in their actions and their minds, we express disapproval to remind them of SMB… To return to my example of “a world filled with people whose innate biases varied randomly”, in that world we would not find it fruitful to disapprove of others when they didn’t disapprove of the wrongdoer. Do you not agree that disapproval would have less significance in that world?
I think I responded to this point directly in the last paragraph of my reply. In brief: if no-one could ever be brought to share any moral views, this would indeed vitiate a large part (though not all) of the function of moral language. But this doesn’t mean “that the meaning of the moral terms depends on or involves consensus about the rightness or wrongness of specific moral things.” All that is required is “some degree of people being inclined to respond to the same kinds of things or to be persuaded to share the same attitudes. But this doesn’t require any particularly strong, near-universal consensus or consensus on a particular single thing being morally good/bad.”
To approach this from another angle: suppose people are somewhat capable of being persuaded to share others views and maybe even, in fact, do tend to share some moral views (which I think is obviously actually true), although they may radically disagree to some extent. Now suppose that the meaning of moral language is just something like what I sketched out above (i.e. I disapprove of people who x, I disapprove of those who don’t disapprove of those who x etc.).* In this scenario it seems completely possible for moral language to function even though the meaning of moral terms themselves is (ex hypothesi) not tied up in any way with agreement that certain specific things are morally good/bad.
*As I argued above, I also think that such a language could easily be learned without consensus on certain things being good or bad.
>I agree that it would be natural to call “Hurting people is good” a use of moral language on the part of the madman. I only claim that we can have a different, more substantial, kind of disagreement within our community of people who share SMB than we can with the madman
>cases in which our conversations are founded on SMB have a distinctive character which is of great importance.
Hmm, it sounds like maybe you don’t think that the meaning of moral terms is tied to certain specific things being judged morally good/bad at all, in which case there may be little disagreement regarding this thread of the discussion.
I agree that moral disagreement between people who share some moral presuppositions has something of a distinctive character from discourse between people who don’t share any moral presuppositions. In the real world, of course, there are always some shared background presuppositions (broadly speaking) even if these are not always at all salient to disagreement.
That said, I don’t know whether I endorse your view about the role of the Shared Moral Base. As I noted above, I do think that there are a host of moral reactions which are innate (Moral Foundations, if you will). But I don’t think these or applications of these play an ‘ostensive’ role (I think we have innate dispositions to respond in certain ways intuitively, but our actual judgements and moral theories and concepts get formed in a pretty environmentally and socially contingent way, leading to a lot of fuzziness and indeterminacy). And I don’t privilege these intuitive views as particularly foundational in the philosophical sense (despite the name).
This leads us back into the practical conclusions in your OP. Suppose that a moral aversion to impure, disgusting things is innate (and arguably one of the most basic moral dispositions). It still seems possible that people routinely overcome and override this basic disposition and just decide that impurity doesn’t matter morally and disgusting things aren’t morally bad (perhaps especially when, as in modern industrialised countries, impure things typically don’t really pose much of a threat). It doesn’t seem to me like we have any particular reason to privilege these basic intuitive responses as foundational, in cases where they conflict with our more abstruse reasoning.
>For moral talk to be capable of serving this practical purpose we just need some degree of people being inclined to respond to the same kinds of things or to be persuaded to share the same attitudes. But this doesn’t require any particularly strong, near-universal consensus or consensus on a particular single thing being morally good/bad. [...] This seems compatible with very, very widespread disagreement in fact: it might be that people are disposed to think that some varying combinations of “fraternity, blood revenge, family pride, filial piety, gavelkind, primogeniture, friendship, patriotism, tribute, diplomacy, common ownership, honour, confession, turn taking, restitution, modesty, mercy, munificence, arbitration, mendicancy, and queuing”
Sorry, I should’ve addressed this directly. The SMB-community picture is somewhat misleading. In reality, any two people likely have only partial overlap in SMB, and the intersection across your whole community of friends is smaller still (but does include pain aversion). Moral disagreement attains a particular level of meaningfulness when both speakers share the SMB relevant to their topic of debate. I now realize that my use of ‘ostensive’ was mistaken. I meant to say, as perhaps has already become clear, that SMB lends substance to moral disagreement. SMB plays a role in defining moral disagreement, but, as you say, SMB likely plays a lesser role when it comes to using moral language outside of disagreement.
>It doesn’t seem to me like we have any particular reason to privilege these basic intuitive responses as foundational, in cases where they conflict with our more abstruse reasoning.
If we agree that SMB plays a crucial role in lending meaning to moral disagreement, then we can understand the nature of moral disagreement without appeal to any ‘abstruse reasoning’. I argue that what we do when disagreeing is emphasizing various parts of SMB to the other. In this picture (moral language = universalizable preferences + eliciting disapproval + an SMB subset), where does abstruse reasoning enter? It only enters when a philosopher sees a family resemblance between moral disagreement and other sorts of epistemological disagreement and thus feels the urge to bring in talk of abstruse reasoning. As described in the OP, for non-philosophers abstruse reasoning only matters as mediated by meta-reactions. In effect, reasoning constraints enter the picture as a subset of our universalizable preferences, but as such there’s no basis for them to override our other object-level universalizable preferences. Of course, I use talk of preferences here loosely; I do believe that these preferences have vague intensities which may sometimes be compared. E.g. someone may feel their meta-reactions particularly strongly, and so these preferences may carry more weight than other preferences because of this intensity of feeling.
>This leads us back into the practical conclusions in your OP. Suppose that a moral aversion to impure, disgusting things is innate (and arguably one of the most basic moral dispositions). It still seems possible that people routinely overcome and override this basic disposition and just decide that impurity doesn’t matter morally and disgusting things aren’t morally bad.
I’m not sure if I know what you’re talking about by ‘impure things’. Sewage perhaps? I’m not sure what it means to have a moral aversion to sewage. Maybe you mean something like the aversion to the untouchable caste? I do not know enough about that to comment.
Independently of the meaning of ‘impure’, let me respond to “people routinely overcome and override this basic disposition”: certainly people’s moral beliefs often come into conflict e.g. trolley problems. I would describe most of these cases as having multiple conflicting universalizable preferences in play. Sometimes one of those preferences is a meta-reaction, e.g. ‘call to universality’, and if the meta-reaction is more salient or intense then perhaps it carries more weight than a ‘basic disposition’. Let me stress again that I do not make a distinction between universalizable preferences which are ‘basic dispositions’ and those which I refer to as meta-reactions. These should be treated on an equal footing.
I’m afraid now the working week has begun again I’m not going to have so much time to continue responding, but thanks for the discussion.
>I’m not sure if I know what you’re talking about by ‘impure things’. Sewage perhaps? I’m not sure what it means to have a moral aversion to sewage. Maybe you mean something like the aversion to the untouchable caste? I do not know enough about that to comment.
I’m thinking of the various things which fall under the Purity/Disgust (or Sanctity/Degradation) foundation in Haidt’s Moral Foundations Theory. This includes a lot of things related to not eating or otherwise exposing yourself to things which elicit disgust, as well as a lot of sexual morality. Rereading the law books of the Bible gives a lot of examples. The sheer prevalence of these concerns in ancient morality, especially as opposed to modern concerns like promoting positive feeling, is also quite telling IMO. For more on the distinctive role of disgust in morality see here or here.
>Let me stress again that I do not make a distinction between universalizable preferences which are ‘basic dispositions’ and those which I refer to as meta-reactions. These should be treated on an equal footing.
I’m not sure how broadly you’re construing ‘meta-reactions’, i.e. would this include basically any moral view which a person might reach based on the ordinary operation of their intuitions and reason and would all of these be placed on an equal footing? If so then I’m inclined to agree, but then I don’t think this account implies anything much at the practical level (e.g. how we should think about animals, population ethics etc.).
>I argue that what we do when disagreeing is emphasizing various parts of SMB to the other.
I may agree with this if, per my previous comment, SMB is construed very broadly i.e. to mean roughly emphasising or making salient shared moral views (of any kind) to each other and persuading people to adopt new moral views. (See Wittgenstein on conversion for discussion of the latter).
>If we agree that SMB plays a crucial role in lending meaning to moral disagreement, then we can understand the nature of moral disagreement without appeal to any ‘abstruse reasoning’… In this picture (moral language = universalizable preferences + eliciting disapproval + an SMB subset), where does abstruse reasoning enter? It only enters when a philosopher sees a family resemblance between moral disagreement and other sorts of epistemological disagreement and thus feels the urge to bring in talk of abstruse reasoning.
I think this may be misconstruing my reference to “abstruse reasoning” in the claim that “It doesn’t seem to me like we have any particular reason to privilege these basic intuitive responses as foundational, in cases where they conflict with our more abstruse reasoning.” Note that I don’t say anything about abstruse reasoning being “necessary to understand the nature of moral disagreement.”
I have in mind cases of moral thinking, such as the example I gave where we override disgust responses based on reflecting that they aren’t actually morally valuable (I think this would include cases like population ethics and judging that whether animals matter depends on whether they have the right kinds of capacities).
It now sounds like you might think that such reflections are on an “equal footing” with judgments that are more immediately related to basic intuitive responses, in which case there may be little or no remaining disagreement. There may be some residual disagreement if you think that such relatively rarefied reflections can’t count as meta-reflections/legitimate moral reasoning, but I don’t think that is the view which you are defending now. My sense is that more or less any moral argument could result from a process of people reflecting on their views and the views of others and seeking consistency, in which case it doesn’t seem to me like any line of moral argument is ruled out or called into question by your metaethical account. That is fine in my view since I think that it’s appropriate that philosophical reflections should ‘leave everything as it is.’
Thanks for the lively discussion! We’ve covered a lot of ground, so I plan to try to condense what was said into a follow-up blog post making similar points as the OP but taking into account all of your clarifications.
>I’m not sure how broadly you’re construing ‘meta-reactions’, i.e. would this include basically any moral view which a person might reach based on the ordinary operation of their intuitions and reason and would all of these be placed on an equal footing?
‘Meta-reactions’ are the subset of our universalizable preferences which express preferences over other preferences (and/or their relations). What it means for them to be ‘placed on equal footing’ is that all of these preferences are comparable. Which of them will take precedence in a given judgement depends on the relative intensity of feeling for each preference. This stands in contrast to views such as total utilitarianism, in which certain preferences are considered irrational and are thus overruled independently of the force with which we feel them.
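If it helps, here is a deliberately crude sketch of what I mean by ‘equal footing’ (my own toy model; the numbers and example preferences are invented):

```python
# Toy sketch: first-order preferences and meta-reactions as entries of the
# same kind; which one carries a judgement is settled only by felt intensity,
# never by ruling a preference "irrational" in advance.
from dataclasses import dataclass

@dataclass
class Preference:
    description: str
    intensity: float       # felt strength; the only source of precedence
    is_meta: bool = False  # a preference over other preferences

def verdict(applicable):
    """The applicable preference felt most intensely wins."""
    return max(applicable, key=lambda p: p.intensity)

prefs = [
    Preference("aversion to this disgusting act", 0.4),
    Preference("my reactions should not hinge on arbitrary features", 0.7, is_meta=True),
]
print(verdict(prefs).description)  # here the meta-reaction wins, but only on intensity
```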
>more or less any moral argument could result from a process of people reflecting on their views and the views of others and seeking consistency
The key point here is ‘seeking consistency’: my view is that the extent to which consistency constraints are morally relevant is contingent on the individual. Any sort of consistency only carries force insofar as it is one of the given individual’s universalizable preferences. In a way, this view does ‘leave everything as it is’ for non-philosophers’ moral debates. I also have no problem with a population ethicist who sees eir task as finding functions which satisfy certain population ethics intuitions. My view only conflicts with population ethics and animal welfare ethics insofar as ey take eir conclusions as a basis for language policing, e.g. when an ethicist claims eir preferred population axiology has implications for understanding everyday uses of moral language.
>I have in mind cases of moral thinking, such as the example I gave where we override disgust responses based on reflecting that they aren’t actually morally valuable.
Within my framework we may override disgust responses by e.g. observing that they are less strong than our other responses, or by observing that—unlike our other responses—they have multiple meta-reactions stacked against them (fairness, ‘call to universality’, etc.) and that we feel those meta-reactions more strongly. I do not endorse coming up with a theory about moral value and then overriding our disgust responses because of the theoretical elegance or epistemological appeal of that theory. I’m not sure whether you have in mind the former or the latter case.
Thank you for following up, and sorry that I haven’t been able to respond as succinctly or clearly as I would’ve liked. I hope to write a follow-up post which more clearly describes the flow of ideas from my comments back to the original blog post, as your comments have helped me see where my background assumptions are likely to differ from others’.
I see now that it would be better to take a step back and explain at a higher level where I'm coming from. My line of reasoning follows the ideas of the later Wittgenstein: many words have meaning defined solely by their use. These words do not have any further, more precise meaning, i.e. no underlying rigid scientific, logical or analytic structure. Take, for example, 'to expect': what does it mean to "expect someone to ring your doorbell at 4pm"? The meaning is irreducibly a melange of criteria and is not well defined for edge cases, e.g. for an amnesiac. There's a lot more to say here; see for example 'Philosophical Investigations' paragraphs 570-625.
That said, I’m perhaps closer to Quine’s ‘The Roots of Reference’ than Wittgenstein when I emphasize the importance of figuring out how we first learn a word’s use. I believe that many—perhaps not all—words such as ‘to expect’, moral language, etc. have some core use cases which are particularly salient thanks to our neurological wirings, everyday activities, childhood interactions, etc. and these use cases can help us draw a line between situations in which a word is well defined and situations in which the meaning of a word breaks down.
Here’s a simple example, the command “Anticipate the past!” steps outside of the boundaries of ‘to anticipate’s meaning, because ‘to anticipate’ usually involves things in the future and thought/actions before the event. When it comes to moral language we have two problems, the first is to distinguish cases of sensible use of moral language from under-defined edge cases, and the second to distinguish between uses of moral language which are better rewritten in other terms. Let me clarify this second case using ‘to anticipate’: ‘anticipate’ can mean to foresee as in “He anticipated Carlsen’s move.”, but also look forward to as in “He greatly anticipated the celebration”. If we want to clarify the first use case, then it’s better to set aside the second and treat them separately. Here’s another example “Sedol anticipated his opponent’s knowledge of opening theory by playing a novel opening.” If Sedol always plays novel openings, and says this game was nothing special then that sentence is false. If Sedol usually never plays novel openings, but says “My opponent’s strength in opening theory was not on my mind”, what then? I would say the meaning of ‘to anticipate’ is simply under-defined in this case.
Although I can’t have done justice to Quine and Wittgenstein let’s pretend I have, and I’ll return to your specific comments.
>It sounds like you see the genealogy of moral terms as involving a melange of all of these, which seems to leave the door quite open as to what moral terms actually mean.
I disagree: there is no other actual meaning beyond the sequence of uses we learn for these words. Perhaps in the future we will discover that moral language has some natural scientific basis, as happened with water, but moral language strikes me as far more similar to expectation than to water.
>It does sound though, from your reply, that you do think that moral language exclusively concerns experiences
Just as with ‘to anticipate’ where sometimes you can anticipate without explicitly thinking of the consequence so to for people using moral language. They often do not explicitly think of these experiences, but their use of the words is still rooted in the relevant experiences (in a fuzzy way). Of course, some other uses of ‘right’ and ‘wrong’ are better seen as something entirely different e.g. ‘right’ as used to refer to following a samurai’s code of honor. This is an important point, so I’ve elaborated on it in my other reply.
>I can observe that there is such-and-such level of inequality in the distribution of income in a society.
If this observation is rooted in experience, i.e. extrapolating from your experience of seeing people in a system with certain levels of inequality, then sure. Of course, since this extrapolation depends on those experiences, you should not be confident in extrapolating the rightness/wrongness of something solely from a certain Gini coefficient.
But I’m not sure why we should expect any substantive normative answers to be implied by the meaning of moral language.
I do not claim that my framework supports the sort of normativity many philosophers (perhaps you too) are interested in. I do not believe talk of normative force is coherent, but I'd prefer not to go into that here. My claim is simply that my framework lets us coherently answer some questions I'm interested in. Put differently, I'd like to focus discussion on my argument 'by its own lights'.
Thanks for your reply. I’m actually very sympathetic to Wittgenstein’s account of language: before I decided to move to an area with higher potential impact, I had been accepted to study for a PhD on the implications of Wittgensteinian meta-philosophy for ethics. (I wouldn’t use the term metaphilosophy in this context of course, since I was largely focused on the view expressed in PI 119 that “…we may not advance any kind of theory. There must not be anything hypothetical in our considerations. We must do away with all explanation, and description alone must take its place.”)
All that said, it seems we disagree in quite a few places.
DM:
>It sounds like you see the genealogy of moral terms as involving a melange of all of these, which seems to leave the door quite open as to what moral terms actually mean.
JP:
>I disagree: there is no other actual meaning beyond the sequence of uses we learn for these words.
I don’t think our use of language is limited to the kinds of cases through which we initially learn the use of particular terms. For example, we learn the use of numbers through exceptionally simple cases “If I have one banana and then another banana, I have two bananas” and then later get trained in things like multiplication etc., but then we clearly go on to use mathematical language in much more complex and creative ways, which include extending the language in radical ways. It would be a mistake to conclude that we can’t do these things because they go beyond the uses we initially learn and note that Wittgenstein doesn’t say this either in his later work in the philosophy of mathematics. I agree it’s a common Wittgensteinian move to say that our use of language breaks down when we extend it inappropriately past ordinary usage- but if you look at Wittgenstein’s treatment of mathematics it certainly does not tell mathematicians to stop doing the very complex mathematical speculation which is far removed from the ways in which we are initially trained in mathematics. Indeed, I think it’s anti-Wittgensteinian to attempt to interfere with or police the way people ordinarily use language in this way. Of course, the Wittgensteinian can call into question certain ways of thinking (e.g. that our ordinary mathematical practice implies Platonism), although we need to do careful philosophical work to highlight potential problems with specific ways of thinking. Fwiw, it seems to me like your conclusions stray into telling ordinary moral language users that they can’t use moral language (or think about moral considerations) that they otherwise do or would, though of course it would require more discussion of your precise position to determine this.
But that aside, it still seems to me that how we actually ordinarily use moral language is left quite open by your account of how we learn moral language, since you say it includes a mix of "reactions [which] include approval, preferences and beliefs." That seems compatible, to me, with us coming to use moral language in a wide variety of ways. Of course, you could argue for a more specific genealogy of how we come to use moral language, explaining why we come to use it only (or at least primarily) to convey certain specific attitudes of (dis)approval or preferences or beliefs about preferences.
It seems like your own account of how we learn language involves us extending the use of moral language too: we first learn that bad things are disapproved of (e.g. our parents disapprove of us burning ourselves in fires), then we "extend our use of moral language beyond the[se] simple cases" to introduce preferences and, at some point, beliefs. So if you allow that much, it doesn't seem clear why we should think that our uses of moral language are still properly limited to the kinds of uses which are (ex hypothesi) part of our initial training. It seems quite conceivable to me that we initially learn moral language in something like the way you describe, but then collectively move on to almost any number of more complex uses, such as considering what we would collectively endorse in such-and-such scenarios. And once we go that far (which I think we should, in order to adequately account for how we see people actually using moral language) I don't think we're in a position to rule out as impossible baroque speculations about population ethics etc.
>I had been accepted to study for a PhD on the implications of Wittgensteinian meta-philosophy for ethics.
Well, I for one would've liked to have read the thesis! Wonderful; I suppose then most of my background talk was redundant. When it comes to mathematics, I found the arguments in Kripke's 'Wittgenstein on Rules and Private Language' quite convincing. I would love to see someone do an in-depth translation applying everything Kripke says about arithmetic to total utilitarianism. I think this would be quite useful, and perhaps work well with my ideas here.
Yes, I agree that what I've been doing looks a lot like language policing, so let me clarify. Rather than claiming talk of population ethics etc. is invalid or incoherent, it would be more accurate to say I see it as apparently baseless and that I do not fully understand the connection with our other uses of moral language. When others choose to extend their moral language to population ethics, their language is likely coherent within their community. Probably they have found a group within which they share similar inductive biases, which endow their novel uses of moral language with reference. However, insofar as they expect me to follow along with this extension (indeed insofar as they expect their conclusions about population ethics to have force for non-population-ethicists) they must explain how their extension of moral language follows from our shared ostensive basis for moral language and our shared inductive biases. My arguments have attempted to show that our shared ostensive basis for moral language does not straightforwardly support talk of population ethics, because such talk does not share the same basis in negatively/positively valenced emotions.
Put in more Wittgensteinian terms, population ethics language bears a family resemblance to our more mundane use of moral language, but it does not share the universal motivating force provided by our common emotional reactions to e.g. a hit to the head. Of course, some philosophers probably react viscerally and emotionally to talk of the repugnant conclusion; in that case, for them the repugnant conclusion carries some force that it does not for others. So to return to the policing question: I am not policing insofar as I agree that their language is meaningful and provides insight to their community. Claims like "Total utilitarianism better captures our population ethics intuitions than …" can be true or false. However, any move to then say "Your use of moral language should be replaced by uses which agree with our population ethics intuitions" seems baseless, and perhaps could itself be described as an act of policing on the part of the speaker.
>When it comes to mathematics, I found the arguments in Kripke's 'Wittgenstein on Rules and Private Language' quite convincing. I would love to see someone do an in-depth translation applying everything Kripke says about arithmetic to total utilitarianism. I think this would be quite useful, and perhaps work well with my ideas here.
That makes sense. I personally think that “Kripkenstein’s” views are quite different from Wittgenstein’s own views on mathematics.
It seems there’s a bit of a disanalogy between the case of simple addition and the case of moral language. In the case of addition we observe widespread consensus (no-one feels any inclination to start using quus for whatever reason). Conversely it seems to me that moral discourse is characterised by widespread disagreement i.e. we can sensibly disagree about whether it’s right or wrong to torture, whether it’s right or wrong for a wrongdoer to suffer, whether it’s good to experience pleasure if it’s unjustly earned and so on. This suggests to me that moral terms aren’t defined by reference to certain concrete things we agree are good.
>Yes, I agree that what I've been doing looks a lot like language policing, so let me clarify. Rather than claiming talk of population ethics etc. is invalid or incoherent, it would be more accurate to say I see it as apparently baseless and that I do not fully understand the connection with our other uses of moral language… insofar as they expect me to follow along with this extension (indeed insofar as they expect their conclusions about population ethics to have force for non-population-ethicists) they must explain how their extension of moral language follows from our shared ostensive basis for moral language and our shared inductive biases. My arguments have attempted to show that our shared ostensive basis for moral language does not straightforwardly support talk of population ethics, because such talk does not share the same basis in negatively/positively valenced emotions.
OK, so it sounds like the core issue here is whether moral terms are defined at their core by reference to valenced emotions, which I'll continue discussing in the other thread.
Here’s another way of explaining where I’m coming from. The meaning of our words is set by ostensive definition plus our inductive bias. E.g. when defining red and purple we agree upon some prototypical cases of red and purple by perhaps pointing at red and saying ‘red’. Then upon seeing maroon for the first time, we call it red because our brains process maroon in a similar way to how they process red. (Incidentally, the first part—pointing at red—is also only meaningful because we share inductive biases around pointing and object boundaries.) Of course in some lucky cases, e.g. ‘water’, ‘one’, etc., a scientific or formal definition appears coextensive with the definition and so is preferred for some purposes.
As another example, take durian. Imagine you are trying to explain what the word 'tasty' means, and so you feed someone some things that are tasty to you, e.g. candy and durian. Unfortunately, people have very different reactions to durian, so it would not be a good idea to use durian to try to define 'tasty'. In fact, if all the human race ate was durian, we could not use the word 'tasty' in the same way. In a world with only one food, and in which people randomly liked or disliked that food, a word similar to 'tasty' would describe people (and their reactions), not the food itself.
Returning to moral language: we almost uniformly agree about the experience of tripping and skinning one's knee. This lets moral language get off the ground, and puts us in our world as opposed to the 'durian-only moral world'. There are some phenomena over which we disagree: perhaps inegalitarian processes are one. Imagine a wealthy individual decides to donate her money to the townspeople, but distributes her wealth based on an apparently arbitrary 10-second interview with each townsperson. Perhaps some people react negatively, feeling displeasure and disgust when hearing about this behavior, whereas others see it as just as good as if she had distributed the wealth uniformly. This connects with what I was saying above:
I privilege uses of moral language as applied to experiences, and in particular pain/pleasure, because these are the central cases over which there is agreement, and from which the other uses of moral language flow. There's considerable variance in our inductive biases, and so perhaps for some people the most natural way to extend uses of moral language from their ostensive childhood basis includes inegalitarian processes. Nevertheless, inegalitarian processes cannot be seen as the basis for moral language; that would be like claiming the experience of eating durian can be used to define 'tasty'. I do agree that injunctions may perhaps be the first use of 'bad' we learn, but the use of 'bad' as part of moral language necessarily connects with its use in referring to pain and pleasure, since otherwise it would be indistinguishable from expressions of desire/threats on the part of the speaker.
>I privilege uses of moral language as applied to experiences, and in particular pain/pleasure, because these are the central cases over which there is agreement, and from which the other uses of moral language flow… I do agree that injunctions may perhaps be the first use of 'bad' we learn, but the use of 'bad' as part of moral language necessarily connects with its use in referring to pain and pleasure, since otherwise it would be indistinguishable from expressions of desire/threats on the part of the speaker.
OK, on a concrete level, I think we just clearly disagree about how central references to pleasure and pain are in moral language, or how necessary they are. I don't think they are particularly central, or even that there is much more consensus about the moral badness of pain/goodness of pleasure than about other issues (e.g. stealing others' property, lying, loyalty/betrayal).
It also sounds like you think that for us to learn the meaning of moral language there needs to be broad consensus about the goodness/badness of specific things (e.g. pleasure/pain). I don't think this is so. Take the tastiness example: we don't need people to agree even slightly about whether chocolate/durian are tasty or yucky to learn the meanings of the terms. We can observe that when people say chocolate/durian is tasty they go "mmm", display characteristic facial expressions, eat more of it and seek to acquire more in the future, whereas when they say chocolate/durian is yucky they say "eugh", display other characteristic facial expressions, stop eating it and show no interest in acquiring more in the future. We don't need any agreement at all, as far as I can tell, about which specific things are tasty or yucky to learn the meaning of the terms. Likewise with moral language: I don't think we need widespread agreement about whether specific things are good/bad to learn that if someone says something is "bad" this means they don't want us to do it, they disapprove of it, we will be punished if we do it, etc. Generally, I don't think there's much connection between the meaning of moral terms and specific things being good or bad. This is what I meant when I said "But I'm not sure why we should expect any substantive normative answers [i.e. specific things being good or bad on the first-order level] to be implied by the meaning of moral language"; it was nothing to do with a particular conception of "normativity."
Thanks for the clarification; this certainly helps us get more concrete.
I agree that I was exaggerating my case. In durian-type-food-only worlds we would merely no longer expect 'X is tasty' to convey information to the listener about whether she/he should eat it. This difference does the work in the analogy with morality: moral language is distinct from expressions of other preferences in that we expect morality-based talk to be somehow more universal, rather than merely expressing our personal preference.
I believe that we have much greater overlap in our emotional reaction to experiencing certain events, e.g. being hit, and we have much greater overlap in our emotional reaction to witnessing certain painful events, e.g. seeing someone lose their child to an explosion. Perhaps you don't want to use the word 'consensus' to describe this phenomenon? Or do you think these sorts of universally shared reactions are unimportant to how we learn moral language?
Given the way you seem to be describing moral language, I'm not clear on how it is distinct from desire and other preferences. If we did not have shared aversions to pain, and a shared aversion to seeing someone in pain, then moral language would no longer be distinguishable from talk of desire. I suspect you again disagree here, so perhaps you could clarify how, on your account, we learn to distinguish moral injunctions from injunctions based on personal preference?
JP:
>I believe that we have much greater overlap in our emotional reaction to experiencing certain events, e.g. being hit, and we have much greater overlap in our emotional reaction to witnessing certain painful events, e.g. seeing someone lose their child to an explosion.
I agree individuals tend to share an aversion to being in pain themselves. I don't think there's a particularly noteworthy consensus that it's bad for other people to be in pain or good for other people to have more pleasure. People routinely seem to think that it's good for others to suffer, and to be indifferent about others experiencing more pleasure. People sometimes try to argue that people really only want others to suffer in order to reduce suffering overall, for example, but this doesn't strike me as particularly plausible, or as how people characterise their own views when asked. So valenced experience doesn't strike me as having a particularly central place in ordinary moral psychology.
>I’m not clear on how it is distinct from desire and other preferences? If we did not have shared aversions to pain, and a shared aversion to seeing someone in pain, then moral language would no longer be distinguishable from talk of desire. I suspect you again disagree here, so perhaps you could clarify how, on your account, we learn to distinguish moral injunctions from personal preference based injunctions?
Sure. I just think that moral language differs from desire-talk in various ways unrelated to the specific objects under discussion, i.e. they express different attitudes and perform different functions. For example, saying "I desire that you give me $10" merely communicates that I would like you to give me $10; there's no implication that you would be apt for disapproval if you didn't. But if I say "It is morally right that you give me $10", this communicates that you would be wrong not to give me $10 and would be apt for disapproval if you did not. (I'm not committed to this particular analysis of the meaning of moral terms, of course; this is just an example.) I think this applies even if we're referring to pleasure/pain: one can sensibly say "I like/don't like this pleasant/painful sensation" without thereby saying "It is morally right that you act to promote/alleviate my experience", or one could say "It is/is not morally right that you act to promote/alleviate my experience."
Sorry, I should’ve been more clear about what I’m referring to. When you say “People routinely seem to think” and “People sometimes try to argue”, I suspect we’re talking past each other. I am not concerned with such learned behaviors, but rather with our innate neurologically shared emotional response to seeing someone suffering. If you see someone dismembered it must be viscerally unpleasant. If you see someone strike your mother as a toddler it must be shocking and will make you cry. (To reiterate, I focus on these innate tendencies, because they are what let us establish common reference. Downstream uses of moral and other language are then determined by our shared and personal inductive biases.)
Exciting, perhaps we’ve gotten to the crux of our disagreement here! How do we learn what cases are have “aptness for disapproval”? This is only possible if we share some initial consensus over what aptness for disapproval involves. I suggest that this initial consensus is the abovementioned shared aversion to physical suffering. Of course, when you learn language from your parents they need not and cannot point at your aversions, but you implicitly use these aversions as the best fitting explanation to generalize your parents language. In effect, your task as a toddler is to figure out why your parents sometimes say “that was wrong, don’t do that” instead of “I didn’t like what you did, don’t do that”. I suggest the “that was wrong” cases more often involve a shared reaction on your part—prototypically when your parents are referring to something that caused pain. Compare to a child whose parents’ whose notion of bad includes burning your fingers but only on weekends, she will have more difficulty learning their uses of moral language, because this use does not match our genetic/neurological biases.
Another way of seeing why the core cases of agreement (a.k.a. the ostensive basis) for moral language are so important is to look at what happens when someone disagrees with this basis. Consider a madman who believes hurting people is good and letting them go about their life is wrong. I suspect that most people believe we cannot meaningfully argue with him. He may utter moral words, but always with an entirely different meaning (extension). In slogan form: "There's no arguing with a madman". Or take another sort of madman: someone who agrees with you that hurting people is usually wrong, but then remorselessly goes berserk when he sees anyone with a nose of a certain shape. He simply has a different inductive bias (mental condition). If you deny the significance of the consensus I described in the first paragraph, how do you distinguish between these two madmen and more sensible cases of moral disagreement?
In a world filled with people whose innate biases varied randomly, and who had arbitrary aversions, one could still meaningfully single out the subset of an individual's preferences which have a universalizable character, i.e. those preferences which she would prefer everyone to hold. However, people's universalizable preferences would hold no special significance for others, and would function in conversation just as all other preferences do. In contrast, in our world many of our universalizable preferences are shared, and so it makes sense to remind others of them. The fact that these universalizable preferences are shared makes them "apt for disapproval" across the whole community, and this is why we use moral language.
Yes, naturally. The reason why the painful sensations matter is that they help us arrive at a shared understanding of the “aptness for disapproval” you describe.
[From DM’s other comment]
Yes, I agree work has to be done to explain why utilitarianism parallels arithmetic despite the apparent differences. I will likely disagree with you in many places, so hopefully I'll find time to re-read Kripke; I would enjoy talking about it then.
Apologies in advance for the long reply.
Thanks for clarifying. This doesn't change my response, though, since I don't think there's a particularly notable convergence in emotional reactions to observing others in pain which would serve to make valenced emotional reactions a particularly central part of the meaning of moral terms. For example, it seems to me like children (and adults) often think that seeing others in pain is funny (cf. Punch and Judy shows, or lots of other comedy), fun to inflict, and often well-deserved. And that's just among modern WEIRD children, who tend to be more Harm-focused than non-WEIRD people.
Plenty of other things seem equally if not more central to morality (though I am not arguing that these are central, or part of the meaning of moral terms). For example, I think there's a good case that people (and primates, for that matter) have innate moral reactions to (un)fairness: if a child is given some ice cream and is happy, but then their sibling is given slightly more ice cream, they will react with moral outrage and will often demand either levelling down their sibling (at a cost to their pleasure) or even directly inflicting suffering on their sibling. Indeed, children and primates (as well as adults) often prefer that no-one get anything than that an unjust allocation be made, which seems to count somewhat against any simple account based on pleasant experience. I think innate reactions to do with obedience/disobedience and deference to authority, loyalty/betrayal, honesty/dishonesty etc. are equally central to morality and equally if not more prominent in the cases through which we actually learn morality. So it seems a bunch of other innate reactions may be central to morality, and these often morally mandate others' suffering, so it doesn't seem likely to me that the very meaning of moral terms can be distinctively tied to the goodness/badness of valenced experience. Notably, it seems like a very common feature of children's initial training in morality (until very recently in advanced industrial societies, anyway) was that parents or others directly inflicted pain on children when they did something wrong, and often the thing they did wrong had little or nothing to do with valenced experience, nor was it explained in those terms. This seems hard to square with the meaning of moral terms being rooted in the goodness/badness of valenced experience.
Just to clarify one thing: when I said that "It is morally right that you give me $10" might communicate (among other things) that you are apt for disapproval if you don't give me $10 (which is not implied by saying "I desire that you give me $10"), I had in mind something like the following: when I say "It is morally right that you give me $10", this communicates inter alia that I will disapprove of you if you don't give me $10, that I think it's appropriate for me to so disapprove, that I think others should disapprove of you and that I would disapprove of them if they don't, etc. Maybe it involves a bunch of other attitudes and practical implications as well. That's in contrast to me just saying "I desire that you give me $10", which needn't imply any of the above. That's what I had in mind by saying that moral terms may communicate that I think you are apt for disapproval if you do something. I'm not sure how you interpreted "apt[ness] for disapproval", but it sounds from your subsequent comments like you think it means something other than what I mean.
I think the fundamental disagreement here is that I don't think we need to learn what specific kinds of cases are (considered) morally wrong in order to learn what "morally wrong" means. We could learn, for example, that "That's wrong!" expresses disapproval without knowing what specific things people disapprove of, and even if literally everyone entirely disagrees about what things are to be disapproved of. I guess I don't really understand why you think that there needs to be any degree of consensus about these first-order moral issues (or about what makes things morally wrong) in order for people to learn the meaning of moral terms, or to distinguish moral terms from terms merely expressing desires.
I agree that learning which things my parents think are morally wrong (or which things they think are morally wrong vs which things they merely dislike) requires generalizing from specific things they say are morally wrong to other things. But it doesn't seem to me that learning what it means for them to say that such-and-such is morally wrong, vs what it means for them to say that they dislike something, requires that we learn which specific things people (specifically or in general) think morally wrong / dislike.
To approach this from another angle: perhaps the reason you think that learning the meaning of moral terms (vs the meaning of liking/desiring terms) essentially requires learning which concrete things people think are morally wrong, and generalising from that, is that you think we learn the meaning of moral terms primarily from simple ostension. I.e. we learn that "wrong" refers to kicking people, stealing things, not putting our toys away etc. (whereas we learn that "I like this" refers to flowers, candy, television etc.), and we infer what the terms mean primarily just by working out what general category unites the "wrong" things and what unites the "liked" things, with reference to these concrete categories playing a central role in fixing the meaning of the terms.
But I don’t think we need to assume that language learning operates in this way (which sounds reminiscent of the Augustinian picture of language described at the beginning of PI). I think we can learn the meaning of terms by learning their practical role: e.g. that “that’s morally wrong” implies various things practical things about disapproval (including that you will be punished if you do a morally bad thing, that you yourself will be considered morally bad and so face general disapproving attitudes and social censure from others) whereas “I don’t like that” doesn’t carry those implications. I think we find the same thing for various terms, where we find their meaning consists in different practical implications rather than fixed referents or fixed views about what kinds of things warrant their application (hence people can agree about the meaning of the terms but disagree about to which cases they should be applied to: which seems particularly common in morality).
Also, I recognise that you might say "I don't think that the meaning is necessarily set by specific things being agreed to be wrong, but it is set by a specific attitude which people take/reaction which people have, namely a negative attitude towards people experiencing negatively valenced emotions" (or some such). But I don't think this changes my response, since I don't think a shared reaction that is specifically about suffering need be involved to set the meaning of moral terms. I think the meaning of moral terms could consist in distinctive practical implications (e.g. that you'll be punished, and that I would disapprove of others who don't disapprove of you, although of course I think the meaning of moral terms is more complex than this) which aren't implied by mere expressions of desire or distaste etc.
I agree that it might seem impossible to have a reasoned moral argument with someone who shares none of our moral presuppositions. But I don't think this tells us anything about the meaning of moral language. Even if we took for granted that the meaning of "That's wrong!" was simply to express disapproval, I think it would still likely be impossible to reason with someone who didn't share any moral beliefs with us. It may simply be impossible in general to conduct reasoned argumentation with someone with whom we share no agreement about reasons at all.
What seems to matter to me, as a test of the meaning of moral terms, is whether we can understand someone who says "Hurting people is good" as uttering a coherent moral sentence, and, as I mentioned before, in this purely linguistic sense I think we can. There's an important difference between a madman and someone who's not competent in the use of language.
I don’t think there’s any difference, necessarily, between these cases in terms of how they are using moral language. The only difference consists in how many of our moral beliefs we share (or don’t share). The question is whether, when we faced with someone who asserts that it’s good for someone to suffer or morally irrelevant whether some other person is having valenced experience and that what matters is whether one is acting nobly, whether we should diagnose these people as misspeaking or evincing normal moral disagreement. Fwiw I think plenty of people from early childhood training to advanced philosophy use moral language in a way which is inconsistent with the analysis that “good”/“bad” centrally refer to valenced experience (in fact, I think the vast majority of people, outside of EAs and utilitarians, don’t use morality in this way).
I actually agree that if no-one shared (and could not be persuaded to share) any moral values, then moral language could not function in quite the same way it does in practice, and likely would not have arisen in the way it has, because a large part of the purpose of moral talk (coordinating action) would be vitiated. Still, I think that moral utterances (with their current meaning) would still make perfect sense linguistically, just as moral utterances made in discourse between parties who fundamentally disagree (e.g. people who think we should do what God X says vs people who think we should do what God Y says) still make perfect sense.
Crucially, I don’t think that, absent moral consensus, moral utterances would reduce to “function[ing] in conversation just as all other preferences do.” Saying “I think it is morally required for you to give me $10″ would still perform a different function than saying “I prefer that you to give me $10” for the same reasons I outlined above. The moral statement is still communicating things other than just that I have an individual preference (e.g. that I’ll disapprove of you for not doing so, endorse this disapproval, think that others should disapprove etc.). The fact that, in this hypothetical world where no-one shares any consensus about moral views nor could be persuaded to agree on any moral views and this would severely undermine the point of expressing moral views doesn’t imply that the meaning of moral terms depends on reference to the objects of concrete agreement. (Note that it wouldn’t entirely undermine the point of expressing moral views either: it seems like there would still be some practical purpose to communicating that I disapprove and endorse this disapproval vs merely that I have a preference etc.)
I also agree that moral language is often used to persuade people who share some of our moral views, or to persuade people to share our moral views, but I don't think this requires that the meaning of the moral terms depends on or involves consensus about the rightness or wrongness of specific moral things. For moral talk to be capable of serving this practical purpose we just need some degree of people being inclined to respond to the same kinds of things or to be persuaded to share the same attitudes. But this doesn't require any particularly strong, near-universal consensus or consensus on a particular single thing being morally good/bad. Nor need it require that there are some specific things that people are inclined to agree on: it could, rather, be that people are inclined to defer to the moral views of authorities/their group, and this ensures some degree of consensus regardless. This seems compatible with very, very widespread disagreement in fact: it might be that people are disposed to think that some varying combinations of "fraternity, blood revenge, family pride, filial piety, gavelkind, primogeniture, friendship, patriotism, tribute, diplomacy, common ownership, honour, confession, turn taking, restitution, modesty, mercy, munificence, arbitration, mendicancy, and queuing" (list ripped off from Oliver Scott Curry) are good, and yet disagree with each other to a large extent about which of these are valuable, to what extent, and how they should be applied in particular cases. Moral language could still serve a function as people use it simply to express which of these things they approve or disapprove of and expect others to likewise promote or punish, without there being general consensus about what things are wrong and without the meaning of moral terms being definitionally fixed with reference to people's concrete (and contested and changing) moral views.
Thanks for the long reply. I feel like our conversation becomes more meaningful as it goes on.
Yes, it’s hard to point to exactly what I’m talking about, and perhaps even somewhat speculative since the modern world doesn’t have too much suffering. Let me highlight cases that could change my mind: Soldiers often have PTSD, and I suspect some of this is due to the horrifying nature of what they see. If soldiers’ PTSD was found to be entirely caused by lost friends and had nothing to do with visual experience, I would reduce my credence on this point. When I watched Land of Hope and Glory I found seeing the suffering of animals disturbing, and this would obviously be worse if the documentary had people suffering in similar conditions to the animals. I am confident that most people have similar reactions, but if they don’t I would change my view of the above. The most relevant childhood experiences are likely those which involve prolonged pain: a skinned knee, a fever, a burn etc. I think what I’m trying to point at could be described as ‘pointless suffering’. Pain in the context of humor, cheap thrills, couch-viewing etc. is not what I’m referring to.
This seems plausible to me, and I don't claim that pleasure/pain serve as the only ostensive root grounding moral language. Perhaps (un)fairness is even more prominent, but I nevertheless claim that this group of ostensive bases (pain, unfairness, etc.) is necessary to understand some of moral language's distinctive features, cf. my original post.
Perhaps some of these "involuntary immediate reactions" are best described as reactions to unfairness. For brevity, let me refer below to this whole family of ostensive bases as the Shared Moral Base (SMB).
Let me take this opportunity to emphasize that I agree: the subsequent tendency to disapprove following use of moral language is an important feature of moral language.
This is the key point. Why do we express disapproval of others when they don't disapprove of the person who did the immoral act? I claim it's because we expect them to share certain common, basic reactions, e.g. to pain, unfairness, etc., and when these basic reactions are not salient enough in their actions and their mind, we express disapproval to remind them of the SMB. Here's a prototypical example: an aunt chastises a mother for failing to stop her husband from striking their child in anger. The aunt does so because she knows the mother cares about her children and, more generally, doesn't want people to be hurt unreasonably. If the mother were one of our madmen from above, then the aunt would find it futile to chastise her. To return to my example of "a world filled with people whose innate biases varied randomly": in that world we would not find it fruitful to disapprove of others when they failed to disapprove of a wrongdoer. Do you not agree that disapproval would have less significance in that world?
True, the learner merely has to learn that they have within themselves some particular disposition towards the morally wrong cases. These dispositions may be various: aversion to pain, aversion to unfairness, guilt, etc. The learner later finds it useful to continue to use moral language because others outside of her home share these dispositions towards the morally wrong cases. To hyperbolize the point: moral language would have a different role if the SMB were like eye color, i.e. usually shared within a family but diverse outside of it.
I agree that it would be natural to call "Hurting people is good" a use of moral language on the part of the madman. I only claim that we can have a different, more substantial kind of disagreement within our community of people who share the SMB than we can with the madman, e.g. the kind of disagreement I describe in the family example with the aunt above.
Yes, I agree. However, cases in which our conversations are founded on the SMB have a distinctive character which is of great importance. I agree that the view described in my original post likely becomes less relevant when applied to disagreements across moral cultures, i.e. between groups with very different SMBs. I'm not particularly bothered by this caveat, since most discussion of object-level ethics seems to occur within communities that share an SMB, e.g. medical ethics, population ethics, etc.
Let me note that I agree (and think it’s uncontroversial) that people often have extreme emotional reactions (including moral reactions) to seeing things like people blown to bits in front of them. So this doesn’t seem like a crux in our disagreement (I think everyone, whatever their metaethical position, endorses this point).
OK, so we also agree that people may have a host of innate emotional reactions to things (including, but not limited to valenced emotions).
I think I responded to this point directly in the last paragraph of my reply. In brief: if no-one could ever be brought to share any moral views, this would indeed vitiate a large part (though not all) of the function of moral language. But this doesn’t mean “that the meaning of the moral terms depends on or involves consensus about the rightness or wrongness of specific moral things.” All that is required is “some degree of people being inclined to respond to the same kinds of things or to be persuaded to share the same attitudes. But this doesn’t require any particularly strong, near-universal consensus or consensus on a particular single thing being morally good/bad.”
To approach this from another angle: suppose people are somewhat capable of being persuaded to share others' views, and maybe even, in fact, do tend to share some moral views (which I think is obviously actually true), although they may radically disagree to some extent. Now suppose that the meaning of moral language is just something like what I sketched out above (i.e. I disapprove of people who x, I disapprove of those who don't disapprove of those who x, etc.).* In this scenario it seems completely possible for moral language to function even though the meaning of moral terms themselves is (ex hypothesi) not tied up in any way with agreement that certain specific things are morally good/bad.
*As I argued above, I also think that such a language could easily be learned without consensus on certain things being good or bad.
Hmm, it sounds like maybe you don't think that the meaning of moral terms is tied to certain specific things being judged morally good/bad at all, in which case there may be little disagreement in this thread of the discussion.
I agree that moral disagreement between people who share some moral presuppositions has something of a distinctive character from discourse between people who don’t share any moral presuppositions. In the real world, of course, there are always some shared background presuppositions (broadly speaking) even if these are not always at all salient to disagreement.
That said, I don’t know whether I endorse your view about the role of the Shared Moral Base. As I noted above, I do think that there are a host of moral reactions which are innate (Moral Foundations, if you will). But I don’t think these or applications of these play an ‘ostensive’ role (I think we have innate dispositions to respond in certain ways intuitively, but our actual judgements and moral theories and concepts get formed in a pretty environmentally and socially contingent way, leading to a lot of fuzziness and indeterminacy). And I don’t privilege these intuitive views as particularly foundational in the philosophical sense (despite the name).
This leads us back to the practical conclusions in your OP. Suppose that a moral aversion to impure, disgusting things is innate (and arguably one of the most basic moral dispositions). It still seems possible that people routinely overcome and override this basic disposition and just decide that impurity doesn't matter morally and disgusting things aren't morally bad (perhaps especially when, as in modern industrialised countries, impure things typically don't pose much of a threat). It doesn't seem to me like we have any particular reason to privilege these basic intuitive responses as foundational, in cases where they conflict with our more abstruse reasoning.
[From a previous DM comment]
Sorry, I should’ve addressed this directly. The SMB-community picture is somewhat misleading. In reality, you likely have partial overlap in SMB and the intersection of your whole community of friends is less (but does include pain aversion). Moral disagreement attains a particular level of meaningfulness when both speakers share SMB relevant to their topic of debate. I now realize that my use of ‘ostensive’ was mistaken. I meant to say, as perhaps has already become clear, that SMB lends substance to moral disagreement. SMB plays a role in defining moral disagreement, but, as you say, SMB likely plays a lesser role when it comes to using moral language outside of disagreement.
If we agree that the SMB plays a crucial role in lending meaning to moral disagreement, then we can understand the nature of moral disagreement without appeal to any 'abstruse reasoning'. I argue that what we do when disagreeing is emphasize various parts of the SMB to the other person. In this picture, where moral language = universalizable preferences + eliciting disapproval + a shared subset of the SMB, where does abstruse reasoning enter? It only enters when a philosopher sees a family resemblance between moral disagreement and other sorts of epistemological disagreement, and thus feels the urge to bring in talk of abstruse reasoning. As described in the OP, for non-philosophers abstruse reasoning only matters as mediated by meta-reactions. In effect, reasoning constraints enter the picture as a subset of our universalizable preferences, but as such there's no basis for them to override our other object-level universalizable preferences. Of course, I use talk of preferences loosely here; I do believe that these preferences have vague intensities which may sometimes be compared, e.g. someone may feel their meta-reactions particularly strongly, and those preferences may then carry more weight because of this intensity of feeling.
I’m not sure if I know what you’re talking about by ‘impure things’. Sewage perhaps? I’m not sure what it means to have a moral aversion to sewage. Maybe you mean something like the aversion to the untouchable caste? I do not know enough about that to comment.
Independently of the meaning of 'impure', let me respond to "people routinely overcome and override this basic disposition": certainly people's moral beliefs often come into conflict, e.g. in trolley problems. I would describe most of these cases as involving multiple conflicting universalizable preferences. Sometimes one of those preferences is a meta-reaction, e.g. the 'call to universality', and if the meta-reaction is more salient or intense then perhaps it carries more weight than a 'basic disposition'. Let me stress again that I do not make a distinction between universalizable preferences which are 'basic dispositions' and those which I refer to as meta-reactions; these should be treated on an equal footing.
I’m afraid now the working week has begun again I’m not going to have so much time to continue responding, but thanks for the discussion.
I’m thinking of the various things which fall under the Purity/Disgust (or Sanctity/Degradation) foundation in Haidt’s Moral Foundations Theory. This includes a lot of things related to not eating or otherwise exposing yourself to things which elicit disgust, as well as a lot of sexual morality. Rereading the law books of the Bible gives a lot of examples. The sheer prevalence of these concerns in ancient morality, especially as opposed to modern concerns like promoting positive feeling, is also quite telling IMO. For more on the distinctive role of disgust in morality see here or here.
I’m not sure how broadly you’re construing ‘meta-reactions’, i.e. would this include basically any moral view which a person might reach based on the ordinary operation of their intuitions and reason and would all of these be placed on an equal footing? If so then I’m inclined to agree, but then I don’t think this account implies anything much at the practical level (e.g. how we should think about animals, population ethics etc.).
I may agree with this if, per my previous comment, SMB is construed very broadly, i.e. to mean roughly emphasising or making salient shared moral views (of any kind) to each other and persuading people to adopt new moral views. (See Wittgenstein on conversion for discussion of the latter.)
I think this may be misconstruing my reference to “abstruse reasoning” in the claim that “It doesn’t seem to me like we have any particular reason to privilege these basic intuitive responses as foundational, in cases where they conflict with our more abstruse reasoning.” Note that I don’t say anything about abstruse reasoning being “necessary to understand the nature of moral disagreement.”
I have in mind cases of moral thinking, such as the example I gave where we override disgust responses based on reflecting that they aren't actually morally valuable (I think this would include cases like population ethics, and judging that whether animals matter depends on whether they have the right kinds of capacities).
It now sounds like you might think that such reflections are on an “equal footing” with judgments that are more immediately related to basic intuitive responses, in which case there may be little or no remaining disagreement. There may be some residual disagreement if you think that such relatively rarefied reflections can’t count as meta-reflections/legitimate moral reasoning, but I don’t think that is the view which you are defending now. My sense is that more or less any moral argument could result from a process of people reflecting on their views and the views of others and seeking consistency, in which case it doesn’t seem to me like any line of moral argument is ruled out or called into question by your metaethical account. That is fine in my view since I think that it’s appropriate that philosophical reflections should ‘leave everything as it is.’
Thanks for the lively discussion! We've covered a lot of ground, so I plan to condense what was said into a follow-up blog post making similar points to the OP but taking into account all of your clarifications.
‘Meta-reactions’ are the subset of our universalizable preferences which express preferences over other preferences (and/or their relation). What it means to be ‘placed on equal footing’ is that all of these preferences are comparable. Which of them will take precedence in a certain judgement depends on the relative intensity of feeling for each preference. This stands in contrast to views such as total utilitarianism in which certain preferences are considered irrational and are thus overruled independently of the force with which we feel them.
The key point here is ‘seeking consistency’: my view is that the extent to which consistency constraints are morally relevant is contingent on the individual. Any sort of consistency only carries force insofar as it is one of the given individual’s universalizable preferences. In a way, this view does ‘leave everything as it is’ for non-philosophers’ moral debates. I also have no problem with a population ethicist who sees eir task as finding functions which satisfy certain population ethics intuitions. My view only conflicts with population ethics and animal welfare ethics insofar as ey take eir conclusions as a basis for language policing, e.g. when an ethicist claims eir preferred population axiology has implications for how we understand everyday uses of moral language.
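For concreteness, the simplest such function is the total utilitarian one (a standard textbook example; the symbols below are only for illustration): rank populations by summed welfare, so that the repugnant conclusion falls out as arithmetic, with a huge population Z of M lives at barely-positive welfare ε outranking a population A of N flourishing lives at welfare w whenever Mε > Nw:

\[
W(P) = \sum_{i \in P} w_i,
\qquad
W(Z) > W(A) \iff M\varepsilon > N w .
\]

Finding a W which blocks results like this while preserving other intuitive properties is, as I understand it, the population ethicist’s task, and I have no quarrel with it on those terms.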
Within my framework we may override disgust responses by e.g. observing that they are less strong than our other responses, or by observing that, unlike our other responses, they have multiple meta-reactions stacked against them (fairness, the ‘call to universality’, etc.) and that we feel those meta-reactions more strongly. I do not endorse coming up with a theory about moral value and then overriding our disgust responses because of the theoretical elegance or epistemological appeal of that theory. I’m not sure whether you have in mind the former or the latter case.
Thank you for following up, and sorry that I haven’t been able to respond as succinctly or clearly as I would’ve liked. I hope to write a follow-up post which more clearly lays out the flow of ideas between my comments here and the original blog post, as your comments have helped me see where my background assumptions are likely to differ from others’.
I see now that it would be better to take a step back and explain at a higher level where I’m coming from. My line of reasoning follows from the ideas of the later Wittgenstein: many words have meaning defined solely by their use. These words have no further, more precise meaning; there is no underlying rigid scientific, logical or analytic structure. Take, for example, ‘to expect’: what does it mean to “expect someone to ring your doorbell at 4pm”? The meaning is irreducibly a melange of criteria and is not well defined for edge cases, e.g. for an amnesiac. There’s a lot more to say here; see for example ‘Philosophical Investigations’ paragraphs 570-625.
That said, I’m perhaps closer to Quine’s ‘The Roots of Reference’ than to Wittgenstein when I emphasize the importance of figuring out how we first learn a word’s use. I believe that many (perhaps not all) words, such as ‘to expect’, moral language, etc., have some core use cases which are particularly salient thanks to our neurological wiring, everyday activities, childhood interactions, etc., and these use cases can help us draw a line between situations in which a word is well defined and situations in which its meaning breaks down.
Here’s a simple example: the command “Anticipate the past!” steps outside the boundaries of the meaning of ‘to anticipate’, because anticipating usually involves events in the future and thoughts/actions before the event. When it comes to moral language we have two problems: the first is to distinguish sensible uses of moral language from under-defined edge cases, and the second is to separate out uses of moral language which are better rewritten in other terms. Let me clarify this second case using ‘to anticipate’: ‘anticipate’ can mean to foresee, as in “He anticipated Carlsen’s move”, but also to look forward to, as in “He greatly anticipated the celebration”. If we want to clarify the first use case, it’s better to set aside the second and treat them separately. Here’s another example: “Sedol anticipated his opponent’s knowledge of opening theory by playing a novel opening.” If Sedol always plays novel openings, and says this game was nothing special, then that sentence is false. If Sedol almost never plays novel openings, but says “My opponent’s strength in opening theory was not on my mind”, what then? I would say the meaning of ‘to anticipate’ is simply under-defined in this case.
Although I can’t have done justice to Quine and Wittgenstein, let’s pretend I have, and I’ll return to your specific comments.
I disagree: there is no further meaning beyond the sequence of uses we learn for these words. Perhaps in the future we will discover that moral language has some natural scientific basis, as happened with water, but moral language strikes me as far more similar to expectation than to water.
Just as you can sometimes anticipate without explicitly thinking of the consequence, so too for people using moral language: they often do not explicitly think of these experiences, but their use of the words is still rooted in the relevant experiences (in a fuzzy way). Of course, some other uses of ‘right’ and ‘wrong’ are better seen as something else entirely, e.g. ‘right’ as used to refer to following a samurai’s code of honor. This is an important point, so I’ve elaborated on it in my other reply.
If this observation is rooted in experience, i.e. extrapolating from your experience of seeing people live under certain levels of inequality, then sure. Of course, since this extrapolation depends on those experiences, you should not be confident in extrapolating the rightness/wrongness of something solely on the basis of a particular Gini coefficient.
I do not claim that my framework supports the sort of normativity many philosophers (perhaps you too) are interested in. I do not believe talk of normative force is coherent, but I’d prefer to not go into that here. My claim is simply that my framework lets us coherently answer some questions I’m interested in. Put in different terms, I’d like to focus discussion on my argument ‘by its own lights’.
Thanks for your reply. I’m actually very sympathetic to Wittgenstein’s account of language: before I decided to move to an area with higher potential impact, I had been accepted to study for a PhD on the implications of Wittgensteinian metaphilosophy for ethics. (I wouldn’t use the term metaphilosophy in this context of course, since I was largely focused on the view expressed in PI 109 that “…we may not advance any kind of theory. There must not be anything hypothetical in our considerations. We must do away with all explanation, and description alone must take its place.”)
All that said, it seems we disagree in quite a few places.
I don’t think our use of language is limited to the kinds of cases through which we initially learn the use of particular terms. For example, we learn the use of numbers through exceptionally simple cases (“If I have one banana and then another banana, I have two bananas”) and later get trained in things like multiplication, but we then clearly go on to use mathematical language in much more complex and creative ways, which include extending the language radically. It would be a mistake to conclude that we can’t do these things because they go beyond the uses we initially learn; note that Wittgenstein doesn’t say this in his later work on the philosophy of mathematics either. I agree it’s a common Wittgensteinian move to say that our use of language breaks down when we extend it inappropriately past ordinary usage, but if you look at Wittgenstein’s treatment of mathematics it certainly does not tell mathematicians to stop doing the very complex mathematical speculation which is far removed from the ways in which we are initially trained in mathematics. Indeed, I think it’s anti-Wittgensteinian to attempt to interfere with or police the way people ordinarily use language in this way. Of course, the Wittgensteinian can call into question certain ways of thinking (e.g. that our ordinary mathematical practice implies Platonism), but we need to do careful philosophical work to highlight potential problems with specific ways of thinking. Fwiw, it seems to me like your conclusions stray into telling ordinary moral language users that they can’t use moral language (or think about moral considerations) in ways they otherwise do or would, though of course it would require more discussion of your precise position to determine this.
But that aside, it still seems to me to be the case that how we actually ordinarily use moral language is left quite open by your account of how we learn moral language, since you say it includes a mix of “reactions [which] include approval, preferences and beliefs.” That seems compatible, to me, with us coming to use moral language in a wide variety of ways. Of course, you could argue for a more specific genealogy of how we come to use moral language, explaining why we come to only (or at least primarily) use it to convey certain specific attitudes of (dis)approval or preferences or beliefs about preferences.
It seems like your own account of how we learn language involves us extending the use of moral language too: we first learn that bad things are disapproved of (e.g. our parents disapprove of us burning ourselves in fires), then we “extend our use of moral language beyond the[se] simple cases” to introduce preferences, and (at some point) beliefs. So if you allow that much, it doesn’t seem clear why we should think that our uses of moral language are still properly limited to the kinds of uses which are (ex hypothesi) part of our initial training. It seems quite conceivable to me that we initially learn moral language in something like the way you describe, but then collectively move on to almost any number of more complex uses, such as considering what we would collectively endorse in such-and-such scenarios. And once we go that far (which I think we should, in order to adequately account for how we see people actually using moral language), I don’t think we’re in a position to rule out as impossible baroque speculations about population ethics etc.
Well, I for one would’ve liked to have read the thesis! Wonderful; I suppose then most of my background talk was redundant. When it comes to mathematics, I found the arguments in Kripke’s ‘Wittgenstein on Rules and Private Language’ quite convincing. I would love to see someone do an in-depth translation applying everything Kripke says about arithmetic to total utilitarianism. I think this would be quite useful, and perhaps work well with my ideas here.
Yes, I agree that what I’ve been doing looks a lot like language policing, so let me clarify. Rather than claiming talk of population ethics etc. is invalid or incoherent, it would be more accurate to say I see it as apparently baseless and that I do not fully understand the connection with our other uses of moral language. When others choose to extend their moral language to population ethics, their language is likely coherent within their community. Probably, they have found a group within which they share similar inductive biases, which endow their novel uses of moral language with reference. However, insofar as they expect me to follow along with this extension (indeed, insofar as they expect their conclusions about population ethics to have force for non-population-ethicists) they must explain how their extension of moral language follows from our shared ostensive basis for moral language and our shared inductive biases. My arguments have attempted to show that our shared ostensive basis for moral language does not straightforwardly support talk of population ethics, because such talk does not share the same basis in negatively/positively valenced emotions.
Put in more Wittgensteinian terms, population ethics language bears a family resemblance to our more mundane use of moral language, but it does not share the universal motivating force provided by our common emotional reactions to e.g. a hit to the head. Of course, some philosophers probably react viscerally and emotionally to talk of the repugnant conclusion; in that case, for them the repugnant conclusion carries a force that it does not for others. So to return to the policing question: I am not policing insofar as I agree that their language is meaningful and provides insight to their community. Claims like “Total utilitarianism better captures our population ethics intuitions than …” can be true or false. However, any move to then say “Your use of moral language should be replaced by uses which agree with our population ethics intuitions” seems baseless, and could perhaps be described as an act of policing on the part of the speaker.
>When it comes to mathematics, I found the arguments in Kripke’s ‘Wittgenstein on Rules and Private Language’ quite convincing. I would love to see someone do an in-depth translation applying everything Kripke says about arithmetic to total utilitarianism. I think this would be quite useful, and perhaps work well with my ideas here.
That makes sense. I personally think that “Kripkenstein’s” views are quite different from Wittgenstein’s own views on mathematics.
It seems there’s a bit of a disanalogy between the case of simple addition and the case of moral language. In the case of addition we observe widespread consensus (no-one feels any inclination to start using quus, for whatever reason). Conversely, it seems to me that moral discourse is characterised by widespread disagreement: we can sensibly disagree about whether it’s right or wrong to torture, whether it’s right or wrong for a wrongdoer to suffer, whether it’s good to experience pleasure if it’s unjustly earned, and so on. This suggests to me that moral terms aren’t defined by reference to certain concrete things we agree are good.
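(For readers who haven’t met the example: ‘quus’ is Kripke’s deviant arithmetic function, defined roughly as

\[
x \oplus y =
\begin{cases}
x + y & \text{if } x, y < 57 \\
5 & \text{otherwise,}
\end{cases}
\]

and his puzzle is that nothing about our finite past use of ‘+’ seems to settle that we meant plus rather than quus.)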
>Yes, I agree that what I’ve been doing looks a lot like language policing, so let me clarify. Rather than claiming talk of population ethics etc. is invalid or incoherent, it would be more accurate to say I see it as apparently baseless and that I do not fully understand the connection with our other uses of moral language… insofar as they expect me to follow along with this extension (indeed, insofar as they expect their conclusions about population ethics to have force for non-population-ethicists) they must explain how their extension of moral language follows from our shared ostensive basis for moral language and our shared inductive biases. My arguments have attempted to show that our shared ostensive basis for moral language does not straightforwardly support talk of population ethics, because such talk does not share the same basis in negatively/positively valenced emotions.
OK, so it sounds like the core issue here is whether moral terms are defined at their core by reference to valenced emotions, which I’ll continue discussing in the other thread.