Hmm, I guess at first glance it seems like that’s making moral uncertainty seem much weirder and harder than it really is. I think moral uncertainty can be pretty usefully seen as similar to empirical uncertainty in many ways. And on empirical matters, we constantly have some degree of credence in each of multiple contradictory possibilities, and that’s clearly how it should be (rather than us being certain on any given empirical matter, e.g. whether it’ll rain tomorrow or what the population of France is). Furthermore, we clearly shouldn’t just act on what’s most likely, but rather do something closer to expected value reasoning.
There’s debate over whether we should do precisely expected value reasoning in all cases, but it’s clear, for example, that it’d be a bad idea to accept a 49% chance of being tortured for 10 years in exchange for a 100% chance of getting a dollar. It’s clear we shouldn’t think “Well, it’s unlikely we’ll get tortured, so we should totally ignore that risk.”
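As a toy illustration, the comparison above can be sketched in a few lines of code. The utility numbers are made-up assumptions of mine purely for illustration; nothing in the discussion pins down real magnitudes.

```python
def expected_value(outcomes):
    """Compute expected utility from (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Assumed, purely illustrative utilities: ten years of torture is hugely
# negative; the dollar is a tiny positive that is gained either way.
ev_accept = expected_value([(0.49, -1_000_000.0), (0.51, 0.0)]) + 1.0
ev_decline = 0.0

# "Act only on what's most likely" would ignore the torture branch and
# see just the guaranteed dollar; expected value reasoning does not.
print(ev_accept)  # deeply negative, so the gamble should be declined
```

The point survives any reasonable choice of numbers: as long as the torture outcome is vastly worse than a dollar is good, the 49% branch dominates the expectation.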
And I don’t think it feels weird, or leads to absurdities or incoherence, to simultaneously think I might get a job offer from an application but probably won’t, or might die if I don’t wear a seatbelt but probably won’t, and to take those chances of upsides or downsides into account when acting.
Likewise, in the moral realm, if I thought it was 49% likely that a particular animal is a moral patient, then it seems clear to me that I shouldn’t act in a way that would cause suffering to that animal (if it is indeed a moral patient) in exchange for just a small amount of pleasure for myself.
Would you disagree with that? Maybe I’m misunderstanding your view?
I’m a bit less confident of this in the case of metaethics, but it sounded like you were against taking even just moral uncertainty into account?
However, I haven’t read MacAskill’s book, and there might be arguments there that would convince me if I had.
You might enjoy some of the posts tagged moral uncertainty, for shorter versions of some of the explanations and arguments, including my attempt to summarise ideas from MacAskill’s thesis (which was later adapted into the book).
So I agree with you that we should apply expected value reasoning in most cases. The cases in which I don’t think we should use expected value reasoning are hinge propositions: the propositions on which entire worldviews stand or fall, such as fundamental metaethical propositions, or scientific paradigms. The reason these are special is that the grounds for belief in these propositions are themselves affected by believing them.
I think we should apply expected value reasoning in ethics too. However, I don’t think we should apply it to hinge propositions in ethics. The hinginess of a proposition is a matter of degree. The question of whether a particular animal is a moral patient does not seem very hingy to me, so if it were possible to assess the question in isolation, I would not object to the way of thinking about it you sketch above.
However, logic binds questions like these into big bundles through the justifications we give for them. On the issue of animal moral patiency, I tend to think that there must be a property in human and non-human animals that justifies our moral attitudes towards them. Many think that this should be the capacity to feel pain; so if I think this, and think there is a 49% chance that the animal feels pain, then I should apply expected value reasoning when considering how to relate to the animal. However, the question of whether the capacity to feel pain is the central property we should use to navigate our moral lives is hingier, and I think it is less reasonable to apply expected value reasoning to that question (because this view and reasonable alternatives lead to contradictory implications).
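To make the non-hingy half of this concrete, here is a quick sketch of the expected value calculation endorsed above for the 49%-chance-of-pain case. The value magnitudes are purely assumed for illustration and not taken from the discussion.

```python
# Assumed, illustrative numbers for an act that causes suffering to an
# animal if (and only if) it can feel pain, in exchange for a small
# pleasure for the actor.
p_feels_pain = 0.49          # credence that the animal feels pain
suffering_if_pain = -100.0   # assumed disvalue if it does
my_small_pleasure = 1.0      # assumed value of the pleasure gained

ev_act = p_feels_pain * suffering_if_pain + my_small_pleasure
ev_refrain = 0.0

# Even though "no pain" is the (slightly) more likely hypothesis, the
# expected value of acting is strongly negative, so we should refrain.
print(ev_act < ev_refrain)
```

Note that this sketch only works once a theory (here, pain as the morally relevant property) is already fixed; the worry in the paragraph above is precisely that the same move is less reasonable one level up, across rival hinge-level theories.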
I am sorry if this isn’t expressed as clearly as one would hope. I’ll have a proper look into your and MacAskill’s views on moral uncertainty at some point; then I might try to articulate all of this more clearly, and revise it in light of the arguments I haven’t yet considered.