I think this means that the rational metaethical choice for me is realism, and that I should believe this fully (unless I have reason to believe that there is further evidence that I haven't considered or don't understand) [emphasis added]
Are you saying you should act as though moral realism is 100% likely, even though you feel only slightly more convinced of it than of antirealism? That doesn't seem to make sense to me? It seems like the most reasonable approaches to metaethical uncertainty would involve considering not just "your favourite theory" but also other theories you assign nontrivial credence to, analogous to the most reasonable-seeming approaches to moral uncertainty.
Cool! Thank you for the candid reply, and for taking this seriously. Yes, for questions such as these I think one should act as though the most likely theory is true. That is, my current view is contrary to MacAskill's view on this (I think). However, I haven't read his book, and there might be arguments there that would convince me if I had.
The most forceful considerations driving my own thinking on this come from sceptical worries in epistemology. In "brain in a vat" scenarios, there are typically some slight considerations that tip in favor of realism about everything you believe. Similar worries appear in the case of conspiracy theories, where the mainstream view tends to have more and stronger supporting reasons, but in some cases it isn't obvious that the conspiracy is false, even though, all things considered, one should believe that it is. These theories/propositions, as well as metaethical propositions, are sometimes called hinge propositions in philosophy, because entire worldviews hinge on them.
So empirically, I don't think that there is a way to act and believe in accordance with multiple worldviews at the same time. One may switch between worldviews, but it isn't possible to inhabit many worlds at the same time. Rationally, I don't think that one ought to act and believe in accordance with multiple worldviews, because they are likely to contradict each other in multiple ways, and would yield absurd implications if taken seriously. That is, absurd implications relative to everything else you believe, which are the ultimate grounds on which you judged the relative weights of the reasons bearing on the hinge proposition to start with. Thinking in this way is called epistemological coherentism in philosophy, and it is a dominant view in contemporary epistemology. That does not mean it's true, but it does mean that it should be taken seriously.
Hmm, I guess at first glance it seems like that's making moral uncertainty seem much weirder and harder than it really is. I think moral uncertainty can be pretty usefully seen as similar to empirical uncertainty in many ways. And on empirical matters, we constantly have some degree of credence in each of multiple contradictory possibilities, and that's clearly how it should be (rather than us being certain on any given empirical matter, e.g. whether it'll rain tomorrow or what the population of France is). Furthermore, we clearly shouldn't just act on what's most likely, but rather do something closer to expected value reasoning.
There's debate over whether we should do precisely expected value reasoning in all cases, but it's clear, for example, that it'd be a bad idea to accept a 49% chance of being tortured for 10 years in exchange for a 100% chance of getting a dollar; it's clear we shouldn't think "Well, it's unlikely we'll get tortured, so we should totally ignore that risk."
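To make the structure of that comparison concrete, here's a minimal sketch of the expected value calculation. All the numbers are made-up assumptions purely for illustration; nothing in the example fixes the actual utilities:

```python
# Toy expected value comparison for the torture-vs-dollar gamble.
# All utility numbers are hypothetical, chosen only to illustrate the structure.

p_torture = 0.49            # chance of 10 years of torture if the gamble is accepted
u_torture = -10_000_000.0   # assumed (huge) disutility of that outcome
u_dollar = 1.0              # assumed utility of the guaranteed dollar

ev_accept = p_torture * u_torture + 1.0 * u_dollar   # expected value of accepting
ev_refuse = 0.0                                      # expected value of declining

print(ev_accept)              # -4899999.0
print(ev_accept > ev_refuse)  # False: the unlikely-but-terrible outcome dominates
```

The point is just that acting on the most likely outcome alone ("probably no torture, so take the dollar") throws away exactly the information that makes the gamble a bad one.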
And I don't think it feels weird or leads to absurdities or incoherence to simultaneously think I might get a job offer due to an application but probably won't, or might die if I don't wear a seatbelt but probably won't, and take those chances of upsides or downsides into account when acting?
Likewise, in the moral realm, if I thought it was 49% likely that a particular animal is a moral patient, then it seems clear to me that I shouldn't act in a way that would cause suffering to the animal, if it is one, in exchange for just a small amount of pleasure for me.
Would you disagree with that? Maybe I'm misunderstanding your view?
I'm a bit less confident of this in the case of metaethics, but it sounded like you were against taking even just moral uncertainty into account?
However, I haven't read his book, and there might be arguments there that would convince me if I had.
You might enjoy some of the posts tagged moral uncertainty, for shorter versions of some of the explanations and arguments, including my attempt to summarise ideas from MacAskill's thesis (which was later adapted into the book).
So I agree with you that we should apply expected value reasoning in most cases. The cases in which I don't think we should use expected value reasoning are those involving hinge propositions: the propositions on which entire worldviews stand or fall, such as fundamental metaethical propositions or scientific paradigms. The reason these are special is that the grounds for belief in these propositions are also affected by believing them.
Likewise, in the moral realm, if I thought it was 49% likely that a particular animal is a moral patient, then it seems clear to me that I shouldn't act in a way that would cause suffering to the animal, if it is one, in exchange for just a small amount of pleasure for me.
Would you disagree with that? Maybe I'm misunderstanding your view?
I think we should apply expected value reasoning in ethics too. However, I don't think we should apply it to hinge propositions in ethics. The hinginess of a proposition is a matter of degree. The question of whether a particular animal is a moral patient does not seem very hingy to me, so if it were possible to assess the question in isolation I would not object to the way of thinking about it you sketch above.
However, logic binds questions like these into big bundles through the justifications we give for them. On the issue of animal moral patiency, I tend to think that there must be a property in human and non-human animals that justifies our moral attitudes towards them. Many think that this should be the capacity to feel pain; so if I think this, and think there is a 49% chance that the animal feels pain, then I should apply expected value reasoning when considering how to relate to the animal. However, the question of whether the capacity to feel pain is the central property we should use to navigate our moral lives is hingier, and I think that it is less reasonable to apply expected value reasoning to this question (because this view and reasonable alternatives to it lead to contradictory implications).
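As a toy illustration of the non-hingy layer of that reasoning (all numbers here are hypothetical assumptions, not anything either of us has argued for): conditional on accepting that pain capacity is the morally relevant property, the expected value calculation might look like this:

```python
# Expected value of a convenient-but-possibly-harmful action toward the animal,
# *conditional on* "capacity to feel pain is what matters morally".
# The specific numbers are made up purely to show the structure.

p_feels_pain = 0.49     # credence that the animal can feel pain
harm_if_pain = -100.0   # assumed badness of the suffering caused, if it can
benefit_to_me = 5.0     # assumed small benefit I get either way

ev_action = p_feels_pain * harm_if_pain + benefit_to_me
print(ev_action)  # -44.0: the discounted harm still swamps the small benefit
```

The disagreement above is about the level up from this: whether the same expected-value move is legitimate for the hingier question of whether pain capacity is the right property in the first place.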
I am sorry if this isn't expressed as clearly as one would hope. I'll have a proper look into your and MacAskill's views on moral uncertainty at some point; then I might try to articulate all of this more clearly, and revise on the basis of the arguments I haven't considered yet.