Ah, good! Hmm, then this means that you really do find the arguments against normative realism convincing! That is quite interesting; I’ll delve into those links you mentioned sometime to have a look. As is often the case in philosophy, though, I suspect the low credence is explained not so much by the strength of the arguments as by the understanding of the target concept or theory (normative realism), especially since you say that you are quite unsure what it even means. There are concepts of normativity to which I would give a 0.01 credence as well, but there are also concepts of normativity on which I think normative realism is trivially true. It seems to me that you could square your commitments and restore coherence to your belief set with some good old-fashioned conceptual analysis of the very notion of normativity itself. That is, anyway, what I would do in this epistemic state. I myself think that you can get most of the ethics in the column with quite modest concepts of normativity that are compatible with a modern scientific worldview!
I updated the links, thanks!
So I agree with you that we should apply expected value reasoning in most cases. The cases in which I don’t think we should use it are hinge propositions: the propositions on which entire worldviews stand or fall, such as fundamental metaethical propositions or scientific paradigms. The reason these are special is that the grounds for belief in them are themselves affected by believing them.
I think we should apply expected value reasoning in ethics too; I just don’t think we should apply it to hinge propositions in ethics. The hinginess of a proposition is a matter of degree. The question of whether a particular animal is a moral patient does not seem very hingy to me, so if it were possible to assess the question in isolation, I would not object to the way of thinking about it you sketch above.
However, logic binds questions like these into big bundles through the justifications we give for them. On the issue of animal moral patiency, I tend to think that there must be some property of human and non-human animals that justifies our moral attitudes towards them. Many think this should be the capacity to feel pain; so if I think this, and think there is a 49% chance that the animal feels pain, then I should apply expected value reasoning when considering how to relate to the animal. The question of whether the capacity to feel pain is the central property we should use to navigate our moral lives is, however, hingier, and I think it is less reasonable to apply expected value reasoning to it (because this property and its reasonable alternatives lead to contradictory implications).
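To spell out the expected value step in that example (a rough sketch; the disvalue term $H$ is just an illustrative placeholder I am introducing here, not something from the discussion above): write $C = 0.49$ for my credence that the animal feels pain, and $H > 0$ for the moral disvalue of harming it if it does. The expected disvalue of the harmful action is then

\[
\mathbb{E}[\text{disvalue}] = C \cdot H = 0.49\,H,
\]

so on this way of thinking I should avoid the action whenever the cost of avoidance is less than $0.49\,H$, even though my credence that the animal feels pain is below 50%.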
I am sorry if this isn’t expressed as clearly as one would hope. I’ll have a proper look into your and MacAskill’s views on moral uncertainty at some point; then I might try to articulate all of this more clearly and revise it in light of the arguments I haven’t considered yet.