I see this criticism a lot, but I don’t understand where it cashes out. In the 50% case where moral realism is false, the expected value of all actions is zero. So the expected value of our actions is determined only by what happens in the 50% case where moral realism is true, and shrinking the EV of all actions by 50% doesn’t change our ordering of which actions have the highest EV. More generally, beyond EV-based moralities, any morality that proposes an ordering of actions will have that ordering unchanged by any <100% probability that moral realism is false. So why does it matter if moral realism is false with probability 1% or 50% or 99%?
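A minimal sketch of the arithmetic behind this (the symbols $p$ and $\mathrm{EV}_{\text{realism}}$ are my notation, not from the comment): if $p$ is the probability that moral realism is true, then for any action $a$,

$$\mathrm{EV}(a) = p \cdot \mathrm{EV}_{\text{realism}}(a) + (1 - p) \cdot 0 = p \cdot \mathrm{EV}_{\text{realism}}(a),$$

and since multiplying every action's value by the same positive constant $p$ preserves the ordering, the ranking of actions is identical for any $p > 0$.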
Admittedly that is a good argument against the idea that moral realism matters very much, though I would say that the EV of your actions can look very different depending on your perspective (if moral realism is false).
Also, this is a case where non-consequentialist moralities handle probability badly: the position effectively demands an infinite amount of evidence before updating one’s view away from the ordering, which is equivalent to demanding a mathematical proof that you’re wrong.
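To make the “infinite evidence” point concrete (this is a standard Bayesian framing, not spelled out in the comment itself): under Bayes’ rule the posterior odds are the prior odds times the likelihood ratio,

$$\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(H)}{P(\neg H)} \cdot \frac{P(E \mid H)}{P(E \mid \neg H)},$$

so no finite sequence of observations can drive $P(H)$ all the way to 0 unless some observation is literally impossible under $H$. A view that only changes at probability exactly 0 is therefore a view that no amount of evidence can ever move.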