(2) how should one assign probabilities to moral theories?
(I’ll again just provide some thoughts rather than actual, direct answers.)
Here I’d again say that I think an analogous question can be asked in the empirical context, and I think it’s decently thorny in that context too. In practice, I think we often do a decent job of assigning probabilities to many empirical claims. But I don’t know if we have a rigorous theoretical understanding of how we do that, or of why that’s reasonable, or at least of how to do it in general. (I’m not an expert there, though.)
And I think there are some types of empirical claims where it’s pretty hard to say how we should do this.[1] For some examples I discussed in another post:
What are the odds that “an all-powerful god” exists?
What are the odds that “ghosts” exist?
What are the odds that “magic” exists?
What process do we use to assign probabilities to these claims? Is it a reasonable process, with good outputs? (I do think we can use a decent process here, as I discuss in that post; I’m just saying it doesn’t seem immediately obvious how one does this.)
I do think this is all harder in the moral context, but some of the same basic principles may still apply.
In practice, I think people often do something like arriving at an intuitive sense of how likely the different theories are (or maybe how appealing they are). And this in turn may be based on reading, discussion, and reflection. People also sometimes (perhaps often) update on what other people believe.
I’m not sure if this is how one should do it, but I think it’s a common approach, and it’s roughly what I’ve done myself.
[1] People sometimes use terms like Knightian uncertainty, uncertainty as opposed to risk, or deep uncertainty for those sorts of cases. My independent impression is that those terms often imply a sharp binary where reality is more continuous, and it’s better to instead talk about degrees of robustness/resilience/trustworthiness of one’s probabilities. Very rough sketch: sometimes I might be very confident that there’s a 0.2 probability of something, whereas other times my best guess about the probability might be 0.2, but I might be super unsure about that and could easily change my mind given new evidence.
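To make that sketch a bit more concrete, here's a toy model (my own illustration, not anything from the original discussion): represent each person's belief about a binary event as a Beta distribution over its frequency. Two beliefs can share the same best-guess probability of 0.2 while differing enormously in resilience, i.e., in how much the same new evidence moves them.

```python
# Toy model of probability "resilience": two observers both say 0.2,
# but one belief is backed by much more (implicit) evidence.

def beta_mean(alpha: float, beta: float) -> float:
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

def update(alpha: float, beta: float, successes: int, failures: int):
    """Conjugate Bayesian update for Bernoulli observations."""
    return alpha + successes, beta + failures

# Resilient belief: like having seen ~1000 prior observations.
resilient = (200.0, 800.0)
# Fragile belief: like having seen only ~10 prior observations.
fragile = (2.0, 8.0)

# Both start with the same best-guess probability of 0.2.
assert abs(beta_mean(*resilient) - 0.2) < 1e-9
assert abs(beta_mean(*fragile) - 0.2) < 1e-9

# Both then see the same new evidence: the event occurs 3 times in 3 trials.
resilient_post = update(*resilient, successes=3, failures=0)
fragile_post = update(*fragile, successes=3, failures=0)

print(round(beta_mean(*resilient_post), 3))  # barely moves: 0.202
print(round(beta_mean(*fragile_post), 3))    # moves a lot: 0.385
```

The point of the sketch is just that "a 0.2 probability" underspecifies the epistemic state: the two posteriors above diverge sharply after identical evidence, which is the continuous notion of trustworthiness the footnote gestures at, rather than a binary of "risk" vs. "Knightian uncertainty".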