(2) How should one assign probabilities to moral theories?
(I'll again just provide some thoughts rather than actual, direct answers.)
Here I'd again say that an analogous question can be asked in the empirical context, and I think it's decently thorny there too. In practice, I think we often do a decent job of assigning probabilities to many empirical claims. But I don't know whether we have a rigorous theoretical understanding of how we do that, or of why that's reasonable, or at least of how to do it in general. (I'm not an expert there, though.)
And I think there are some types of empirical claims where it's pretty hard to say how we should do this.[1] For some examples I discussed in another post:
What are the odds that "an all-powerful god" exists?
What are the odds that "ghosts" exist?
What are the odds that "magic" exists?
What process do we use to assign probabilities to these claims? Is it a reasonable process, with good outputs? (I do think we can use a decent process here, as I discuss in that post; I'm just saying it doesn't seem immediately obvious how one does this.)
I do think this is all harder in the moral context, but some of the same basic principles may still apply.
In practice, I think people often do something like arriving at an intuitive sense of how likely the different theories are (or maybe how appealing they are). And this in turn may be based on reading, discussion, and reflection. People also sometimes/often update on what other people believe.
I'm not sure if this is how one should do it, but I think it's a common approach, and it's roughly what I've done myself.
[1] People sometimes use terms like Knightian uncertainty, uncertainty as opposed to risk, or deep uncertainty for those sorts of cases. My independent impression is that those terms often imply a sharp binary where reality is more continuous, and it's better to instead talk about degrees of robustness/resilience/trustworthiness of one's probabilities. Very rough sketch: sometimes I might be very confident that there's a 0.2 probability of something, whereas other times my best guess about the probability might be 0.2, but I might be super unsure about that and could easily change my mind given new evidence.
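To make that distinction a bit more concrete, here's a minimal sketch (my illustration, not from the post) that models the two cases as Beta-distributed credences: both start with the same best guess of 0.2, but one is backed by far more evidence, so new observations barely move it.

```python
# Sketch of "fragile" vs. "resilient" probability estimates, modelled as
# Beta(alpha, beta) credences. Both have mean 0.2, but they respond very
# differently to the same new evidence.

def posterior_mean(alpha: float, beta: float, successes: int, failures: int) -> float:
    """Best-guess probability after updating a Beta(alpha, beta) credence on new evidence."""
    return (alpha + successes) / (alpha + beta + successes + failures)

fragile = (2, 8)        # mean 0.2, but based on little evidence
resilient = (200, 800)  # mean 0.2, backed by lots of evidence

new_evidence = (5, 0)   # five new observations, all favoring the claim

print(round(posterior_mean(*fragile, *new_evidence), 2))    # 0.47 -- estimate swings a lot
print(round(posterior_mean(*resilient, *new_evidence), 2))  # 0.2  -- estimate barely moves
```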