Hey, thank you very much for the summary!
I have two questions:
(1) How should one select which moral theories to use in one’s evaluation of the expected choiceworthiness of a given action?
“All” seems impossible, supposing the set of moral theories is indeed infinite; “whatever you like” seems to justify basically any act by just selecting or inventing the right subset of moral theories; “take the popular ones” seems very limited (admittedly, I don’t have an argument against that option, but is there a positive argument for it?)
(2) How should one assign probabilities to moral theories?
I realise that these are probably still controversial issues in philosophy, so I don’t expect a solution. Rather, any (even speculative) ideas on how to resolve them would be great!
(2) How should one assign probabilities to moral theories?

(I’ll again just provide some thoughts rather than actual, direct answers.)
Here I’d again say that I think an analogous question can be asked in the empirical context, and I think it’s decently thorny in that context too. In practice, I think we often do a decent job of assigning probabilities to many empirical claims. But I don’t know if we have a rigorous theoretical understanding of how we do that, or of why that’s reasonable, or at least of how to do it in general. (I’m not an expert there, though.)
And I think there are some types of empirical claims where it’s pretty hard to say how we should do this.[1] For some examples I discussed in another post:
What are the odds that “an all-powerful god” exists?
What are the odds that “ghosts” exist?
What are the odds that “magic” exists?
What process do we use to assign probabilities to these claims? Is it a reasonable process, with good outputs? (I do think we can use a decent process here, as I discuss in that post; I’m just saying it doesn’t seem immediately obvious how one does this.)
I do think this is all harder in the moral context, but some of the same basic principles may still apply.
In practice, I think people often do something like arriving at an intuitive sense of the likelihood of the different theories (or maybe how appealing they are). And this in turn may be based on reading, discussion, and reflection. People also sometimes/often update on what other people believe.
I’m not sure if this is how one should do it, but I think it’s a common approach, and it’s roughly what I’ve done myself.
[1] People sometimes use terms like Knightian uncertainty, uncertainty as opposed to risk, or deep uncertainty for those sorts of cases. My independent impression is that those terms often imply a sharp binary where reality is more continuous, and it’s better to instead talk about degrees of robustness/resilience/trustworthiness of one’s probabilities. Very rough sketch: sometimes I might be very confident that there’s a 0.2 probability of something, whereas other times my best guess about the probability might be 0.2, but I might be super unsure about that and could easily change my mind given new evidence.
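(If it helps to make that footnote concrete, here’s a toy sketch, just my own made-up example rather than anything from the literature, of two credences with the same best-guess probability of 0.2 but very different resilience, modelled as Beta distributions.)

```python
# Toy illustration of resilient vs. non-resilient probabilities (made-up example).
# Both beliefs start with the same best-guess probability of 0.2, but they respond
# very differently to the same new evidence.

def beta_mean(a: float, b: float) -> float:
    """Mean of a Beta(a, b) distribution, i.e. the current best-guess probability."""
    return a / (a + b)

fragile = (2, 8)        # as if based on only ~10 observations' worth of evidence
resilient = (200, 800)  # as if based on ~1,000 observations' worth of evidence

print(beta_mean(*fragile), beta_mean(*resilient))  # 0.2 and 0.2

# Observe 5 successes in 5 new trials and do the standard Beta-Bernoulli update.
print(beta_mean(fragile[0] + 5, fragile[1]))      # ~0.47: the fragile credence swings a lot
print(beta_mean(resilient[0] + 5, resilient[1]))  # ~0.204: the resilient credence barely moves
```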
Glad you found the post useful :)
Yeah, I think those are both very thorny and important questions. I’d guess that no one would have amazing answers to them, but that various other EAs would have somewhat better answers than me. So I’ll just make a couple quick comments.
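(Just so we’re talking about the same thing: by “expected choiceworthiness” I’ll assume the question has in mind roughly the standard “maximise expected choiceworthiness” sum, something like

$$\mathrm{EC}(A) = \sum_i p(T_i)\,\mathrm{CW}_{T_i}(A),$$

where $p(T_i)$ is one’s credence in moral theory $T_i$ and $\mathrm{CW}_{T_i}(A)$ is how choiceworthy $T_i$ rates action $A$. That’s just a rough statement of the approach, and it sets aside the further thorny issue of whether choiceworthiness is even comparable across theories.)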
(1) How should one select which moral theories to use in one’s evaluation of the expected choiceworthiness of a given action?

I think we could ask an analogous question about how to select which hypotheses about the world/future to use in one’s evaluation of the expected value of a given action, or just in evaluating what will happen in future in general. (I.e., in the empirical context, rather than the moral/normative context.)
For example, if I want to predict the expected number of readers of an article, I could think about how many readers it’ll get if X happens and how many it’ll get if Y happens, and then think about how likely X and Y seem. X and Y could be things like “Some unrelated major news event happens to happen on the day of publication, drawing readers away”, or “Some major news event that’s somewhat related to the topic of the article happens soon-ish after publication, boosting attention”, or “The article is featured in some newsletter/roundup.”
But how many hypotheses should I consider? What about pretty unlikely stuff, like Obama mentioning the article on TV? What about really outlandish stuff to which we still can’t really assign a probability of precisely 0, like a new religion forming with that article as one of its sacred texts?
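(To make that example concrete, here’s a toy sketch of the kind of calculation I have in mind. The scenarios, probabilities, and reader counts are all invented for illustration, and which scenarios deserve a row at all is exactly the open question.)

```python
# Toy expected-value calculation for the article-readers example.
# Every number here is made up, and the scenarios are treated as mutually
# exclusive and exhaustive purely for simplicity.
scenarios = {
    # hypothesis: (probability, readers if that hypothesis holds)
    "baseline (nothing unusual happens)":              (0.90, 1_000),
    "unrelated major news event on publication day":   (0.06,   600),
    "related major news event soon after publication": (0.03, 5_000),
    "featured in a newsletter/roundup":                (0.01, 3_000),
    # ...and where do we stop? "Obama mentions it on TV", the new-religion
    # case, and far weirder hypotheses all get some tiny nonzero probability.
}

# Sanity check: the (made-up) probabilities should sum to 1.
assert abs(sum(p for p, _ in scenarios.values()) - 1.0) < 1e-9

expected_readers = sum(p * readers for p, readers in scenarios.values())
print(expected_readers)  # 1116.0 with these made-up numbers
```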
Now, that response doesn’t actually answer the question at all! I don’t know how this problem is addressed in the empirical context. But I imagine people have written and thought a bunch about it in that context, and that what they’ve said could probably be ported over into the moral context.
(It’s also possible that the analogy breaks down for some reason I haven’t considered.)