Briefly: If you think that any X outweighs any Y, then you seem forced to believe that a gamble offering any positive probability of X, no matter how tiny, outweighs any Y for certain.
This is true for utility/social welfare functions that are additive even over uncertainty (i.e., where you rank gambles by the expected value of an additive total), and maybe for some other classes, but it isn't true in general. See this thread of mine.
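A minimal sketch of why additivity forces the conclusion, using my own notation rather than anything from the linked thread: suppose gambles are ranked by the expected value of an additive value function $v$, with the status quo normalized to $v(\varnothing) = 0$ and $v(Y) > 0$. If any X outweighs any amount of Y, then $v(X) > n \, v(Y)$ for every $n$. So for any probability $p > 0$, pick $n > 1/p$:

$$p \, v(X) > p \, n \, v(Y) > v(Y),$$

and the gamble (X with probability $p$, nothing otherwise) beats a certain Y, however small $p$ is. The inference fails once additivity over uncertainty is dropped. For instance (an illustrative bounded function of my own choosing, not one proposed in the thread), rank gambles by the expectation of $u(w) = 1 - e^{-w}$ applied to total welfare $w$. X can still outweigh any amount of Y with certainty, yet the gamble is worth at most $p \cdot \sup u = p$, which falls below $u(Y) = 1 - e^{-w_Y} > 0$ whenever $p < 1 - e^{-w_Y}$. So a certain Y beats a sufficiently improbable X.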
This conclusion seems wrong, and I don't think it holds up under moral uncertainty, either. There are ways of avoiding the problem, but they run into other issues.
Is this related to lexical amplifications of non-lexical theories like classical utilitarianism (CU) under maximizing expected choiceworthiness (MEC)? Or to another approach to moral uncertainty? My impression from your co-authored book on moral uncertainty is that you endorse MEC with intertheoretic comparisons (I get the impression Ord endorses a parliamentary approach from his other work, but I don't know about Bykvist).
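(Roughly, by MEC I mean ranking options by their expected choiceworthiness across theories, $EC(A) = \sum_i c_i \, CW_i(A)$, where $c_i$ is your credence in theory $T_i$ and $CW_i$ is that theory's choiceworthiness function; a lexically amplified version of CU would then assign stakes that dominate the other terms in this sum regardless of the credences. The notation here is my own paraphrase, not the book's.)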
Thank you for clarifying!