I’m aware of Moral Uncertainty and the moral parliament model, as well as this (incomplete) sequence by MichaelA, but I’m not sure what concrete actions moral uncertainty entails.
What specific actions should someone take if they are highly uncertain about the validity of different ethical theories?
Avoid the actions that are endorsed by your favoured moral theory but that most severely violate other moral theories.
But for every action that is considered moral under one moral theory, you can construct an equal and opposite moral theory that says that action is not moral.
Maybe instead of counting just any other moral theory, the rule would have to be restricted to ‘significant moral theories’ by some metric (like popularity?). But that has its flaws too.
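To make that thresholded rule a bit more concrete, here is a minimal Python sketch. Everything in it is a hypothetical illustration: the theory names, the significance weights, the threshold, and especially the shared choice-worthiness scale (intertheoretic comparability of value is itself a contested assumption).

```python
# A maximin-style reading of "avoid actions that most severely violate
# other significant theories". All names, scores, weights, and the
# threshold below are invented for illustration.

# Choice-worthiness of each action under each theory, on an assumed
# shared -10..10 scale.
scores = {
    "eat_meat":     {"utilitarianism": 2, "deontology": -8, "virtue_ethics": -3},
    "donate_10pct": {"utilitarianism": 8, "deontology": 3, "virtue_ethics": 6},
}

# Only theories above some "significance" weight (e.g. popularity among
# professional philosophers) get counted, per the suggestion above.
significance = {"utilitarianism": 0.3, "deontology": 0.3, "virtue_ethics": 0.2}
THRESHOLD = 0.1
significant = {t for t, w in significance.items() if w >= THRESHOLD}

def worst_case(action: str) -> int:
    """Most severe score the action receives under any significant theory."""
    return min(scores[action][t] for t in significant)

# Choose the action whose worst case across significant theories is least bad.
best = max(scores, key=worst_case)
print(best, worst_case(best))  # donate_10pct 3
```

This amounts to a maximin rule over the significant theories, which is one plausible reading of “avoid the most severe violations”; the thresholding keeps contrived “equal and opposite” theories from getting a veto.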
I think I may check out that book and sequence to get a feel for what’s already been thought about on this subject.
I think the idea is to assign credences to plausible theories, where ‘plausible’ is taken to mean satisfying some subset of the following:
Has been argued for in good faith by professional philosophers
Has relevant and well-reasoned arguments in favour of it
Accords at least partially with moral intuitions
Is consistent/parsimonious/precise/not metaphysically untoward/etc. (the usual desiderata for explanations/theories)
Concerns the usual domain of moral theories (values, agents, decisions, etc.)
An equivalent way to proceed is to consider all possible theories, but give the (completely) implausible ones a credence of 0, or sufficiently close to it.
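As a toy illustration of this, here is what “assign credences to plausible theories, give implausible ones credence ~0, then maximise expected choice-worthiness” might look like (the MEC approach discussed in the Moral Uncertainty book). All credences and scores below are made up, and the shared cardinal scale again assumes intertheoretic comparability.

```python
# A toy "maximise expected choice-worthiness" calculation. Credences and
# choice-worthiness numbers are invented for illustration; completely
# implausible theories simply get credence 0 (or near it), so they drop
# out of the sum.

credences = {
    "utilitarianism": 0.5,
    "deontology":     0.3,
    "virtue_ethics":  0.2,
    "implausible_th": 0.0,  # fails the plausibility criteria above
}

choice_worthiness = {
    "eat_meat":     {"utilitarianism": 2, "deontology": -8,
                     "virtue_ethics": -3, "implausible_th": 10},
    "donate_10pct": {"utilitarianism": 8, "deontology": 3,
                     "virtue_ethics": 6, "implausible_th": -10},
}

def expected_cw(action: str) -> float:
    """Credence-weighted choice-worthiness of an action across theories."""
    return sum(credences[t] * v for t, v in choice_worthiness[action].items())

best = max(choice_worthiness, key=expected_cw)
print(best, expected_cw(best))  # donate_10pct 6.1
```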
Probably something like striving for a Long Reflection process. (Due to complex cluelessness more generally, not just moral uncertainty.)
The real issue is that this requires unrealistic levels of coordination and assumes that moral objectivism is true. While that is an operating assumption needed to do anything in EA, that doesn’t mean it’s true.