Sortition Model of Moral Uncertainty
Epistemic status: Not an expert on moral uncertainty.
The Model
There have been many models proposed for resolving moral uncertainty, but I would like to introduce one more. Instead of acting in accordance with the moral theory we are most confident in ("My Favorite Theory") or building complex electoral systems (MEC, the parliamentary model), we might want to pick a moral theory at random. Just assign to every moral theory you know a probability reflecting how confident you are in it, put them in a row from least to most likely (or any sequence really) and pick a random real number between 0 and 100. E.g. say you have 1% credence in "Kantian ethics", 30.42% in "Average utilitarianism" and 68.58% in "Total utilitarianism", and you generate the random number 31: you will therefore pursue "Average utilitarianism", since 31 falls inside the interval from 1 to 31.42 that it occupies. Whenever you update your probabilities you can reroll the dice (another version would be to reroll at fixed intervals, e.g. every day). Below is a minimal sketch of the sampling step, followed by some of the advantages and disadvantages of this model.
The Good
It represents your probabilities
It is fair: every theory, even a low-credence one, gets a real chance of being acted on
It is easy to understand
It is fast
It stops a moral theory from dominating when it has only slightly more credence than the runner-up (e.g. 50% vs. 49%)
It stops a moral theory from dominating when it has only a plurality, not a majority, of credence (e.g. 40% vs. 20%, 20%, 20%)
It sidesteps the problem of theory-individuation (splitting one theory into several variants doesn't change how often that family of theories is picked, since their credences simply add up)
It has no need for intertheoretic comparisons of value
It makes you less fanatical
It is cognitively easy (no need to do complex calculations in your head)
The Bad
Humans need to use something other than their brain (dice/computers) to choose randomly (for an AI this would not be a problem)
You're not considering a lot of information about the moral theories. This can lead to violations of "moral dominance", e.g. the randomly chosen theory settles an option it has little stake in while another theory screams from the sidelines (this could potentially be fixed by making the "stakes" an additional weighting for any given option, though that increases the complexity; see the sketch after this list)
It makes you more inconsistent and therefore harder to cooperate with
Someone might get a wrong impression of you because they met you on a day when you happened to be acting on a low-credence theory
Overall I'm not really convinced this is the path to a better model of moral uncertainty (or value uncertainty, since this model could also be applied there). I think some variation of MEC is probably the best route. I posted this anyway because:
Maybe someone with more expertise in moral uncertainty could expand upon this model to make it better
Maybe sortition elements could be included in other theories to improve them
Maybe sortition elements could be included in other theories to make them more useful in practice, since sortition is cheap to apply without sacrificing fairness