Sortition Model of Moral Uncertainty

Epistemic status: Not an expert on moral uncertainty.

The Model

There have been many models proposed to resolve moral uncertainty, but I would like to introduce one more. Instead of acting in accordance with the moral theory we are most confident in (My Favorite Theory) or building complex electoral systems (MEC, the parliamentary model), we might want to pick a moral theory at random. Assign to every moral theory you know a probability equal to your confidence in it, put the theories in a row from least to most likely (or any sequence, really), and pick a random real number between 0 and 100. E.g.: say you have 1% credence in ā€˜Kantian ethicsā€™, 30.42% in ā€˜Average utilitarianismā€™ and 68.58% in ā€˜Total utilitarianismā€™, and you generate the random number 31; you would therefore pursue ā€˜Average utilitarianismā€™. Whenever you update your probabilities you can reroll the dice (in another version you would instead reroll at a fixed interval, e.g. every day). Here are some of the advantages and disadvantages of this model.
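The procedure above is just credence-weighted random sampling. A minimal sketch in Python (the credences are the illustrative numbers from the example; the function name is my own):

```python
import random

# Illustrative credences from the example above; they must sum to 1 (or 100%).
credences = {
    "Kantian ethics": 0.01,
    "Average utilitarianism": 0.3042,
    "Total utilitarianism": 0.6858,
}

def sample_theory(credences, rng=random):
    """Pick one moral theory at random, weighted by credence (sortition)."""
    theories = list(credences)
    weights = list(credences.values())
    return rng.choices(theories, weights=weights, k=1)[0]

# "Reroll the dice": call sample_theory again after updating credences,
# or on a fixed schedule (e.g. once a day).
theory_to_follow = sample_theory(credences)
```

`random.choices` handles the "put them in a row and pick a number between 0 and 100" step internally, so the theories can be listed in any order.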

The Good

  1. It represents your probabilities

  2. It is fair: every theory with nonzero credence gets a chance, in proportion to that credence

  3. It is easy to understand

  4. It is fast

  5. It stops a moral theory from dominating when it has only slightly more credence than the second-largest theory (e.g. 49% vs. 50%)

  6. It stops a moral theory from dominating when it has only a plurality rather than a majority of credence (e.g. 20%, 20%, 20%, 40%)

  7. It avoids the problem of theory-individuation (splitting one theory into several variants does not change their combined chance of being picked)

  8. It has no need for intertheoretic comparisons of value

  9. It makes you less fanatical

  10. It is cognitively easy (no need to do complex calculations in your head)

The Bad

  1. Humans need to use something other than their brain (dice/computers) to choose randomly (for an AI this would not be a problem)

  2. Youā€™re not considering a lot of information about the moral theories. This could lead you to violate ā€œmoral dominanceā€, e.g. picking a theory that decides an option it doesnā€™t have much at stake in while another theory screams from the sidelines. (This could potentially be solved by making the ā€˜stakesā€™ an additional weight when deciding any given option, but that increases the complexity.)

  3. It makes you more inconsistent over time and therefore harder to cooperate with

  4. Someone might get the wrong impression of you because they met you on a day when you happened to be following a low-credence theory
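The stakes-weighting fix suggested in point 2 could be sketched as sampling proportional to credence times stakes, where "stakes" is some hypothetical measure of how much each theory cares about the current decision. All names and numbers below are my own illustrative assumptions, not part of the model as stated:

```python
import random

def sample_theory_with_stakes(credences, stakes, rng=random):
    """Sample a theory with probability proportional to credence * stakes.

    'stakes' maps each theory to a nonnegative number representing how much
    that theory cares about the decision at hand (a hypothetical metric).
    A theory with zero stakes in the decision is never picked for it.
    """
    theories = list(credences)
    weights = [credences[t] * stakes.get(t, 0.0) for t in theories]
    return rng.choices(theories, weights=weights, k=1)[0]
```

This keeps the sortition mechanism intact but lets a minority theory with a lot at stake win the draw more often for that particular decision, at the cost of needing an intertheoretic measure of stakes, which is exactly the kind of complexity the plain model avoids.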


Overall Iā€™m not really convinced this is the path to a better model of moral uncertainty (or value uncertainty, since this model could also be applied there). I think some variation of MEC is probably the best route. The reasons I posted this anyway are:

  1. Maybe someone with more expertise in moral uncertainty could expand upon this model to make it better

  2. Maybe sortition elements could be included in other theories to improve them

  3. Maybe sortition elements could make other theories more useful in practice, since sortition is easy to apply without sacrificing fairness