Sortition Model of Moral Uncertainty
Epistemic status: Not an expert on moral uncertainty.
The Model
There have been many models proposed to resolve moral uncertainty, but I would like to introduce one more. Instead of acting in accordance with the moral theory we are most confident in (My Favorite Theory) or building complex electoral systems (MEC, the parliamentary model), we might want to pick a moral theory at random. Just assign to every moral theory you know a probability equal to your confidence in that theory, put them in a row from least to most likely (or any sequence really), and pick a random real number between 0 and 100; the number falls into exactly one theory's interval of cumulative credence. For example: say you have 1% credence in "Kantian ethics", 30.42% in "Average utilitarianism" and 68.58% in "Total utilitarianism", and you generate the random number 31; you will therefore pursue "Average utilitarianism". Whenever you update your probabilities you can reroll the dice (in another version you would reroll at a fixed interval, e.g. every day). Here are some of the advantages and disadvantages of this model.
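In code, the procedure is just cumulative-interval sampling. A minimal sketch in Python, using the example credences above (the function name is mine):

```python
import random

# Credences from the example above, in percent (they must sum to 100),
# ordered from least to most likely.
credences = [
    ("Kantian ethics", 1.0),
    ("Average utilitarianism", 30.42),
    ("Total utilitarianism", 68.58),
]

def sample_theory(credences):
    """Roll a random real number in [0, 100) and walk the cumulative
    credence intervals until the roll falls inside one of them."""
    roll = random.uniform(0, 100)  # a roll of 31 lands in [1.0, 31.42)
    cumulative = 0.0
    for theory, credence in credences:
        cumulative += credence
        if roll < cumulative:
            return theory
    return credences[-1][0]  # guard against floating-point edge cases

print(sample_theory(credences))  # e.g. "Average utilitarianism"
```

(Python's built-in random.choices with a weights argument does the same thing in one call; the explicit loop just mirrors the description above.)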
The Good
It represents your probabilities
It is fair: every theory gets a chance, in proportion to your credence in it
It is easy to understand
It is fast
It stops a moral theory from dominating when it has only slightly more credence than the second-largest theory (e.g. 50% vs. 49%)
It stops a moral theory from dominating when it has only a minority of the credence (e.g. 40% against three theories at 20% each)
It avoids the problem of theory individuation
It has no need for intertheoretic comparisons of value
It makes you less fanatical
It is cognitively easy (no need to do complex calculations in your head)
The Bad
Humans need to use something other than their brain (dice/computers) to choose randomly (for an AI this would not be a problem)
You're not considering a lot of information about the moral theories. This could lead to you violating "moral dominance", e.g. picking a theory that decides on an option it doesn't have much stake in while another theory screams from the sideline (this problem could potentially be solved by making the "stakes" an additional metric for deciding any given option, as in the sketch after this list, but that increases the complexity)
It makes you more inconsistent and therefore harder to cooperate with
Someone might get a wrong impression of you because they met you on a day when you were following a low-credence theory
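To make the stakes fix mentioned above concrete, here is one possible sketch: multiply each theory's credence by how much it has at stake in the decision at hand, and sample in proportion to the product. The numbers and the common stakes scale are purely illustrative, and note that putting stakes on a common scale quietly re-introduces the intertheoretic comparisons the plain model avoids:

```python
import random

def sample_theory_with_stakes(credences, stakes):
    """Sample a theory with probability proportional to credence * stake,
    one way to stop a theory with little at stake in this decision from
    overruling a theory that cares about it a lot."""
    theories = list(credences)
    weights = [credences[t] * stakes[t] for t in theories]
    return random.choices(theories, weights=weights, k=1)[0]

# Illustrative numbers: theory B has far more at stake in this choice,
# so it wins the draw more often than its bare credence would suggest.
credences = {"Theory A": 0.6, "Theory B": 0.4}
stakes = {"Theory A": 1.0, "Theory B": 10.0}  # on some common scale
print(sample_theory_with_stakes(credences, stakes))
```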
Overall I'm not really convinced this is the path to a better model of moral uncertainty (or value uncertainty, since this model could also be applied there). I think some variation of MEC is probably the best route. The reasons I posted this anyway:
Maybe someone with more expertise in moral uncertainty could expand upon this model to make it better
Maybe sortition elements could be included in other theories to improve them
Maybe sortition elements could be included in other theories to make them more useful in practice, since sortition is easy to apply without sacrificing fairness
Interesting idea! However, I'm not too sure about the simple version you've presented. As you mention, the major problem is that it neglects information about "stakes". You could try weighting the decision by the stakes somehow, but in cases where you have that information it seems strange to sometimes randomly and deliberately choose the option which is sub-optimal by the lights of MEC.
Also, as well as making you harder to cooperate with, inconsistent choices might over time lead you to choose a path which is worse than MEC by the lights of every theory you have some credence in. Maybe there's an analogy to empirical uncertainty: suppose I've hidden $10 inside one of two envelopes and fake money in the other. You can pay me $1 for either envelope, and I'll also give you 100 further opportunities to pay me $1 to switch to the other one. Your credences are split 55%-45% between the envelopes. MEU would tell you to pick the slightly more likely envelope and be done with it. But, over the subsequent 100 chances to switch, the empirical analogue of your sortition model would just under half the time recommend paying me $1 to switch. In the end, you're virtually guaranteed to lose money. Even picking the less likely envelope would represent a better strategy, as long as you stick to it. In other words, if you're unsure between states of the world A and B, constantly switching between doing what's best given A and doing what's best given B could be worse in expectation than just coordinating all your choices around either A or B, irrespective of which is true. I'm wondering if the same is true where you're uncertain between moral theories A and B.
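(A quick check of the arithmetic here, not part of the original argument: a simulation of the game, assuming your 55% credence is calibrated and that each round you reroll and pay $1 whenever the roll disagrees with the envelope you currently hold.)

```python
import random

def sortition_envelopes(rounds=100, credence_a=0.55, trials=10_000):
    """Average net winnings if, each round, you reroll your credences
    and pay $1 to switch whenever the roll disagrees with the envelope
    you are currently holding."""
    total = 0.0
    for _ in range(trials):
        prize_in_a = random.random() < credence_a  # where the $10 really is
        holding_a = random.random() < credence_a   # initial $1 pick
        cost = 1
        for _ in range(rounds):
            roll_a = random.random() < credence_a  # reroll the credences
            if roll_a != holding_a:                # roll disagrees: switch
                cost += 1
                holding_a = roll_a
        total += (10 if holding_a == prize_in_a else 0) - cost
    return total / trials

# Expected switches per round = 2 * 0.55 * 0.45 = 0.495, so about $50.50
# in fees against at most $10 of prize: roughly -$45 on average, versus
# about +$4.50 for paying $1 once and sticking with the 55% envelope.
print(sortition_envelopes())
```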
That said, I'm pretty sure there are some interesting ideas about "stochastic choice" in the empirical case which might be relevant. Folks who know more about decision theory might be able to speak to that!
Regarding stakes, I think OP's point is that it's not obvious that being sensitive to stakes is a virtue of a theory, since it can lead to low credence-high stakes theories "swamping" the others, and that seems, in some sense, unfair. A bit like if your really pushy friend always decides where your group of friends goes for dinner, perhaps. :)
I'm not sure your point about money pumping works, at least as stated: you're talking about a scenario where you lose money over successive choices. But what we're interested in is moral value, and the sortition model will simply deny there's a fixed amount of money in the envelope each time one "rolls" to see what one's moral view is. It's more like there's $10 in the envelope at stage 1, $100 at stage 2, $1 at stage 3, etc. What this brings out is the practical inconsistency of the view. But again, one might think that's a theoretical cost worth paying to avoid other theories' costs, e.g. fanaticism.
I rather like the sortition model (I don't know if I buy it, but it's at least interesting and one option we should have on the table) and I thank the OP for bringing it to my attention. I would flag that the "worldview diversification" model of moral uncertainty has a similar flavour: you divide your resources into different "buckets" depending on the credence you have in each worldview. See also the bargaining-theoretic model, which treats moral uncertainty as a problem of intra-personal moral trade. These two models also avoid fanaticism and leave one open to practical inconsistency.
Got it. The tricky thing seems to be that sensitivity to stakes is an obvious virtue in some circumstances; and (intuitively) a mistake in others. Not clear to me what marks that difference, though. Note also that maximising expected utility allows for decisions to be dictated by low-credence/likelihood states/events. That's normally intuitively fine, but sometimes leads to "unfairness", e.g. the St. Petersburg paradox and Pascal's wager/mugging.
I'm not entirely sure what you're getting at re the envelopes, but that's probably me missing something obvious. To make the analogy clearer: swap out monetary payouts for morally relevant outcomes, such that holding A at the end of the game causes outcome O1 and holding B causes O2. Suppose you're uncertain between T1 and T2. T1 says O1 is morally bad but O2 is permissible, and vice versa. Instead of paying to switch, you can choose to do something which is slightly wrong on both T1 and T2, but wrong enough that doing it more than 10 times is worse than O1 and O2 on both theories. Again, it looks like the sortition model is virtually guaranteed to recommend taking a course of action which is far worse than sticking to either envelope on either T1 or T2: constantly switching and causing a large number of minor wrongs.
But agreed that we should be uncertain about the best approach to moral uncertainty!
What about integrating this into a Monte Carlo method?
I think you highlight some potentially good pros for this approach, and I can't say I've thoroughly analyzed it. However, quite a few of those pros seem non-unique to this particular model of moral uncertainty vs. other frameworks that acknowledge uncertainty and try to weigh the significance of the scenarios against each other. For example, such models already have the pros related to "It stops a moral theory from dominating...", "It makes you less fanatical", etc. (but there are some seemingly unique "pros", such as "It has no need for intertheoretic comparisons of value").
Still, I am highly skeptical of such a model even in comparison to simply going with whatever you are most confident in, because of things like complexity (among other things). More importantly, I think this model has a few serious problems along the lines of failing to weight the significance of the situation, and thus it wouldn't perform well under basic expected value tests (which you might have been getting at with your point about choosing theories with low "stakes"): suppose your credences are 50% average utilitarian, 50% total utilitarian. You are presented with a situation where choice A mildly improves average utility, such as by severely restricting some population's growth rate (imagine it's for animals), but this is drastically bad from a total utilitarian viewpoint in comparison to choice B (do nothing / allow the population to rise). To use simple numbers, we could be talking about choice A = (+5, -100) (utility points under "average, total") vs. choice B = (0, 0). If the roll leaves the decision-maker operating on average utilitarianism, they will pick A, which is drastically bad by the other theory's lights. This is why (to my understanding), when your educated intuition says you have the time, knowledge, etc. to do some beneficial analysis, you should try to weight and compare the significance of the situations under different moral frameworks.
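(To spell out the numbers, a back-of-the-envelope sketch using the deltas above and assuming the 50/50 roll decides each recurrence of the situation:)

```python
# Payoffs as (average-utility delta, total-utility delta), from the example.
payoffs = {"A": (5, -100), "B": (0, 0)}

# Under sortition with 50/50 credences, half the rolls hand the decision to
# the average utilitarian (who picks A) and half to the total utilitarian
# (who picks B).
exp_avg = 0.5 * payoffs["A"][0] + 0.5 * payoffs["B"][0]    # +2.5
exp_total = 0.5 * payoffs["A"][1] + 0.5 * payoffs["B"][1]  # -50.0

# An analysis that weighs the stakes would notice that B costs the average
# utilitarian only 5 points while A costs the total utilitarian 100 points,
# and would pick B every time.
print(exp_avg, exp_total)  # 2.5 -50.0
```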