Interesting idea! However, I'm not so sure about the simple version you’ve presented. As you mention, the major problem is that it neglects information about ‘stakes’. You could try weighting the decision by the stakes somehow, but in cases where you have that information it seems strange to sometimes randomly and deliberately choose the option which is sub-optimal by the lights of MEC.
Also, as well as making you harder to cooperate with, inconsistent choices might over time lead you to choose a path which is worse than MEC by the lights of every theory you have some credence in. Maybe there’s an analogy to empirical uncertainty: suppose I’ve hidden $10 inside one of two envelopes and fake money in the other. You can pay me $1 for either envelope, and I’ll also give you 100 further opportunities to pay me $1 to switch to the other one. Your credences are split 55%-45% between the envelopes. MEU would tell you to pick the slightly more likely envelope and be done with it. But, over the subsequent 100 chances to switch, the empirical analogue of your sortition model would just under half the time recommend paying me $1 to switch. In the end, you’re virtually guaranteed to lose money. Even picking the less likely envelope would represent a better strategy, as long as you stick to it. In other words, if you’re unsure between states of the world A and B, constantly switching between doing what’s best given A and doing what’s best given B could be worse in expectation than just coordinating all your choices around either A or B, irrespective of which is true. I’m wondering if the same is true where you’re uncertain between moral theories A and B.
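(If it helps, here's a quick Python sketch of the game as I set it up. The `simulate` function, the strategy names, and the trial count are just my illustration; the payoffs and credences are the ones from the example above.)

```python
import random

# Sketch of the envelope game: $10 in one envelope, credences 55/45,
# $1 to buy in, $1 per switch, 100 chances to switch.
# The "sortition" player re-rolls their view each round in proportion
# to their credences and switches whenever the roll favours the other
# envelope; the "stick" (MEU) player buys the likelier envelope and stops.

def simulate(strategy, trials=20000, rounds=100, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        winner = "A" if rng.random() < 0.55 else "B"
        holding = "A"   # both players initially buy the likelier envelope
        spent = 1       # $1 entry fee
        for _ in range(rounds):
            if strategy == "sortition":
                cred_in_other = 0.45 if holding == "A" else 0.55
                if rng.random() < cred_in_other:  # roll favours the other envelope
                    holding = "B" if holding == "A" else "A"
                    spent += 1                    # $1 per switch
            # the "stick" strategy never switches
        total += (10 if holding == winner else 0) - spent
    return total / trials

print(simulate("stick"))      # roughly +4.5 in expectation (0.55 * $10 - $1)
print(simulate("sortition"))  # large negative: ~50 paid switches on average
```

The stick player nets about $4.50 in expectation, while the sortition-style player pays for roughly 50 switches and ends up deep in the red, whichever envelope actually holds the $10.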
That said, I’m pretty sure there are some interesting ideas about ‘stochastic choice’ in the empirical case which might be relevant. Folks who know more about decision theory might be able to speak to that!
Regarding stakes, I think OP’s point is that it’s not obvious that being sensitive to stakes is a virtue of a theory, since it can lead to low-credence, high-stakes theories “swamping” the others, and that seems, in some sense, unfair. A bit like if your really pushy friend always decides where your group of friends goes for dinner, perhaps. :)
I’m not sure your point about money pumping works, at least as stated: you’re talking about a scenario where you lose money over successive choices. But what we’re interested in is moral value, and the sortition model will simply deny there’s a fixed amount of money in the envelope each time one ‘rolls’ to see what one’s moral view is. It’s more like there’s $10 in the envelope at stage 1, $100 at stage 2, $1 at stage 3, etc. What this brings out is the practical inconsistency of the view. But again, one might think that’s a theoretical cost worth paying to avoid other theories’ costs, e.g. fanaticism.
I rather like the sortition model—I don’t know if I buy it, but it’s at least interesting and one option we should have on the table—and I thank the OP for bringing it to my attention. I would flag that the “worldview diversification” model of moral uncertainty has a similar flavour: you divide your resources into different ‘buckets’ corresponding to the credence you have in each worldview. See also the bargaining-theoretic model, which treats moral uncertainty as a problem of intra-personal moral trade. These two models also avoid fanaticism while leaving one open to practical inconsistency.
Got it. The tricky thing seems to be that sensitivity to stakes is an obvious virtue in some circumstances and (intuitively) a mistake in others. It’s not clear to me what marks that difference, though. Note also that maximising expected utility allows for decisions to be dictated by low-credence/likelihood states/events. That’s normally intuitively fine, but sometimes leads to ‘unfairness’ — e.g. the St. Petersburg paradox and Pascal’s wager/mugging.
I’m not entirely sure what you’re getting at re the envelopes, but that’s probably me missing something obvious. To make the analogy clearer: swap out the monetary payouts for morally relevant outcomes, such that holding A at the end of the game causes outcome O1 and holding B causes O2. Suppose you’re uncertain between theories T1 and T2: T1 says O1 is morally bad but O2 is permissible, and vice versa. Instead of paying to switch, you can choose to do something which is slightly wrong on both T1 and T2, but wrong enough that doing it >10 times is worse than either O1 or O2 on both theories. Again, it looks like the sortition model is virtually guaranteed to recommend a course of action which is far worse, on either T1 or T2, than sticking with either envelope: constantly switching and racking up a large number of minor wrongs.
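To put toy numbers on this (the utilities below are my own assumptions, purely for illustration): score O1 at -10 on T1 and 0 on T2, score O2 the reverse, give T1 and T2 equal credence, and make each switch a -1 “minor wrong” on both theories. Then compare sticking with re-rolling one’s view each round:

```python
import random

# Toy numbers for the moral version of the envelope game (my assumptions):
# holding A at the end causes O1, holding B causes O2.
# T1 scores O1 at -10 and O2 at 0; T2 the reverse.
# Each switch is minorly wrong on BOTH theories: -1 each.

T1 = {"O1": -10, "O2": 0}
T2 = {"O1": 0, "O2": -10}

def run(strategy, rounds=100, credence_T1=0.5, trials=20000, seed=1):
    rng = random.Random(seed)
    s1 = s2 = 0.0  # running moral scores by the lights of T1 and T2
    for _ in range(trials):
        holding = "A"
        for _ in range(rounds):
            if strategy == "sortition":
                rolled = T1 if rng.random() < credence_T1 else T2
                # the rolled theory pushes you toward the outcome it permits
                preferred = "A" if rolled["O1"] > rolled["O2"] else "B"
                if preferred != holding:
                    holding = preferred
                    s1 += -1  # each switch is a minor wrong on T1...
                    s2 += -1  # ...and on T2
            # the "stick" strategy never switches
        outcome = "O1" if holding == "A" else "O2"
        s1 += T1[outcome]
        s2 += T2[outcome]
    return s1 / trials, s2 / trials

print(run("stick"))      # (-10.0, 0.0): bad on T1, fine on T2
print(run("sortition"))  # roughly (-55, -55): worse on BOTH theories
```

Sticking with A is bad on T1 and fine on T2 (and sticking with B would be the mirror image), but the sortition-style player averages around fifty minor wrongs on top of the final outcome, and so does worse by the lights of both theories at once.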
But agreed that we should be uncertain about the best approach to moral uncertainty!