Regarding stakes, I think OP’s point is that it’s not obvious that being sensitive to stakes is a virtue of a theory, since it can lead to low-credence/high-stakes theories “swamping” the others, and that seems, in some sense, unfair. A bit like if your really pushy friend always decides where your group of friends goes for dinner, perhaps. :)
I’m not sure your point about money pumping works, at least as stated: you’re talking about a scenario where you lose money over successive choices. But what we’re interested in is moral value, and the sortition model will simply deny there’s a fixed amount of money in the envelope each time one ‘rolls’ to see what one’s moral view is. It’s more like there’s $10 in the envelope at stage 1, $100 at stage 2, $1 at stage 3, etc. What this brings out is the practical inconsistency of the view. But again, one might think that’s a theoretical cost worth paying to avoid other theories’ costs, e.g. fanaticism.
I rather like the sortition model (I don’t know if I buy it, but it’s at least interesting and an option we should have on the table) and I thank the OP for bringing it to my attention. I would flag that the “worldview diversification” model of moral uncertainty has a similar flavour: you divide your resources into different ‘buckets’ in proportion to the credence you have in each worldview. See also the bargaining-theoretic model, which treats moral uncertainty as a problem of intra-personal moral trade. These two models also avoid fanaticism while leaving one open to practical inconsistency.
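As a quick illustration of the bucket idea (a minimal sketch of how I understand it; the credences and budget are made up for the example):

```python
# Toy sketch of worldview diversification: a fixed budget is split
# across moral theories in proportion to one's credence in each,
# so no single high-stakes theory can swamp the whole allocation.
credences = {"T1": 0.7, "T2": 0.2, "T3": 0.1}  # hypothetical credences
budget = 1000.0  # e.g. dollars, or hours of effort

buckets = {theory: credence * budget for theory, credence in credences.items()}
print(buckets)  # {'T1': 700.0, 'T2': 200.0, 'T3': 100.0}
```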
Got it. The tricky thing seems to be that sensitivity to stakes is an obvious virtue in some circumstances and (intuitively) a mistake in others. It’s not clear to me what marks that difference, though. Note also that maximising expected utility allows decisions to be dictated by low-credence/low-likelihood states or events. That’s normally intuitively fine, but sometimes leads to ‘unfairness’, e.g. the St. Petersburg paradox and Pascal’s wager/mugging.
I’m not entirely sure what you’re getting at re the envelopes, but that’s probably me missing something obvious. To make the analogy clearer: swap out the monetary payouts for morally relevant outcomes, such that holding A at the end of the game causes outcome O1 and holding B causes O2. Suppose you’re uncertain between T1 and T2. T1 says O1 is morally bad but O2 is permissible, and vice versa. Instead of paying to switch, you can choose to do something which is slightly wrong on both T1 and T2, but wrong enough that doing it more than 10 times is worse than O1 and O2 on both theories. Again, it looks like the sortition model is virtually guaranteed to recommend a course of action which is far worse than sticking with either envelope on either T1 or T2: by constantly switching, it causes a large number of minor wrongs.
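Here’s a toy simulation of how I’m picturing it (purely illustrative; the 50/50 credences and the “switch whenever the sampled theory condemns the held envelope” rule are my own assumptions, not anything from the post):

```python
import random

# Under the sortition model, each choice point is decided by a theory
# sampled in proportion to one's credences. T1 condemns holding A
# (outcome O1) and T2 condemns holding B (outcome O2), so roughly half
# the draws demand a switch, and every switch is a minor wrong on BOTH
# theories.
random.seed(0)
credences = {"T1": 0.5, "T2": 0.5}
held = "A"        # start holding envelope A
minor_wrongs = 0  # minor wrongs accumulated by switching

for _ in range(20):  # 20 choice points
    theory = random.choices(list(credences), weights=list(credences.values()))[0]
    if (theory == "T1" and held == "A") or (theory == "T2" and held == "B"):
        held = "B" if held == "A" else "A"  # the sampled theory demands a switch
        minor_wrongs += 1

print(held, minor_wrongs)
# With ~10 switches expected over 20 rounds, both theories end up judging
# the accumulated minor wrongs worse than either O1 or O2 outright.
```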
But agreed that we should be uncertain about the best approach to moral uncertainty!