My biggest worry is with the assumption that resources are distributed among moral theories in proportion to the agent’s credences in those theories. It seems to me that this is an outcome that should be derived from a framework for decision-making under moral uncertainty, not something to be assumed at the outset.
I don’t think I understand the thinking here. It seems fairly natural to say “I am 80% confident in theory A, so that gets 80% of my resources, etc.”, and then to think about what would happen after that. It’s not intuitive to say “I am 80% confident in utilitarianism, so that gets 80% ‘bargaining power’”. But I accept it’s an open question, if we want to do some form of internal bargaining, what the best version of that is.
One concern with internal bargaining (IB) that you don’t mention is that the Nash bargaining solution (NBS) depends on a “disagreement point”, but it’s not clear what this disagreement point should be. The disagreement point represents the utilities the bargainers obtain if they fail to reach an agreement. The random dictator disagreement point in Greaves and Cotton-Barratt (2019) seems quite natural for many decision problems, but I think this dependence on a disagreement point counts against bargaining approaches.
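To make the dependence concrete, here is the textbook two-bargainer NBS; the notation ($u_1, u_2$ for the two theories’ utilities, $F$ for the feasible set, $d = (d_1, d_2)$ for the disagreement point) is mine, not from the post:

$$
\max_{(u_1, u_2) \in F} \; (u_1 - d_1)(u_2 - d_2) \quad \text{subject to } u_1 \ge d_1,\; u_2 \ge d_2.
$$

Because the maximizer of this product shifts as $d$ shifts, two different choices of disagreement point can recommend different actions from the same feasible set, which is exactly the sensitivity at issue.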
I do mention the challenge of the disagreement point (see footnote 7). Again, I agree that this is the sort of thing that merits further inquiry. I’m not sold on the ‘random dictator point’, which, if I understood correctly, is identical to running a lottery where each theory has an X% chance of getting its top choice (where X% represents your credence in that theory). I note in section 2 that bargaining agents will likely think it preferable, by their own lights, to bargain over time rather than resolve things with lotteries. It’s for this reason I’m also inclined to prefer a ‘moral marketplace’ over a ‘moral parliament’: the former is what the sub-agents would themselves prefer.
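As an illustrative sketch of the equivalence I have in mind (my notation, not the paper’s): with credences $(p_A, p_B) = (0.8, 0.2)$ in theories A and B, the random dictator point gives each theory $i$ the expected utility

$$
d_i = 0.8\, u_i(a_A^*) + 0.2\, u_i(a_B^*),
$$

where $a_A^*$ and $a_B^*$ are A’s and B’s respective top choices. Bargaining from this point can leave both theories at least as well off as the lottery itself, which is the sense in which the sub-agents would prefer trade over lotteries.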