This worry about internal bargaining and moral parliament approaches strikes me as entirely well-taken. The worry, in essence, is that on the most obvious way of implementing these proposals for dealing with moral uncertainty, a decision maker winds up governed by the dead hand of her past empirical evidence. Suppose outcome X looks unlikely, and my subagents A and B strike a deal that benefits A (the subagent representing the theory in which I have low credence) just in case X obtains. Now I find out that X only looked unlikely because my old evidence wasn’t so great. In fact, X now obtains! Should I do as A recommends, even though I have much higher credence in the moral theory that B represents?
What I think this worry shows is that we should not implement the internal bargaining proposal in the most obvious way, but in a slightly less obvious one. I outline what I have in mind more formally in work in progress that I’d be very happy to share with anyone interested, but here’s the basic idea: the ‘contracts’ between subagents / parliamentarians that a decision maker should regard herself as bound by in the present choice situation are not the contracts that those subagents / parliamentarians agreed to earlier in her lifetime, based on the evidence that she had then. Instead, she should regard herself as bound by the contracts that those subagents would have agreed to earlier in her lifetime if they had then had the empirical evidence that they have now. This should resolve Wei_Dai’s puzzle.
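The revised proposal can be made concrete with a toy model. In the sketch below, everything is my own illustrative assumption rather than the formal proposal: the payoff numbers, the `nash_contract` helper, and the choice of credence-weighted Nash bargaining as the contracting mechanism. The point it illustrates is just the structural one: the contract the subagents would strike under the old evidence (X unlikely) differs from the one they would strike under the current evidence (X obtains), and on the revised proposal the decision maker is bound by the latter.

```python
# Toy model of evidence-sensitive contracts between two moral subagents.
# A "contract" is a mixed policy: (prob. of A's act if X obtains, prob. of
# A's act if X doesn't). It is chosen by asymmetric Nash bargaining, with
# the decision maker's credences as bargaining weights. All payoff numbers
# are hypothetical.

def nash_contract(cred_A, cred_B, p_X, grid=101):
    # Hypothetical moral payoffs: theory A cares about its own act only in
    # X-worlds; theory B mildly prefers its own act in every world.
    def u_A(qx, qn):
        return p_X * (10 * qx) + (1 - p_X) * 1.0
    def u_B(qx, qn):
        return p_X * (5 * (1 - qx)) + (1 - p_X) * (5 * (1 - qn))
    pts = [(i / (grid - 1), j / (grid - 1))
           for i in range(grid) for j in range(grid)]
    # Disagreement point: each subagent's worst feasible payoff.
    d_A = min(u_A(qx, qn) for qx, qn in pts)
    d_B = min(u_B(qx, qn) for qx, qn in pts)
    def nash_product(qx, qn):
        return ((u_A(qx, qn) - d_A) ** cred_A) * ((u_B(qx, qn) - d_B) ** cred_B)
    return max(pts, key=lambda p: nash_product(*p))

# Old evidence: X looks very unlikely, so B concedes the X-worlds cheaply,
# and the bargain gives A its act whenever X obtains.
old = nash_contract(cred_A=0.2, cred_B=0.8, p_X=0.05)

# New evidence: X obtains. Recomputing the bargain under this evidence,
# A's act is done only with probability matching the (low) credence in A.
new = nash_contract(cred_A=0.2, cred_B=0.8, p_X=1.0)

print(old, new)
```

Following the stale contract would mean always doing A's act once X obtains; recomputing the contract with current evidence hands most of the influence back to B, as the revised proposal requires.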
Here’s a comment on the value of moral information. First, a quibble: the setup in the Wise Philanthropist case strikes me as slightly strange. If one thinks that studying more would improve one’s evidence, and one suspects that studying more would increase one’s credence in A relative to B, then this in itself should already shift one’s credence from B to A (cf. e.g. Briggs 2009).
Still, I think the feature of Wise Philanthropist that I’m quibbling about here is inessential to the worry that the internal bargaining approach will lead us to undervalue moral information. Suppose that I’m 50% confident in moral theory T1, 50% confident in T2, and that I can pay some oracle a small fee $x to learn which theory is in fact correct. Intuitively, I should consult the oracle. But if the T1 and T2 subagents each think that there’s a 50% chance the oracle will endorse T1 and a 50% chance she’ll endorse T2, then each subagent may well think that she has just as much to lose by consulting the oracle as she has to gain, and so prefer not to spend the $x.
Fortunately for the internal bargaining theory, I think that Michael (the OP) opens himself up to these results only by being unfaithful to the motivating idea of internal bargaining. The motivating idea is that each subagent is certain of the moral theory she represents. But in that case, in the oracle example, the T1 and T2 subagents should each be certain that the oracle will endorse her preferred theory! Each subagent would then be willing to pay quite a lot to consult the oracle. Hence, as is intuitive, it is indeed appropriate for the uncertain decision maker to do so.
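The contrast comes down to two lines of arithmetic. In the sketch below, the `subagent_value_of_oracle` helper and the stake sizes are hypothetical illustrations, not anything from the OP: a subagent who treats the oracle's verdict as a coin flip assigns the consultation negative expected value, while a subagent certain of her own theory values it at nearly the full stakes.

```python
def subagent_value_of_oracle(p_endorse, gain, loss, fee):
    """A subagent's expected value of consulting the oracle, given her
    probability that the oracle endorses her theory and her (hypothetical)
    stakes either way."""
    return p_endorse * gain - (1 - p_endorse) * loss - fee

# Unfaithful implementation: each subagent treats the verdict as 50/50,
# so the fee makes consulting the oracle a (slightly) bad deal for her.
print(subagent_value_of_oracle(p_endorse=0.5, gain=100, loss=100, fee=1))   # -1.0

# Faithful implementation: each subagent is certain of her own theory,
# so she expects vindication and happily pays the fee.
print(subagent_value_of_oracle(p_endorse=1.0, gain=100, loss=100, fee=1))   # 99.0
```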
(FYI, I develop this line of thought more formally in work in progress that I’d be very happy to share with anyone interested :) )