I’m not so sure what to say about 2., but I want to note in response to 1. that although the Property Rights Theory (PRT) that I propose does not require any intertheoretic comparisons of choiceworthiness, it nonetheless licenses a certain kind of stakes sensitivity. PRT gives moral theories greater influence over the particular choice situations that matter most to them, and lesser influence over those that matter least to them.
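(To make that stakes sensitivity concrete, here's a toy sketch in Python. To be clear, this is not the actual mechanism from the paper: the budgets, the stakes numbers, and the proportional-bidding rule are all just illustrative assumptions. Note that each theory-agent's stakes are only ever compared against its own other stakes, so no intertheoretic comparison is smuggled in.)

```python
# Toy illustration only (not the mechanism from the paper): two
# theory-agents with equal budgets bid for control of two choice
# situations, each bidding in proportion to how much the situation
# matters to it relative to its other stakes.

budgets = {"T1": 1.0, "T2": 1.0}

# stakes[theory][situation]: how much hangs on the situation, in that
# theory's own units. Each theory's stakes are only ever compared with
# its own, so no intertheoretic comparison is required.
stakes = {
    "T1": {"situation_A": 9.0, "situation_B": 1.0},
    "T2": {"situation_A": 2.0, "situation_B": 8.0},
}

for situation in ["situation_A", "situation_B"]:
    bids = {
        theory: budgets[theory] * stakes[theory][situation] / sum(stakes[theory].values())
        for theory in budgets
    }
    winner = max(bids, key=bids.get)
    print(f"{situation} is controlled by {winner}, bids = {bids}")

# situation_A is controlled by T1 (T1 bids 0.9, T2 bids 0.2)
# situation_B is controlled by T2 (T1 bids 0.1, T2 bids 0.8)
```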
Hmm, I don’t think that we need any theory-agents to change their minds when the oracle delivers her pronouncement, though; we just need the theory-agents’ resource endowments to be sensitive to what the oracle says. We can think of all the same theory-agents still hanging around after the oracle delivers her pronouncement, still just as certain in the theories they represent; it’s just that now only one of them ever gets endowed with any resources.
There are two questions to be distinguished here. (1) Does adding 1 year to the life of a wealthy person in the USA increase their well-being more than adding 1 year to the life of a poor person in Kenya would increase their well-being? (2) Does adding 1 year to the life of a wealthy person in the USA increase overall moral value more than adding 1 year to the life of a poor person in Kenya would? Your reply seems to be addressed to question (1), but the original discussion and my comment concern question (2). If the distribution of welfare makes a difference to overall moral value, then the answer to (2) might be ‘no’ even if the answer to (1) is ‘yes’.
A quick response to the last few paragraphs of section 2.1.2, on Open Philanthropy’s view that making the value of saving a year of life depend on the well-being of the saved person would have “the intuitively unacceptable implication that saving lives in richer countries would, other things being equal, be more valuable on the grounds that such people are richer and so better off.”
The post comments that this “objection probably relates to a sense of fairness. It’s unfair to benefit someone simply because they are lucky enough to be better off. But OP (and others) tend to ignore fairness; the aim is just to do the most good.” On one reading of ‘fairness’, however (viz. as referring to justice in the distribution of well-being), fairness need not be in tension with doing the most good. On the contrary, fairness of this kind could be one of the factors that determines how good outcomes are.
For example, people with prioritarian, egalitarian, or desertist theories of the good might claim that benefitting someone who is badly off by giving her x units of extra lifetime well-being contributes just as much to the good as benefitting someone who is already well off by giving her 2x units of extra lifetime well-being.
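(Here's a minimal numerical sketch of how the answer to question (2) above can be 'no' even when the answer to question (1) is 'yes'. The threshold and the doubling weight are purely hypothetical; any priority weighting of this shape would make the same point.)

```python
# Purely illustrative priority weighting: benefits to anyone below the
# threshold count double towards overall moral value.

def moral_value_of_benefit(benefit, recipient_wellbeing, threshold=50.0):
    weight = 2.0 if recipient_wellbeing < threshold else 1.0
    return weight * benefit

x = 10.0

# Question (1): the well-off person's well-being gain (2x) is larger.
# Question (2): the two benefits contribute equally to overall moral value.
print(moral_value_of_benefit(x, recipient_wellbeing=20.0))      # 20.0
print(moral_value_of_benefit(2 * x, recipient_wellbeing=80.0))  # 20.0
```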
The post goes on to comment that a commitment to fairness would indicate “a tension in OP’s thinking. OP holds it is better, all things equal, to save someone in their 20s than in their 70s (I discuss this further in Section 2.2). Presumably, this is because the benefit to the younger person is greater. But, don’t happier people gain a greater benefit from an extra year of life than less happy people? If so, how can it be consistent to conclude we should account for quantity when assessing the value of saving lives, but not quality?”
However, far from being in tension with the claim that it is better to save the 20-year-old than the 70-year-old, a commitment to fairness in the distributive justice sense would actually reinforce this claim. Someone (with a life worth living) who has lived to 70 has already accumulated 70 years’ worth of lifetime well-being, and so is much better off than someone who has only accumulated 20 years’ worth. Distributive justice in well-being would therefore favour aiding the 20-year-old rather than the 70-year-old.
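(For concreteness, here's the same point on hypothetical numbers: if overall moral value is any strictly concave function of a person's accumulated lifetime well-being, the marginal moral value of one extra year is higher for the 20-year-old than for the 70-year-old. The square-root function and the one-unit-per-year assumption below are just convenient stand-ins.)

```python
import math

# Hypothetical: one unit of lifetime well-being per year lived, and
# overall moral value as a concave (square-root) function of a person's
# accumulated lifetime well-being.

def moral_value(lifetime_wellbeing):
    return math.sqrt(lifetime_wellbeing)

def marginal_value_of_one_more_year(years_lived):
    return moral_value(years_lived + 1) - moral_value(years_lived)

print(marginal_value_of_one_more_year(20))  # ~0.111
print(marginal_value_of_one_more_year(70))  # ~0.059
```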
This worry about internal bargaining & moral parliament approaches strikes me as entirely well-taken. The worry, basically, is that on the most obvious way of implementing these proposals for dealing with moral uncertainty, a decision maker will wind up governed by the dead hand of her past empirical evidence. Suppose that outcome X looks unlikely, and that my subagents A and B strike a deal that benefits A (the subagent representing the theory in which I have low credence) just in case X obtains. Now I find out that X only looked unlikely because my old evidence wasn’t so great. In fact, X now obtains! Should I do as A recommends, even though I have much higher credence in the moral theory that B represents?
What I think this worry shows us is that we should not implement the internal bargaining proposal in the most obvious way. Instead, we should implement it in a slightly less obvious way. I outline what I have in mind more formally in work in progress that I’d be very happy to share with anyone interested, but here’s the basic idea: the ‘contracts’ between subagents / parliamentarians that a decision maker should regard herself as bound by in the present choice situation are not the contracts that those subagents / parliamentarians actually agreed to earlier in the decision maker’s lifetime, based on the evidence that she had then. Instead, the decision maker should regard herself as bound by the contracts that those subagents would have agreed to earlier in her lifetime if they had then had the empirical evidence that they have now. This should resolve Wei_Dai’s puzzle.
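(Here's a toy sketch of the repaired proposal, on made-up numbers; the formal version is in the work in progress mentioned above. A 'contract' here just fixes each subagent's share of control in each contingency, and the binding contract is the Nash bargain evaluated at the subagents' *current* probability for X rather than the stale one. Struck on the old evidence, the deal hands A everything under the 'unlikely' contingency X; once X is known to obtain, the recomputed bargain no longer does.)

```python
import itertools

# How much each theory-agent cares about getting its way in each
# contingency, in its own units (no intertheoretic comparison needed).
# A cares most about what happens if X obtains; B about what happens
# if X does not.
stakes = {
    "A": {"X": 10.0, "not_X": 1.0},
    "B": {"X": 2.0, "not_X": 8.0},
}

def expected_gain(theory, shares, prob_X):
    """Expected stake-weighted control, where `shares` gives A's share of
    control in each contingency (B gets the remainder)."""
    own = shares if theory == "A" else {c: 1.0 - s for c, s in shares.items()}
    probs = {"X": prob_X, "not_X": 1.0 - prob_X}
    return sum(probs[c] * own[c] * stakes[theory][c] for c in probs)

def renegotiated_contract(prob_X, grid=101):
    """Nash bargain over control shares, at the CURRENT probability of X."""
    best, best_nash_product = None, -1.0
    for sx, snx in itertools.product([i / (grid - 1) for i in range(grid)], repeat=2):
        shares = {"X": sx, "not_X": snx}
        nash_product = (expected_gain("A", shares, prob_X)
                        * expected_gain("B", shares, prob_X))
        if nash_product > best_nash_product:
            best, best_nash_product = shares, nash_product
    return best

# On the old evidence (X unlikely), B cheaply concedes X, so A gets its
# way iff X obtains:
print(renegotiated_contract(prob_X=0.1))  # {'X': 1.0, 'not_X': 0.0}

# Once X is known to obtain, the binding contract is recomputed, and A no
# longer gets everything: control under X is split rather than handed to A.
print(renegotiated_contract(prob_X=1.0))  # {'X': 0.5, 'not_X': 0.0}
```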
Here’s a comment on the value of moral information. First, a quibble: the setup in the Wise Philanthropist case strikes me as slightly strange. If one thinks that studying more would improve one’s evidence, and one suspects that studying more would increase one’s credence in A relative to B, then this in itself should already be shifting one’s credence from B to A (cf. e.g. Briggs 2009).
Still, I think that the feature of Wise Philanthropist that I am quibbling about here is inessential to the worry that the internal bargaining approach will lead us to undervalue moral information. Suppose that I’m 50% confident in moral theory T1, 50% confident in T2, and that for some small fee $x I can learn from an oracle which theory is in fact correct. Intuitively, I should consult the oracle. But if the T1 and T2 subagents each think that there’s a 50% chance the oracle will endorse T1 and a 50% chance she’ll endorse T2, then each subagent might well think that she has just as much to lose by consulting the oracle as she has to gain, and so might prefer not to spend the $x.
Fortunately for the internal bargaining theory, I think that Michael (the OP) opens himself up to these results only by being unfaithful to the motivating idea of internal bargaining. The motivating idea is that each subagent is certain in the moral theory that she represents. But in that case, in the oracle example the T1 and T2 subagents should each be certain that the oracle will endorse their preferred theory! Each subagent would then be willing to pay quite a lot in order to consult the oracle. Hence—as is intuitive—it is indeed appropriate for the uncertain decision maker to do so.
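(Here's a back-of-the-envelope version of the oracle example, with hypothetical payoffs: without the oracle, the subagents split control; consulting her hands full control to whichever theory she endorses.)

```python
# Hypothetical payoffs: each subagent values its theory having full
# control at 10, and the 50/50 status quo split at 5.
V_FULL_CONTROL = 10.0
V_STATUS_QUO_SPLIT = 5.0

def willingness_to_pay(prob_oracle_endorses_me):
    expected_value_of_consulting = prob_oracle_endorses_me * V_FULL_CONTROL
    return expected_value_of_consulting - V_STATUS_QUO_SPLIT

# A subagent who gives the oracle's verdict 50/50 odds gains nothing in
# expectation from consulting her, so won't chip in towards the fee $x:
print(willingness_to_pay(0.5))  # 0.0

# But a subagent certain of its own theory is certain the oracle will
# endorse it, and will pay up to the full surplus:
print(willingness_to_pay(1.0))  # 5.0
```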
(FYI, I develop this line of thought more formally in work in progress that I’d be very happy to share with anyone interested :) )
Forgive me for failing to notice this comment until now, Michael! Although this response might not be speaking directly to your idea of ‘robustly positive portfolios’, I do just want to point out that there is a fairly substantive sense in which, in the Property Rights Theory as I have already described it, theory-agents ‘internalize negative (and positive) externalities.’ It’s just an instance of the Coase Theorem. Suppose that agent A is endowed with some decision right; that some ways of exercising that decision right are regarded by agent B as harmful, whereas others are not; and that B can pay A not to use the decision right in the ways that B regards as harmful. In this case, the opportunity cost to A of choosing to use the decision right in a way that B regards as harmful will include the cost of losing the side payment that A could have collected from B in return for using that decision right in a way that B does not regard as harmful. So, the negative externality is internalised.
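(A toy numerical version of the Coasean point, with all numbers hypothetical: once B's side payment is on the table, A's opportunity cost of the harmful use includes the forgone payment, and A chooses the harmless use.)

```python
# All numbers hypothetical. A holds the decision right; B regards one of
# A's two options as harmful and will pay A up to 5 (B's disvalue of the
# harm) in return for A picking the harmless option instead.
A_PAYOFF = {"harmful_use": 6.0, "harmless_use": 4.0}
SIDE_PAYMENT_FROM_B = 5.0

# With the side payment on the table, A's all-things-considered payoffs:
payoff_if_harmful = A_PAYOFF["harmful_use"]                          # 6.0
payoff_if_harmless = A_PAYOFF["harmless_use"] + SIDE_PAYMENT_FROM_B  # 9.0

# Choosing the harmful use costs A the forgone side payment, so B's
# externality shows up in A's own opportunity cost and is internalised.
print(max(("harmful_use", payoff_if_harmful),
          ("harmless_use", payoff_if_harmless), key=lambda t: t[1]))
# ('harmless_use', 9.0)
```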