Wait a minute. Why should knowing that x_t and y_t are the utility of the same generation (in two different social states) not influence value judgements? There is certainly nothing unethical about that, and this is true in a finite context as well. Say that society consists of three agents, that you are not necessarily a utilitarian, and that you are given a choice between x=(1,3,4) and y=(0,2,3). You could say that x is better than y since all three members of society prefer state x to state y. But this assumes that you know that x_t and y_t give the utility of the same agent in the two states. If you did not know this, then things would be quite different. Do you see what I mean?
No, you would know that there is a permutation of x which Pareto dominates y. This is enough for you to say that x>y.
I understand and accept your point, though, that people are not in practice selfless, and so if people wonder not "will someone be better off" but "will I specifically be better off", then (obviously) you cannot have anonymity.
Things would not be all that different with three agents. Sorry. But let me ask you: when you apply Suppes' grading principle to infer that e.g. x=(1,3,4) is better than y=(2,0,3) since there is a permutation x' of x with x' > y (for instance x' = (3,1,4)), would you not say that you are relying on the idea that everyone is better off to conclude that x' is better than y? I agree of course that criteria that depend on which state a specific person prefers are bad, and they cannot give us anonymity.
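The grading-principle check in this example can be made concrete in a small sketch (assuming, as in the discussion, that "Pareto dominates" here means strictly greater in every component): some permutation of x strictly dominates y exactly when the sorted vectors compare componentwise, so no explicit search over permutations is needed.

```python
def suppes_better(x, y):
    """Strict version of Suppes' grading principle: True if some
    permutation of x Pareto-dominates y, i.e. every component of the
    permuted x strictly exceeds the matching component of y.
    This holds iff the sorted vectors compare strictly componentwise."""
    return all(a > b for a, b in zip(sorted(x), sorted(y)))

# The example from the discussion: (3,1,4) is a permutation of (1,3,4)
# that strictly dominates (2,0,3), so x is ranked above y.
print(suppes_better((1, 3, 4), (2, 0, 3)))  # True
```

The sorted-vector comparison is the standard shortcut: pairing the smallest remaining component of x against the smallest remaining component of y is the hardest matching to satisfy, so if it succeeds, that pairing itself is the dominating permutation.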
Thanks Lawrence, this is a good point.
I agree that the immediate justification for the principle is "everyone is better off", but, as you correctly point out, that implies knowing "identifying" information.
It is hard for me to justify this on consequentialist grounds, though. Do you know of any justifications? Probably most consequentialists would just say that it increases total and average utility and leave it at that.
I am not sure what you mean by consequentialist grounds. Feel free to expand if you can.
I am actually writing something on the topic that we have been discussing. If you are interested I can send it to you when it is submittable. (This may take several months.)
Good question; now that I try to explain it, I realize my use of "consequentialist" was poorly defined.
I have changed my mind and agree with you – the argument for finite anonymity is weaker than I thought. Good to know!
I would be interested to hear your insights on these difficult problems, if you feel like sharing.