By the way, one version of what you might be saying is: “both infinite anonymity and the overtaking criterion seem like reasonable conditions. But it turns out that they conflict, and the overtaking criterion seems more reasonable, so we should drop infinite anonymity.” I would agree with that sentiment.
Forget overtaking. Infinite anonymity (in its strongest form – the one you called intergenerational equity) is incompatible with the following requirement: if everyone is better off in state x=(x_1,x_2,…) than in state y=(y_1,y_2,…), then x is better than y. See e.g. the paper by Fleurbaey and Michel (2003).
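Stated a bit more formally (this is my paraphrase of the two conditions, with π ranging over all permutations of the generations; Fleurbaey and Michel’s exact formulation may differ):

```latex
% Dominance: if every generation is strictly better off in x than in y,
% then x is socially better than y.
(\forall t)\; x_t > y_t \;\Longrightarrow\; x \succ y

% Infinite anonymity (intergenerational equity): any relabelling of the
% generations leaves a utility stream equally good.
(\forall \pi)\; (x_{\pi(1)}, x_{\pi(2)}, \ldots) \sim (x_1, x_2, \ldots)
```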
Fair enough. Let me phrase it this way: suppose you were blinded to the location of people in time. Do you agree that infinite anonymity would hold?
I will try to make the question more specific and then answer it. Suppose you are given two sequences x=(x_1,x_2,…) and y=(y_1,y_2,…) and that you are told that x_t is not necessarily the utility of generation t, but that it could be the utility of some other generation. Should your judgements then be invariant under infinite permutations? Well, it depends. Suppose I know that x_t and y_t are the utility of the same generation – but not necessarily of generation t. Then I would still say that x is better than y if x_t>y_t for every t. Infinite anonymity in its strongest form (the one you called intergenerational equity) does not allow you to make such judgements. (See my response to your second question.) In this case, however, I would agree to the strongest form of relative anonymity. If I do not know that x_t and y_t give the utility of the same generation, then I would agree to infinite anonymity. So the answer is that, sure, as you change the structure of the problem, different invariance conditions will become appropriate.
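A finite toy sketch of the distinction at work here (my own illustration, not part of the exchange, with three agents in place of infinitely many generations): coordinate-wise dominance is invariant under relabelling the agents in both states at once, which is all that relative anonymity asks for, but not under relabelling one state alone.

```python
# Toy illustration with three agents: the dominance test
# "every coordinate of x exceeds the matching coordinate of y"
# is preserved by a common relabelling of both states,
# but not by relabelling only one of them.

def dominates(x, y):
    """True if every agent is better off in x than in y, coordinate by coordinate."""
    return all(a > b for a, b in zip(x, y))

x = (1, 3, 4)
y = (0, 2, 3)
perm = (2, 0, 1)  # an arbitrary relabelling of the three agents

def relabel(v, p=perm):
    return tuple(v[i] for i in p)

print(dominates(x, y))                    # True: everyone is better off in x
print(dominates(relabel(x), relabel(y)))  # True: a common relabelling changes nothing
print(dominates(relabel(x), y))           # False: relabelling x alone loses the coordinate-wise argument
```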
Thank you for the clarification and references – it took me a few days to read and understand those papers.
I don’t think there are any strong ways in which we disagree. Prima facie, prioritizing the lives of older (or younger) people seems wrong, so statements like “I know that x_t and y_t are the utility of the same generation” don’t seem like they should influence your value judgments. However, lots of bizarre things occur if we act that way, so in reflective equilibrium we may wish to prioritize the lives of older people.
Wait a minute. Why should knowing that x_t and y_t are the utility of the same generation (in two different social states) not influence value judgements? There is certainly nothing unethical about that, and this is true also in a finite context. Let us say that society consists of three agents. Say that you are not necessarily a utilitarian and that you are given a choice between x=(1,3,4) and y=(0,2,3). You could say that x is better than y since all three members of society prefer state x to state y. But this assumes that you know that x_t and y_t give the utility of the same agent in the two states. If you did not know this, then things would be quite different. Do you see what I mean?
No, you would know that there is a permutation of x which Pareto dominates y. This is enough for you to say that x>y.
I understand and accept your point though that people are not in practice selfless, and so if people wonder not “will someone be better off” but “will I specifically be better off” then (obviously) you can’t have anonymity.
Things would not be all that different with three agents. Sorry. But let me ask you: when you apply Suppes’ grading principle to infer that e.g. x=(1,3,4) is better than y=(2,0,3) since there is a permutation x’ of x with x’>y, would you not say that you are relying on the idea that everyone is better off to conclude that x’ is better than y? I agree of course that criteria that depend on which state a specific person prefers are bad, and they cannot give us anonymity.
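For what it is worth, in the finite case the content of Suppes’ grading principle can be made concrete by sorting: some permutation of x strictly dominates y exactly when the i-th smallest entry of x exceeds the i-th smallest entry of y for every i. A minimal sketch (the function name is mine, purely for illustration):

```python
# Finite-case sketch of Suppes' grading principle: some rearrangement of x
# strictly dominates y if and only if, after sorting both vectors,
# each entry of x exceeds the corresponding entry of y.

def suppes_strictly_better(x, y):
    """True if some permutation of x is coordinate-wise strictly above y."""
    return all(a > b for a, b in zip(sorted(x), sorted(y)))

x = (1, 3, 4)
print(suppes_strictly_better(x, (0, 2, 3)))  # True: x itself dominates (0, 2, 3)
print(suppes_strictly_better(x, (2, 0, 3)))  # True: the rearrangement (3, 1, 4) dominates (2, 0, 3)
print(suppes_strictly_better(x, (2, 3, 3)))  # False: any rearrangement places the 1 below some entry of y
```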
Thanks Lawrence, this is a good point.
I agree that the immediate justification for the principle is “everyone is better off”, but as you correctly point out, that implies knowing “identifying” information.
It is hard for me to justify this on consequentialist grounds though. Do you know of any justifications? Probably most consequentialists would just say that it increases total and average utility and leave it at that.
I am not sure what you mean by consequentialist grounds. Feel free to expand if you can.
I am actually writing something on the topic that we have been discussing. If you are interested I can send it to you when it is submittable. (This may take several months.)
Good question; now that I try to explain it, I realize I hadn’t defined “consequentialist” very well.
I have changed my mind and agree with you – the argument for finite anonymity is weaker than I thought. Good to know!
I would be interested to hear your insights on these difficult problems, if you feel like sharing.