My main objections are:
The Independence of Irrelevant Alternatives seems possibly false. Whether the move from A to A+ improves things depends on whether or not B is available. If B is available, then A>A+, and otherwise A<A+ (assuming no other options). The creation of the extra people in A+ creates further obligations if and only if there are options available in which they are better off. The extra people are owed more if and only if they could have been better off (possibly in a wide/nonidentity sense). This cuts against Dominance Addition and Transitivity, as stated, although note that rejecting IIA is compatible with transitivity within each option set, and DA in all option sets of size 2 and many others, just not all option sets.
The aggregation required in Non-anti-egalitarianism seems possibly false. It would mean it’s better to bring someone down from a very high welfare life to a marginal life for a barely noticeable improvement to arbitrarily many other individuals. Maybe this means killing or allowing to die earlier to marginally benefit a huge number of people. Even from a selfish POV, everyone may be willing to pay the tiny cost for even a tiny and almost negligible chance of getting out of the marginal range (although they would need to be “risk-loving” or there would need to be value lexicality). The aggregation also seems plausibly false within a life: suppose each moment of your very long life is marginally good, except for a small share of excellent moments. Would you replace your excellent moments with marginally good ones to marginally improve all of the marginal ones and marginally increase your overall average and total welfare?
The aggregation required in NAE depends on welfare being cardinally measurable on a common scale across all individuals so that we can take sums and averages, which seems possibly false.
It’s assumed there is positive welfare and net positive lives, contrary to negative axiologies, like antifrustrationism or negative utilitarianism. Under negative axiologies, the extra lives are either already perfect, so the move from A+ to B doesn’t make sense, or they are very bad to add in aggregate, plausibly outweighing the benefits to the original population in A of moving to A+, so A>A+. (Of course, there may be other repugnant conclusions for negative axiologies.)
Also, I just remembered that the possibility of positive value lexicality means that these three axioms alone do not imply the RC, because Non-anti-egalitarianism wouldn’t be applicable, as the unequal population with individuals past the lexical threshold has greater total and average welfare (in a sense, infinitely more). You need to separately rule out this kind of lexicality with another assumption (e.g. welfare is represented by (some subset of) the real numbers with the usual order and the usual operation of addition used to take the total and average), or replace NAE with something slightly different.
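As a concrete illustration of how lexicality blocks the totals comparison, here is a toy two-tier welfare representation (entirely my own construction; the tiers and numbers are illustrative assumptions, not part of the original argument):

```python
# Toy model of positive value lexicality: welfare is a pair
# (higher_tier, lower_tier), added componentwise and compared
# lexicographically, so any higher-tier welfare dominates any
# amount of lower-tier welfare.

def add(w1, w2):
    return (w1[0] + w2[0], w1[1] + w2[1])

def total(population):
    t = (0, 0)
    for w in population:
        t = add(t, w)
    return t

# One person past the lexical threshold vs. arbitrarily many
# marginally good lives.
unequal = [(1, 0)]               # a single life above the threshold
marginal = [(0, 1)] * 10**6      # a million barely-good lives

# Python tuples compare lexicographically, which captures the
# "infinitely more" total welfare of the unequal population:
assert total(unequal) > total(marginal)   # (1, 0) > (0, 1000000)
```

Under such a representation, no number of marginal lives ever catches up, so the totals-and-averages comparison that NAE relies on never gets off the ground.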
EDIT: Replaced “plausibly” with “possibly” to be clearer.
Re 2: I don’t think the standard issues with pure aggregation can be appealed to in the strongest version of non-anti-egalitarianism, because you can add arbitrarily many steps so that in each step the pools of beneficiaries and harmed are equal in size and the per-individual stakes of the beneficiaries are greater. Transitivity generally does imply pure aggregation for reasons like this, so it seems like in this case you’d want to deny transitivity (or, again, IIA) instead, or else you’ll need to make a stronger and apparently costlier claim about how to trade off interests that isn’t unique to pure aggregation.
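The decomposition in question can be sketched with illustrative numbers (all of them my own assumptions, chosen only to keep the arithmetic exact):

```python
# One person loses L welfare units in total while N others each
# gain g units, with N * g > L: a pure-aggregation trade.
# Integer units keep the arithmetic exact.
L, N, g = 100_000, 1_000, 200
assert N * g > L                 # the aggregate trade favors the many

# Decompose into N one-for-one steps: in each step the single
# harmed person loses L // N units while a single fresh
# beneficiary gains g units.
per_step_loss = L // N           # 100 units lost per step
assert g > per_step_loss         # each step favors its one beneficiary

# Chaining the N steps by transitivity reproduces the aggregate
# trade, even though every step is an equal-numbers comparison.
assert per_step_loss * N == L and g * N == 200_000
```

Each individual step is an equal-numbers trade in which the beneficiary has more at stake, so anyone who accepts those trades and transitivity seems committed to the aggregate result.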
Interesting! That argument seems right, if you rule out positive value lexicality, so that you can guarantee the beneficiaries can gain more than the harmed lose, and this can all be done in finitely many steps.
Sorry yeah, that was an unstated assumption of mine as well.
Re 2: Your objection to non-anti-egalitarianism can easily be chalked up to scope neglect.
World A—One person with an excellent life plus 999,999 people with neutral lives.
World B—1,000,000 people with just above neutral lives.
Let’s use the veil of ignorance.
Would you prefer a 100% chance of a just above neutral life or a 1 in a million chance of an excellent life with a 99.9999% chance of a neutral life? I would definitely prefer the former.
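The expected-welfare arithmetic behind this comparison can be made explicit; the welfare levels below (excellent = 1000, neutral = 0, just above neutral = 0.01) are purely illustrative assumptions of mine:

```python
# Veil-of-ignorance comparison with illustrative welfare levels.
n = 10**6
excellent, neutral, just_above = 1000.0, 0.0, 0.01

# World A as a lottery: a 1-in-a-million chance of the excellent
# life, otherwise a neutral life.
ev_lottery = (1 / n) * excellent + ((n - 1) / n) * neutral
# World B: a guaranteed just-above-neutral life.
ev_certain = just_above

assert ev_certain > ev_lottery   # 0.01 > 0.001 under these numbers
```

Under these numbers the certain option wins in expectation because 0.01 > 1000/1,000,000; the lottery would only be preferred if the excellent life were worth more than a million times the just-above-neutral increment.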
Here is an alternative argument.
Surely, it would be moral to decrease the wellbeing of a happy person from +1000 to +999 to make 1,000 neutral people 1 unit better off; rejecting this is outrageously implausible.
If the process were repeated 1,000 times, it would be moral to bring the happy person down to neutrality to make a million neutral people 1 unit better off.
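A minimal sketch of the bookkeeping in this two-step argument (the per-step numbers are taken from the argument above):

```python
# Each of the 1,000 steps lowers the happy person's welfare by 1
# and raises a fresh group of 1,000 neutral people by 1 unit.
happy = 1000
people_benefited = 0
welfare_gained = 0

for _ in range(1000):
    happy -= 1                  # the happy person loses 1 unit
    people_benefited += 1000    # a new group of neutral people
    welfare_gained += 1000      # each of them gains 1 unit

assert happy == 0                        # brought down to neutrality
assert people_benefited == 1_000_000     # a million people, as claimed
assert welfare_gained - 1000 == 999_000  # net total welfare still rises
```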
Thanks for all the detailed explanations!
Whether the move from A to A+ improves things depends on whether or not B is available.

In my mind, whether A+ is better than A only depends on the goodness of the difference between them:
1) Increase in wellbeing for 10 M people.
2) Additional 10 M people with positive wellbeing.
I think both 1) and 2) are good, so their combination is also good. Do you consider 1) good, but 2) neutral? If so, I would argue the combination of something good with something neutral is something good.
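A minimal sketch of this decomposition with assumed welfare levels (the 10 M figures come from the comment; the per-person welfare numbers are my own illustrative assumptions):

```python
# Totals for the A -> A+ comparison.
n = 10**7                  # 10 M people in the original population A
a_total = n * 5.0          # assumed total welfare in A (5 per person)

gain_existing = n * 1.0    # 1) each existing person gains 1 unit
new_people = n * 2.0       # 2) 10 M extra people at welfare 2 each

a_plus_total = a_total + gain_existing + new_people
# If 1) is good and 2) is at least neutral (non-negative), the
# combination leaves A+ with strictly greater total welfare than A.
assert a_plus_total > a_total
```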
Would you replace your excellent moments with marginally good ones to marginally improve all of the marginal ones and marginally increase your overall average and total welfare?

I definitely would, but I can see why my intuition would push against that. For a typical life expectancy and magnitude of excellent moments, marginally increasing the goodness of the marginally good moments would probably not be enough to increase total welfare.
The aggregation required in NAE depends on welfare being cardinally measurable on a common scale across all individuals so that we can take sums and averages, which seems possibly false.

I agree a common scale is required. However, at a fundamental level, I believe wellbeing is a function of the laws of physics and elementary particles. So, as all humans are subject to the same laws of physics, and have very similar compositions, I expect there is a (very complicated, unknown) welfare function which applies quite well to almost all individuals.
It’s assumed there is positive welfare and net positive lives, contrary to negative axiologies, like antifrustrationism or negative utilitarianism.

In 2 or 3 surveys in the UK, Kenya and Ghana, most people said they would prefer having lived their lives rather than not having been born (see Chapter 9 of What We Owe to the Future). Naturally, this does not mean that most people have net positive lives, but I would say it is strong evidence that it is possible to have net positive lives (e.g. I think there is at least 1 person in the world with a net positive life). If not in the present, at least in the future.
You need to separately rule out this kind of lexicality with another assumption (e.g. welfare is represented by (some subset of) the real numbers with the usual order and the usual operation of addition used to take the total and average), or replace NAE with something slightly different.

I agree, but I do not think that kind of lexicality is plausible. How can one produce something infinitely valuable from finite resources?