Re 2: Your objection to non-anti-egalitarianism can easily be chalked up to scope neglect. Consider two worlds:
World A: one person with an excellent life plus 999,999 people with neutral lives.
World B: 1,000,000 people with just-above-neutral lives.
Let’s use the veil of ignorance.
Would you prefer a 100% chance of a just-above-neutral life, or a 1-in-a-million chance of an excellent life combined with a 99.9999% chance of a neutral life? I would definitely prefer the former.
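To make the arithmetic behind the veil explicit, here is a quick calculation under stipulated numbers (my own, purely for illustration): an excellent life at +1000, a neutral life at 0, a just-above-neutral life at +1. The expected wellbeing of the World A lottery versus World B is

\[
\underbrace{\tfrac{1}{10^{6}} \cdot 1000 + \tfrac{999{,}999}{10^{6}} \cdot 0}_{\text{World A}} = 0.001
\qquad \text{vs.} \qquad
\underbrace{1 \cdot 1}_{\text{World B}} = 1,
\]

so under these numbers even a plain expected-value calculation, with no risk aversion at all, favours World B; total wellbeing (1,000 vs. 1,000,000) favours it even more lopsidedly.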
Here is an alternative argument.
Surely, it would be moral to decrease the wellbeing of a happy person from +1000 to +999 to make 1,000 neutral people 1 unit better off; rejecting this is outrageously implausible.
If the process were repeated 1,000 times, each time benefiting a fresh group of 1,000 neutral people, then it would be moral to bring the happy person all the way down to neutrality to make a million neutral people 1 unit better off.
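Spelling out the iteration (using the same +1000 starting figure as above):

\[
1000 - \underbrace{(1 + 1 + \cdots + 1)}_{1000 \text{ steps}} = 0,
\qquad
1000 \text{ steps} \times 1000 \text{ people per step} = 10^{6} \text{ neutral people, each made } 1 \text{ unit better off.}
\]

The endpoint is the happy person at neutrality and a million people one unit above it, which is, near enough, the move from World A to World B.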