Hi Stijn! Thanks for writing this—I completely agree that getting population ethics on surer footing is an important issue for the EA community. And I agree with your diagnosis that it’s super difficult.
I’m wondering if there is a similar dominance argument you could apply to John Broome’s argument against the “Intuition of Neutrality”. Basically, imagine we’re in some World A, where Worlds B and C are available. They differ only in that one extra person exists, with utility within this range of indifference (“neutrality” in Broome’s words) in both worlds; however, in World C her utility is higher than in B. Standing at the vantage point of A, we’re indifferent between B and C (since her utility doesn’t count in either); but B is dominated by C, in that this new person has a better life in C and no one else is affected. I’m very curious whether layering the ‘range of indifference’ with a dominance criterion can be shown to always escape conclusions like the one above, where C > B when the two are compared to one another, but C ~ B when compared from A.
Apologies if that wasn’t clear! And perhaps your dynamic consistency problem is analogous… I just didn’t see it as immediately obvious and haven’t spent enough time thinking through the details. Thanks again for writing such a detailed post on this!
Hi Kevin,
Thanks for the comment. My theory mostly violates that neutrality principle: all else equal, adding a person with negative welfare to the world is bad; adding a person with welfare higher than a threshold T is good; and in the theory’s lexical extension, adding a person with welfare between 0 and T is also good. (The lexical extension says that if two states are equally good in terms of the total welfare excluding the welfare of possible people between 0 and T, then the state with the highest total welfare, including that of all possible people, is the best.)
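To make this concrete, here is a small sketch of how that two-tier comparison could be formalized (the threshold value, world representations and welfare numbers are purely illustrative, not part of the theory itself):

```python
T = 10  # illustrative critical threshold (assumed value)

def key(world, necessary):
    """Two-tier sort key for a world (a dict mapping people to welfare).
    First tier: total welfare excluding 'possible' people whose welfare
    lies strictly between 0 and T. Second tier (lexical tie-break):
    total welfare of everyone. Higher tuples are better."""
    first = sum(w for person, w in world.items()
                if person in necessary or w <= 0 or w >= T)
    second = sum(world.values())
    return (first, second)

# World A contains only the necessary person p; B and C add a possible
# person q with welfare inside (0, T), higher in C than in B.
A = {"p": 20}
B = {"p": 20, "q": 3}
C = {"p": 20, "q": 7}
necessary = {"p"}

print(key(A, necessary))  # (20, 20)
print(key(B, necessary))  # (20, 23)
print(key(C, necessary))  # (20, 27)
```

All three worlds tie at the first tier, so the lexical tie-break kicks in and ranks C above B above A, matching the claim that adding a person with welfare between 0 and T is good in the lexical extension.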
There is indeed an apparent intransitivity in my theory, but it is not a real or serious intransitivity, as it is avoided in the same way as the dynamic inconsistency is avoided: by considering the choice sets. So worlds A, B and C are equally good when you consider the full choice set {A,B,C}, but once that extra person is added, the choice set reduces to {B,C}, and then C is better than B (the extra person becomes a necessary person in choice set {B,C}). The crucial point is that the ‘better than’ relation depends on the choice set, the set of all available states. This excludes the serious ‘money pump’ intransitivities. In the full choice set {A,B,C}, I am indifferent between A and B, so I’m willing to switch from A to B. I then prefer C over B (because the extra person has a higher welfare in C), and hence I’m willing to pay to switch from B to C. But since the choice set has now been reduced to {B,C}, after choosing C I can no longer switch back to A, even though I was initially indifferent between C and A. In the lexical extension of my theory, I would end up with world C.
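The choice-set dependence can be sketched in code as well. This is my own illustrative formalization: I assume a person counts as necessary relative to a choice set iff she exists in every available world, and I use a made-up threshold and made-up welfare numbers.

```python
T = 10  # illustrative critical threshold (assumed value)

def first_tier(world, choice_set):
    """First-tier value of a world, relative to a choice set (a dict of
    named worlds). Only people who exist in every available world are
    'necessary'; possible people with welfare strictly between 0 and T
    are excluded from the sum."""
    necessary = set.intersection(*(set(w) for w in choice_set.values()))
    return sum(v for person, v in world.items()
               if person in necessary or v <= 0 or v >= T)

A = {"p": 20}
B = {"p": 20, "q": 3}
C = {"p": 20, "q": 7}

# Full choice set: q is merely possible (absent in A), so her welfare
# in (0, T) is ignored and all three worlds tie at the first tier.
full = {"A": A, "B": B, "C": C}
print([first_tier(w, full) for w in full.values()])       # [20, 20, 20]

# Reduced choice set: q exists in both remaining worlds, becomes
# necessary, and her welfare now counts, so C beats B.
reduced = {"B": B, "C": C}
print([first_tier(w, reduced) for w in reduced.values()]) # [23, 27]
```

So the same pair of worlds B and C compare as equals in {A,B,C} but as C > B in {B,C}, which is exactly the choice-set relativity that blocks the money pump.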
Thanks! Makes sense.