One is that views of the “making people happy” variety basically always wind up facing structural weirdness when you formalize them. It was my impression until recently that all of these views imply intransitive preferences (i.e., something like A > B > C > A), until I had a discussion with Michael St Jules in which he pointed out more recent work that instead denies the independence of irrelevant alternatives (IIA).
It depends on whether by valuing “making people happy” one means 1) intrinsically valuing adding happiness to existing people’s lives, or 2) valuing “making them happy” in the sense of relieving their suffering (practically, this is often what happiness does for people). I agree that violations of transitivity or IIA seem inevitable for views of type (1), and that’s pretty bad.
But (2) is an alternative that I think has gotten weirdly sidelined in (EA) population axiology discourse. If some person is completely content and has no frustrated desires (state A), I don’t see any moral obligation to make them happier (state B), so I don’t violate transitivity by saying the world is made no better by adding a person in state A and also no better by adding a person in state B. I suspect lots of people’s “person-affecting” intuitions really boil down to the intuition that preferences that don’t exist—and will not exist—have no need to be fulfilled, as you allude to in your last big paragraph:
A frustrated interest exists in the timeline it is frustrated in, and so any ethics needs to care about it. A positive interest (i.e. having something even better than an already good or neutral state) does not exist in a world in which it isn’t brought about, so it doesn’t provide reasons to that world in the same way