The Egyptology objection can be avoided by applying the view only to current and future (including potential) people, or only to people otherwise affected by your choices. Doing the latter also avoids objections based on faraway populations living at the same time or in the future, and reduces (but maybe doesn’t eliminate) the convergence to egoism and maximin. However, I think that would also require giving up the independence of irrelevant alternatives (as person-affecting views often do), so that which of two options is best can depend on what other options are available. For what it’s worth, I don’t find this counterintuitive.
If it has lexicality everywhere, that seems especially counterintuitive: if I understand this correctly, no type of suffering could be outweighed by large amounts of milder suffering, however small the gap between them.
It seems intuitive to me, at least for sufficiently distant welfare levels, although it’s a bit weird for very similar welfare levels. If welfare were discrete and the gaps between welfare levels were large enough (though that seems unlikely), then this wouldn’t be weird to me at all.
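To illustrate (just my own rough formalization, nothing standard): suppose there’s some gap size $d$ such that, whenever welfare level $a$ is sufficiently worse than welfare level $b$, one experience at $a$ is worse than any number of experiences at $b$:

$$b - a \ge d \;\Rightarrow\; (a) \prec (\underbrace{b, \dots, b}_{n}) \quad \text{for all } n,$$

where $\prec$ means “is worse than”. Lexicality everywhere would be the case where this holds for every gap $d > 0$, which is the part that seems hard to accept for very similar levels.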
I was sympathetic to views like rank-discounted (negative) utilitarianism, but I haven’t been since seeing the paper on its convergence with egoism, and I haven’t found a satisfactory way around that result. Tentatively, I lean towards negative prioritarianism/utilitarianism or negative lexical threshold prioritarianism/utilitarianism (but still strictly negative, so no positive welfare), or something similar, maybe with some preference-affecting elements.
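For concreteness, a rough sketch of the kind of functional forms I have in mind (my own illustrative notation, counting only negative welfare, since positive welfare doesn’t count on these views): negative prioritarianism ranks outcomes by something like

$$W = \sum_i g(\min(u_i, 0)),$$

with $g$ increasing and strictly concave, so that reducing the suffering of the worse off counts for more than equal-sized reductions for the better off (taking $g$ to be the identity gives plain negative utilitarianism). A lexical threshold version adds a threshold $t < 0$: compare outcomes first by the suffering at welfare levels below $t$, and only use the suffering at levels between $t$ and $0$ to break ties.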
Does your view accept lexicality for very similar welfare levels?