The problem I see with this solution is that it violates some combination of completeness and transitivity.
Or just additivity/separability. One such view is rank-discounted utilitarianism:
Maximize $\sum_{i=0}^{N} r^i u_i$, where the $u_i$ represent utilities of individual experiences or total life welfare and are sorted in non-decreasing order, $u_i \le u_{i+1}$, with $0 < r < 1$. A strictly negative version might assume $u_i \le 0$.
In this case, there are many thresholds, and they depend on others’ utilities and $r$.
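For concreteness, here is a minimal sketch of that objective (the function name and sample values are my own illustration, not from any particular source): sort the utilities from worst to best and weight the $i$-th worst by $r^i$.

```python
def rank_discounted_total(utilities, r):
    """Rank-discounted total: sort utilities from worst to best and
    weight the i-th entry (0-indexed) by r**i, with 0 < r < 1."""
    assert 0 < r < 1
    return sum((r ** i) * u for i, u in enumerate(sorted(utilities)))

# Each additional equally-bad harm contributes less: the i-th copy of -1
# adds r**i, which shrinks toward 0 as i grows.
print(rank_discounted_total([-1.0] * 4, 0.5))  # -1 - 0.5 - 0.25 - 0.125 = -1.875
```

Because the weights are attached to ranks rather than to individuals, the marginal disvalue of one more harm at a given level depends on how many harms at or below that level already exist.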
For what it’s worth, I think such views have pretty counterintuitive implications, e.g. they reduce to ethical egoism under the possibility of solipsism, or they reduce to maximin in large worlds (without uncertainty). This might be avoidable in practice if you reject the independence of irrelevant alternatives and only consider those affected by your choices, because both arguments depend on there being a large “background” population. Or if you treat solipsism like moral uncertainty and don’t just take expected values right through it. Still, I don’t find either of these solutions very satisfying, and I prefer to strongly reject egoism. Maximin is not extremely objectionable to me, although I would prefer mostly continuous tradeoffs, including some tradeoffs between number and intensity.
Sorry, I’m having a lot of trouble understanding this view. Could you try to explain it simply, in a non-mathematical way? I have awful mathematical intuition.
For a given utility $u$, each additional individual or experience at utility $u$ makes a marginal contribution to the total that shrinks towards 0 as their number grows. The marginal contribution never actually reaches 0, but it shrinks fast enough (the $i$-th copy contributes at rate $r^i$, with $0<r<1$) that the contribution of even infinitely* many of them is finite. Since it is finite, it can be outweighed. So even infinitely many pinpricks are only finitely bad, and some large enough finite number of worse harms (each equally bad) must be worse overall (although still finitely bad). In fact, the same holds for any two bads with different utilities: some large enough but finite number of the worse harm will outweigh infinitely many of the lesser harm. This gives you a kind of weak lexicality everywhere: every bad is weakly lexically worse than any lesser bad. No thresholds are needed.
In mathematical terms, for any $v < u \le 0$, there is some (finite) $N$ large enough that

$$\sum_{i=0}^{N} r^i v = v \, \frac{1 - r^{N+1}}{1 - r} < u \, \frac{1}{1 - r} = \sum_{i=0}^{\infty} r^i u,$$

because the left-hand side is decreasing in $N$ and its limit (infimum), $v/(1-r)$, is strictly lower than the right-hand side, so the left-hand side must eventually fall below the right-hand side at some finite $N$.

*countably
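As a rough numerical check of this claim (a sketch; the function name and the example values are my own illustration): compare a running finite sum of the worse harm $v$ against the closed form $u/(1-r)$ for infinitely many copies of the lesser harm $u$.

```python
def find_outweighing_N(v, u, r):
    """Smallest N such that N+1 rank-discounted copies of the worse harm v
    fall below the rank-discounted value of infinitely many copies of u.
    Requires v < u <= 0 and 0 < r < 1."""
    assert v < u <= 0 and 0 < r < 1
    infinite_u = u / (1 - r)        # limit of the geometric series sum(r**i * u)
    total, N = 0.0, 0
    while True:
        total += (r ** N) * v       # add the (N+1)-th copy of v
        if total < infinite_u:
            return N
        N += 1

# Finitely many harms of -2 outweigh infinitely many harms of -1 (r = 0.9):
print(find_outweighing_N(-2.0, -1.0, 0.9))  # N = 6, i.e. 7 copies of -2 suffice
```

The loop terminates exactly because the running total decreases towards $v/(1-r)$, which is strictly below $u/(1-r)$.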
Okay, I’m still a bit confused by it, but the objections you’ve given still apply: it converges to egoism or maximin in large worlds. It also has a strange implication that the badness of a person’s suffering depends on background conditions about other people. Parfit had an objection to this kind of dependence, the Egyptology objection I believe: it makes the number of people who suffered in ancient Egypt relevant to current ethical considerations, which seems deeply counterintuitive. I’m sufficiently confused about the math that I can’t really comment on how it avoids the objection that I laid out, but if it has lexicality everywhere, that seems especially counterintuitive: if I understand correctly, no type of suffering can be outweighed by any amount of lesser suffering.
The Egyptology objection can be avoided by applying the view only to current and future (including potential) people, or only to people otherwise affected by your choices. Doing the latter can also avoid objections based on faraway populations living at the same time or in the future, and reduce (but maybe not eliminate) the convergence to egoism and maximin. However, I think that would also require giving up the independence of irrelevant alternatives (like person-affecting views often do), so that which of two options is best can depend on what other options are available. For what it’s worth, I don’t find this counterintuitive.
if it has lexicality everywhere, that seems especially counterintuitive: if I understand correctly, no type of suffering can be outweighed by any amount of lesser suffering.
It seems intuitive to me at least for sufficiently distant welfare levels, although it’s a bit weird for very similar welfare levels. If welfare were discrete, and the gaps between welfare levels were large enough (which seems probably false), then this wouldn’t be weird to me at all.
Does your view accept lexicality for very similar welfare levels?
I was sympathetic to views like rank-discounted (negative) utilitarianism, but not since seeing the paper on its convergence with egoism, and I haven’t found a satisfactory way around it. Tentatively, I lean towards negative prioritarianism/utilitarianism or negative lexical threshold prioritarianism/utilitarianism (still strictly negative, so no positive welfare), or something similar, maybe with some preference-affecting elements.
Should the right-hand-side sum start at $i=N+1$ rather than $i=0$, because the utilities at level $v$ occupy the $i=0$ to $i=N$ slots?
Not in the specific example I’m thinking of, because I’m imagining either the $u$’s happening or the $v$’s happening, but not both (and ignoring other unaffected utilities, but the argument is basically the same if you count them).