The problem I see with this solution is that it violates some combination of completeness and transitivity.
Or just additivity/separability. One such view is rank-discounted utilitarianism:
Maximize $\sum_{i=0}^{N} r^i u_i$, where the $u_i$ represent the utilities of individual experiences or total life welfare and are sorted in non-decreasing order, $u_i \le u_{i+1}$, and $0 < r < 1$. A strictly negative version might assume $u_i \le 0$.
In this case, there are many thresholds, and they depend on others' utilities and $r$.
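Not from the original comments, but a minimal sketch in Python (the function name, the choice of $r = 0.5$, and the example utilities are mine) may help make the objective above concrete: each additional copy of the same utility gets a smaller weight, so its marginal contribution shrinks towards 0.

```python
def rank_discounted_total(utilities, r=0.5):
    """Rank-discounted sum: sort utilities from worst to best
    (non-decreasing) and weight the i-th ranked utility by r**i."""
    ranked = sorted(utilities)  # u_0 <= u_1 <= ... <= u_N
    return sum((r ** i) * u for i, u in enumerate(ranked))

# With r = 0.5, adding more people at utility -1 contributes less and less:
# the total approaches -1 / (1 - 0.5) = -2 but never reaches it.
for n in (1, 2, 5, 20):
    print(n, rank_discounted_total([-1] * n))
# approximately: -1.0, -1.5, -1.9375, -1.999998...
```

This is also where the dependence on others' utilities and $r$ comes from: the weight a given harm receives depends on where it falls in the overall ranking, which depends on everyone else's utilities.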
For what it's worth, I think such views have pretty counterintuitive implications, e.g. they reduce to ethical egoism under the possibility of solipsism, or they reduce to maximin in large worlds (without uncertainty). This might be avoidable in practice if you reject the independence of irrelevant alternatives and only consider those affected by your choices, because both arguments depend on there being a large "background" population. Or if you treat solipsism like moral uncertainty and don't just take expected values right through it. Still, I don't find either of these solutions very satisfying, and I prefer to strongly reject egoism. Maximin is not extremely objectionable to me, although I would prefer mostly continuous tradeoffs, including some tradeoffs between number and intensity.
Sorry, I'm having a lot of trouble understanding this view. Could you try to explain it simply in a non-mathematical way? I have awful mathematical intuition.
For a given utility $u$, adding more individuals or experiences with $u$ as their utility makes a marginal contribution to the total that decreases towards 0 as the number of these additional individuals or experiences grows. The marginal contribution never actually reaches 0, but it decreases fast enough (at a rate $r^i$, $0<r<1$) that the contribution of even infinitely* many of them is finite. Since it is finite, it can be outweighed. So, even infinitely many pinpricks is only finitely bad, and some large enough finite number of worse harms must be worse overall (although still only finitely bad). In fact, the same is true for any two bads with different utilities: some large enough but finite number of the worse harm will outweigh infinitely many of the lesser harm. So you get this kind of weak lexicality everywhere, and every bad is weakly lexically worse than any lesser bad. No thresholds are needed.
In mathematical terms, for any $v < u \le 0$, there is some (finite) $N$ large enough that

$$\sum_{i=0}^{N} r^i v = v\,\frac{1-r^{N+1}}{1-r} < u\,\frac{1}{1-r} = \sum_{i=0}^{\infty} r^i u,$$

because the left-hand side is decreasing in $N$ and its limit (or infimum) over $N$, $v/(1-r)$, is strictly below the right-hand side, $u/(1-r)$, so it has to fall below the right-hand side for some finite $N$.

*countably
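As a concrete check (my numbers, not from the original comment): take $r = 1/2$, $u = -1$ (the lesser harm) and $v = -2$ (the worse harm). Infinitely many copies of $u$ total $u/(1-r) = -2$, while $N+1$ copies of $v$ total $-4(1 - 2^{-(N+1)})$, which is already $-3 < -2$ at $N = 1$. So here just two of the worse harms outweigh infinitely many of the lesser ones; the required $N$ grows as the gap between $v$ and $u$ shrinks.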
Okay, I'm still a bit confused by it, but the objections you've given still apply, namely it converging to egoism or maximin in large worlds. It also has a strange implication that the badness of a person's suffering depends on background conditions about other people. Parfit had a reply to this, called the Egyptology objection I believe: namely, that this makes the number of people who suffered in ancient Egypt relevant to current ethical considerations, which seems deeply counterintuitive. I'm sufficiently confused about the math that I can't really comment on how it avoids the objection that I laid out, but if it has lexicality everywhere, that seems especially counterintuitive: if I understand this, every single type of suffering can't be outweighed by large amounts of lesser suffering.
The Egyptology objection can be avoided by applying the view only to current and future (including potential) people, or only to people otherwise affected by your choices. Doing the latter can also avoid objections based on faraway populations living at the same time or in the future, and reduce (but maybe not eliminate) the convergence to egoism and maximin. However, I think that would also require giving up the independence of irrelevant alternatives (like person-affecting views often do), so that which of two options is best can depend on what other options are available. For what it's worth, I don't find this counterintuitive.
if it has lexicality everywhere, that seems especially counterintuitive: if I understand this, every single type of suffering can't be outweighed by large amounts of lesser suffering.
It seems intuitive to me at least for sufficiently distant welfare levels, although it's a bit weird for very similar welfare levels. If welfare were discrete, and the gaps between welfare levels were large enough (which seems probably false), then this wouldn't be weird to me at all.
Does your view accept lexicality for very similar welfare levels?

I was sympathetic to views like rank-discounted (negative) utilitarianism, but not since seeing the paper on the convergence with egoism, and I haven't found a satisfactory way around it. Tentatively, I lean towards negative prioritarianism/utilitarianism or negative lexical threshold prioritarianism/utilitarianism (but still strictly negative, so no positive welfare), or something similar, maybe with some preference-affecting elements.
Should the right-hand-side sum start at $i=N+1$ rather than $i=0$, because the utilities at level $v$ occupy the $i=0$ to $i=N$ slots?

Not in the specific example I'm thinking of, because I'm imagining either the $u$'s happening or the $v$'s happening, but not both (and ignoring other unaffected utilities, but the argument is basically the same if you count them).
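To illustrate that answer (again my own sketch, with made-up numbers): both rankings start at $i=0$ because each option is evaluated as a separate outcome, with only the $u$'s in one and only the $v$'s in the other.

```python
r, u, v = 0.5, -1.0, -2.0

# Outcome B: infinitely many experiences at utility u, ranked from i = 0.
# Geometric series: the total is u / (1 - r).
infinite_u_total = u / (1 - r)                        # -2.0

# Outcome A: just two experiences at utility v, also ranked from i = 0,
# since in this comparison the v's happen instead of the u's.
finite_v_total = sum((r ** i) * v for i in range(2))  # -2.0 + -1.0 = -3.0

print(finite_v_total < infinite_u_total)  # True: the worse harms outweigh
```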