I’ll first respond to the first article you linked. The problem I see with this solution is that it violates some combination of completeness and transitivity. Vinding says that, for the list (e′-objects, e-objects, 2e-objects, 3e-objects), 3e-objects are categorically worse than any number of e′-objects, but that some number of e′-objects can be worse than an e-object, which can be worse than a 2e-object, and so on. This runs into an issue.
If we say that 1000 e′-objects are worse than one e-object, that 1000 e-objects are worse than one 2e-object, and that 1000 2e-objects are worse than one 3e-object, then we get the following inequality:

1 trillion e′-objects > 1 billion e-objects > 1 million 2e-objects > 1000 3e-objects.

By transitivity, then, some finite number of e′-objects is worse than 1000 3e-objects, contradicting the claim that 3e-objects are categorically worse than any number of e′-objects.
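To make the arithmetic explicit, here is a minimal sketch (my own illustration, not from Vinding’s article), assuming a uniform 1000:1 exchange rate between adjacent harm levels:

```python
# Hypothetical illustration: assume a uniform 1000:1 tradeoff between
# adjacent harm levels (e', e, 2e, 3e).

RATE = 1000  # assumed exchange rate between adjacent levels

def eprime_equivalent(count: int, levels_up: int) -> int:
    """Number of e'-objects at least as bad as `count` objects `levels_up` levels up."""
    return count * RATE ** levels_up

# Chaining the tradeoffs by transitivity: 1000 3e-objects are outweighed by a
# mere trillion e'-objects, so 3e cannot be categorically worse than e'.
print(eprime_equivalent(1000, 3))  # 1000000000000 (1 trillion)
```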
The fifth example runs into a similar problem to the one addressed in this post. We can just apply the calculation at the level of populations. Surely inflicting 10000 units of pain on one person is less bad than inflicting 9999 units of pain on 10^100^100 people.
The second article that you linked runs into a similar problem. It says that what matters is the experience rather than the temperature—thus, it claims that steadily lowering the temperature and asking the NU at what point they’d pinpoint a firm threshold is misleading. However, we can use units of pain rather than temperature. While it’s hard to precisely quantify units of pain, we have an intuitive grasp of very bad pain, and we can similarly grasp the pain being lessened slightly.
Next, Vinding argues that consent views avoid this problem. Consent views run into several issues:
1. Contrary to Vinding’s indication, there is in fact a firm point at which people no longer consent. For any person, if offered a googolplex utils per second of torture, there is a firm point at which they would stop consenting. Consent views would then have to say that misery slightly above this threshold categorically outweighs misery slightly below it.
2. Consent views seem to tie the badness of pain to weakness of will (more specifically, to people’s willingness to endure pain, independent of how bad the pain is). For example, suppose a strict negative utilitarian holds that no amount of pain is worth any amount of pleasure. This person would never consent. However, it seems wrong to say that a pinprick for this person is considerably worse than a pinprick for someone else who experiences the same amount of pain.
3. It seems we can at least imagine a type of pleasure which one would not consent to ceasing. A person experiencing unfathomable joy might be willing to endure future torture for one more moment of bliss. Thus, this view seems to imply caring about pleasure as well. Love songs often express a willingness to endure anything for another moment with the beloved. However, it seems strange to say that love is lexically better than all other goods.
Next, the repugnant conclusions of positive utilitarianism are presented, including creating hell to please the blissful. This is a bullet I’m very happy to bite. Much like it would be reasonable to, from the perspective of a person, endure temporary misery for greater joy, on a population level, it would be reasonable to inflict misery for greater joy. I perceive ethics as being about what an egoist would do if they experienced the sum total of human experience—from this perspective it does not seem particularly counterintuitive. Additionally, as I argued in the article, we suck at multiplying. Hypotheticals involving vast numbers melt our intuitions.
Additionally, as I argue here, many of our intuitions that go against positive utilitarianism crumble upon reflection. I’ll try to argue against the creating-hell objection. Presumably a pinprick to please the blissful would be permissible, given enough blissful people. As the argument I’ve presented shows, enough pinpricks are as bad as hell. Thus, with enough blissful people, their pleasure is worth hell.
I agree that we should be careful to consider the full implications of “all else equal”; however, I don’t think that refutes any part of the argument I’ve presented. When people experience joy, even when they’re not suffering at all, they regard more joy as desirable.
You argue for axiological monism, but that seems fully consistent with utilitarianism. Much like there are positive and negative numbers, there are positive and negative experiences. It seems that we regard positive experiences as good, much as we regard negative experiences as bad.
It seems very strange for well-being to be morally neutral but suffering to be morally bad. If you were imagining a world before having had any experiences, it seems clear you would expect the enjoyable experiences to be good and the unenjoyable experiences to be bad. Evolutionarily, the reason we suffer is to deter actions, while the reason we feel pleasure is to encourage actions. Whatever mechanism makes suffering bad, a similar mechanism would seem to make well-being good.
Thanks for the comment, you’ve given me some interesting things to consider!
Much like it would be reasonable to, from the perspective of a person, endure temporary misery for greater joy, on a population level, it would be reasonable to inflict misery for greater joy. I perceive ethics as being about what an egoist would do if they experienced the sum total of human experience—from this perspective it does not seem particularly counterintuitive.

Interesting, I have the exact opposite intuition. I.e., to the extent that it seems to me clearly wrong to inflict misery on some people to add (non-palliative) joy to others, I conclude that I shouldn’t endorse my intuitions about egoistically being willing to accept my own great misery for more joy. Though, such intuitions aren’t really strong for me in the first place. And when I imagine experiencing the sum total of sentient experience, my inclination to prioritize suffering gets stronger, not weaker.
ETA: A lot of disagreements about axiology and population ethics have this same dynamic. You infer from your intuition that pinpricks to please the blissful is acceptable that we can scale this up to torture to (more intensely) please many more blissful people. I infer from the intuitive absurdity of the latter that maybe we shouldn’t think it’s so obvious that pinpricks to please the blissful is good. I don’t know how to adjudicate this, but I find that people who dismiss NU pretty consistently seem to assume their choice in “modus ponens vs reductio ad absurdum” is the obvious one.
That’s interesting; it seems very obvious to me that it’s worth enduring very brief misery for greater overall pleasure. Surely, if given the option to experience 10 seconds of intense misery for a trillionfold increase in your future joy, it would be worth it. However, even if you conclude it wouldn’t, that doesn’t address the main objection I gave in this post.
What do you consider the main objection?
The one I explained in the post starting with “This view runs into a problem.”
Got it. That objection doesn’t apply to purely additive NU, which I’m more sympathetic to and which you dismissed as “trivially false.” Basically my response to your argument there is: If these googolplex “utils” are created de novo or provided to beings who are already totally free from suffering, including having no frustrated desire for the utils, why should I care about their nonexistence when pain is at stake—even mild pain?
More than that, though, while I understand why you find the pinprick conclusion absurd, my view is that the available alternatives are even worse: either accepting a lexical threshold vulnerable to your continuity argument, or accepting that any arbitrarily horrible degree of suffering can be morally outweighed by enough happiness (or anything else). When I reflect on just how bad “arbitrarily horrible” can get, indeed even just reflecting on bad experiences for which there exist happy experiences of matching or greater intensity, I have to say that last option seems more absurd than pure NU’s flaws. Pure NU seems like the least-bad way to reconcile continuity with the intuition I notice from that reflection.
(I prefer not to go much further down this rabbit hole because I’ve had this same debate many times, and it unfortunately just seems to keep coming down to bedrock intuitions. I also have mixed thoughts on the sign of value spreading. Suffice it to say I think it’s still valuable to give some information about why some of us don’t find pure NU trivially false. If you’re curious for more details, I recommend Section 1 of this post I wrote, this comment, and Tomasik’s “Three Types of Negative Utilitarianism.” Incidentally I’m working on a blog post responding to your last objection, the error theory based on empirical asymmetries and scope neglect. The “even just reflecting on bad experiences for which there exist happy experiences of matching or greater intensity” thing I said above is a teaser. Happy to share when it’s finished!)
Okay. One question would be whether you share my intuitions in the case I posed to Brian Tomasik. For reference, here it is: “Hmm, this may be a case of divergent intuitions, but to me it seems very obvious that if we could make it so that at the end of people’s lives they have an experience of unfathomable bliss right before death, containing more well-being than the sum total of all positive experiences that humans have experienced so far, at the cost of one pinprick, it would be extremely good to do so. In this case it avoids the objection that well-being is only desirable instrumentally, because this is a form of well-being that would otherwise not even have been considered. That seems far more obvious than any more specific claims about the amount of well-being needed to offset a unit of suffering, particularly because of the trickiness of intuitions dealing with very large numbers.”
Before reflection, sure, that seems like a worthy trade.
But the trichotomy posed in “Three Types of NU,” which I noted in the second paragraph of my last comment, seems inescapable. Suppose I accept it as morally good to inflict small pain along with lots of superhappiness, and reject lexicality (though I don’t think this is off the table, despite the continuity arguments). Then I’d have to conclude that any degree of horrible experience has its price. That doesn’t just seem absurd; it flies in the face of what ethics just is to me. Sufficiently intense suffering just seems morally serious in a way that nothing else is. If that doesn’t resonate with you, I’m stumped.
Well, I think I grasp the force of the initial intuition; I just abandon it upon reflection. I have a strong intuition that extreme suffering is very, very bad. I don’t have the intuition that its badness can’t be outweighed by anything else, regardless of what the other thing is.
Here’s the post I said I was writing, in my other comment.
Thanks. :) When I imagine moderate (not unbearable) pains versus moderate pleasures experienced by different people, my intuition is that creating a small number of new moderate pleasures that wouldn’t otherwise exist doesn’t outweigh a single moderate pain, but there’s probably a large enough number (maybe thousands?) of newly created moderate pleasures that outweighs a moderate pain. I guess that would imply weak NU using this particular thought experiment. (Other thought experiments may yield different conclusions.)
Why think pleasure and suffering are measurable on the same hedonistic scale? They use pretty different parts of the brain. People can make preference-based tradeoffs between anything, so the fact that they make tradeoffs between pleasure and suffering doesn’t clearly establish that there’s a single hedonistic scale.
For further related discussion, see some writing by Adam Shriver:
https://library.oapen.org/bitstream/handle/20.500.12657/50994/9783731511083.pdf?sequence=1#page=285
https://link.springer.com/article/10.1007/s13164-013-0171-2
The problem I see with this solution is that it violates some combination of completeness and transitivity.

Or just additivity/separability. One such view is rank-discounted utilitarianism:
Maximize $\sum_{i=0}^{N} r^i u_i$, where the $u_i$ represent utilities of individual experiences or total life welfare and are sorted increasingly (non-decreasingly), $u_i \le u_{i+1}$, and $0 < r < 1$. A strict negative version might assume $u_i \le 0$.
In this case, there are many thresholds, and they depend on others’ utilities and $r$.
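Here is a minimal sketch of the rank-discounted total just defined; the utilities and `r` below are illustrative assumptions, not values from the comment:

```python
# Rank-discounted total: weight each utility by r**rank, worst-off first.

def rdu_total(utilities, r):
    """Sum of r**i * u_i, with utilities sorted increasingly (worst first)."""
    assert 0 < r < 1
    return sum(r**i * u for i, u in enumerate(sorted(utilities)))

# The worst utility gets full weight; each better-ranked one is discounted more.
print(rdu_total([2, -1, -5, -1], r=0.5))
# -5*1 + (-1)*0.5 + (-1)*0.25 + 2*0.125 = -5.5
```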
For what it’s worth, I think such views have pretty counterintuitive implications, e.g. they reduce to ethical egoism under the possibility of solipsism, or they reduce to maximin in large worlds (without uncertainty). This might be avoidable in practice if you reject the independence of irrelevant alternatives and only consider those affected by your choices, because both arguments depend on there being a large “background” population. Or if you treat solipsism like moral uncertainty and don’t just take expected values right through it. Still, I don’t find either of these solutions very satisfying, and I prefer to strongly reject egoism. Maximin is not extremely objectionable to me, although I would prefer mostly continuous tradeoffs, including some tradeoffs between number and intensity.
Sorry, I’m having a lot of trouble understanding this view. Could you try to explain it simply, in a non-mathematical way? I have awful mathematical intuition.
For a given utility $u$, adding more individuals or experiences with $u$ as their utility has a marginal contribution to the total that decreases towards 0 with the number of these additional individuals or experiences. And while the marginal contribution never actually reaches 0, it decreases fast enough towards 0 (at a rate $r^i$, $0 < r < 1$) that the contribution of even infinitely* many of them is finite. Since it is finite, it can be outweighed. So, even infinitely many pinpricks are only finitely bad, and some large enough finite number of worse harms (each equally bad) must be worse overall (although still finitely bad). In fact, the same is true for any two bads with different utilities: some large enough but finite number of the worse harm will outweigh infinitely many of the lesser harm. So, you get this kind of weak lexicality everywhere, and every bad is weakly lexically worse than any lesser bad. No thresholds are needed.
In mathematical terms, for any $v < u \le 0$, there is some (finite) $N$ large enough that

$$\sum_{i=0}^{N} r^i v = v \cdot \frac{1 - r^{N+1}}{1 - r} < u \cdot \frac{1}{1 - r} = \sum_{i=0}^{\infty} r^i u,$$

because the limit (or infimum) in $N$ of the left-hand side of the inequality is lower than the right-hand side and decreasing, so it has to eventually be lower for some finite $N$.
*countably
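A quick numerical check of the inequality above, with illustrative (assumed) values $r = 0.9$, $u = -1$, $v = -2$:

```python
# With v < u <= 0, finitely many v-level harms eventually outweigh
# infinitely many u-level harms under rank discounting.

r, u, v = 0.9, -1.0, -2.0

def finite_sum(w, n):
    # sum_{i=0}^{n} r**i * w, via the closed form w * (1 - r**(n+1)) / (1 - r)
    return w * (1 - r**(n + 1)) / (1 - r)

infinite_u = u / (1 - r)  # value of infinitely many u's: -10.0

n = 0
while finite_sum(v, n) >= infinite_u:
    n += 1
# First n where n+1 copies of v are worse (lower) than infinitely many u's:
print(n, finite_sum(v, n))  # 6 -10.434...
```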
Okay, I’m still a bit confused by it, but the objections you’ve given still apply: it converges to egoism or maximin in large worlds. It also has the strange implication that the badness of a person’s suffering depends on background conditions about other people. Parfit had a reply to views like this, the Egyptology objection, I believe: it makes the number of people who suffered in ancient Egypt relevant to current ethical considerations, which seems deeply counterintuitive. I’m sufficiently confused about the math that I can’t really comment on how it avoids the objection that I laid out, but if it has lexicality everywhere, that seems especially counterintuitive—if I understand this, no single type of suffering can be outweighed by large amounts of lesser suffering.
The Egyptology objection can be avoided by applying the view only to current and future (including potential) people, or only to people otherwise affected by your choices. Doing the latter can also avoid objections based on faraway populations living at the same time or in the future, and reduce (but maybe not eliminate) the convergence to egoism and maximin. However, I think that would also require giving up the independence of irrelevant alternatives (as person-affecting views often do), so that which of two options is best can depend on what other options are available. For what it’s worth, I don’t find this counterintuitive.
if it has lexicality everywhere, that seems especially counterintuitive—if I understand this, no single type of suffering can be outweighed by large amounts of lesser suffering.

It seems intuitive to me at least for sufficiently distant welfare levels, although it’s a bit weird for very similar welfare levels. If welfare were discrete and the gaps between welfare levels were large enough (which seems probably false), then this wouldn’t be weird to me at all.
Does your view accept lexicality for very similar welfare levels?
I was sympathetic to views like rank-discounted (negative) utilitarianism, but not since seeing the paper on the convergence with egoism, and I haven’t found a satisfactory way around it. Tentatively, I lean towards negative prioritarianism/utilitarianism or negative lexical threshold prioritarianism/utilitarianism (but still strictly negative, so no positive welfare), or something similar, maybe with some preference-affecting elements.
Should the right-hand-side sum start at $i = N+1$ rather than $i = 0$, because the utilities at level $v$ occupy the $i = 0$ to $i = N$ slots?
Not in the specific example I’m thinking of, because I’m imagining either the $u$’s happening or the $v$’s happening, but not both (and ignoring other unaffected utilities, but the argument is basically the same if you count them).