I think it is generally worth seeing population ethics scenarios (like the repugnant conclusion) as intuition pumps for some principle or another. The core engine of the repugnant conclusion is (roughly) the counter-intuitive implications of how a lot of small things can outweigh a large thing. Thus a huge multitude of ‘slightly better than not’ lives can outweigh a few very blissful ones (or, turning the screws as Arrhenius does: for any number of blissful lives, there is some—vastly larger—number of ‘slightly better than not’ lives for which it would be worth making these blissful lives terrible).
Yet denying that lives can ever go better than neutral (counter-intuitive to most—my life isn’t maximally good, but I think it is pretty great and better than nothing) may evade the repugnant conclusion, but it doesn’t avoid this core engine of ‘lots of small things can outweigh a big thing’. Among a given (pre-existing, so possessing actual interests, not that this matters much) population, it can be worth torturing a few of its members to avert sufficiently many pin-pricks/minor thwarted preferences among the rest.
I also think negative-leaning views (especially stronger ‘you can’t do better than nothing’ ones, as suggested here) generally fare worse with population ethics paradoxes, as we can construct examples which not only share the core engine driving things like the repugnant conclusion, but are amplified further by the counter-intuitive aspects of the negative view in question.
E.g. (and owed to Carl Shulman): suppose A is a vast population (say TREE(9), whatever) of people who are much happier than we are now, and live lives of almost-perfect preference satisfaction, but for a single mild thwarted preference (say they have to wait in a queue, bored, for an hour before they get into heaven). Now suppose B is a vast (but vastly smaller, say merely 10^100) population living profoundly awful lives. The view outlined in the OP above seems to recommend B over A (as the many small thwarted preferences among those in A can trade off against each awful life in B), and generally that any number of horrendous lives can be outweighed if you can abolish a slightly imperfect utopia of sufficient size, which seems to go (wildly!) wrong both in the determination and the direction (as A gets larger and larger, B becomes a better and better alternative).
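Under the simple ‘sum the thwarted preferences’ reading this comparison turns on, the arithmetic can be sketched as follows (a toy model; 10^120 is a hypothetical stand-in for TREE(9), which is far too large to represent directly):

```python
# Toy model: a population's welfare is minus the total weight of its
# thwarted preferences (satisfied preferences carry no positive weight,
# per the 'only thwarted preferences count' view).
def total_welfare(n_people: int, thwarted_per_person: float) -> float:
    return -n_people * thwarted_per_person

# Population A: enormous (stand-in for TREE(9)), one mild thwarted
# preference each (the hour of boredom in the queue).
A = total_welfare(10**120, 1)

# Population B: "merely" 10^100 people, but profoundly awful lives.
B = total_welfare(10**100, 10**6)

# The aggregative view ranks B above A, however awful B's lives are,
# and the gap only widens as A grows.
print(B > A)  # True
```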
The core engine of the repugnant conclusion is (roughly) the counter-intuitive implications of how a lot of small things can outweigh a large thing.
I disagree that this is the core engine. I know lots of people who find the repugnant conclusion untenable, while they readily bite the bullet in “dust specks vs. torture”.
I think the part that’s the most unacceptable about the repugnant conclusion is that you go from an initial paradise where all the people who exist are perfectly satisfied (in terms of both life goals and hedonics) to a state where there’s suffering and preference dissatisfaction. A lot of people have the intuition that creating new happy people is not in itself important. That’s what the repugnant conclusion runs against.
I think the part that’s the most unacceptable about the repugnant conclusion is that you go from an initial paradise where all the people who exist are perfectly satisfied (in terms of both life goals and hedonics) to a state where there’s suffering and preference dissatisfaction.
I hesitate to exegete intuitions, but I’m not convinced this is the story for most. Parfit’s initial statement of the RC didn’t stipulate that the initial population was ‘perfectly satisfied’, but ‘merely’ that they had a “very high quality of life”. Moreover, I don’t think most people find the RC much less unacceptable if the initial population merely enjoys very high quality of life versus perfect satisfaction.
I agree there’s some sort of intuition that ‘very good’ should be qualitatively better than ‘barely better than nothing’, so one wants to resist being nickel-and-dimed into the latter (cf. critical-level utilitarianism, etc.). I also agree there are person-affecting intuitions (although there are natural moves, like making the addition of A+ also increase the welfare of those originally in A, etc.).
Okay, I agree that going “from perfect to flawed” isn’t the core of the intuition.
Moreover, I don’t think most people find the RC much less unacceptable if the initial population merely enjoys very high quality of life versus perfect satisfaction.
This seems correct to me too.
I mostly wanted to point out that I’m pretty sure it’s a strawman that the repugnant conclusion primarily targets anti-aggregationist intuitions. I suspect that people would also find the conclusion strange if it involved smaller numbers. When a family decides how many kids to have, and they estimate that the average quality of life per person in the family (esp. with a lot of weight on the parents themselves) will be highest if they have two children, most people would find it strange to go for five children if that did best in terms of total welfare.
For what it’s worth, that example is a special case of the Sadistic Conclusion (perhaps the Very Sadistic Conclusion?), which I do mention towards the end of the section “Other theoretical implications”. Given the impossibility theorems, like the one I cite there, claiming negative leaning views generally fare worse with population ethics paradoxes is a judgment call. I have the opposite judgment.
There’s a more repugnant version of the Repugnant Conclusion called the Very Repugnant Conclusion: your population A would be worse than a population containing just the very bad lives in B plus a much larger number of lives barely worth living (but still worth living), because their total value can make up for the harms in B and the loss of the value in A. If we’ve rejected the claim that these lives barely worth living make the outcome better (by accepting the asymmetry, or the more general claims I make from which it follows) or can compensate for the harm in these bad lives, then the judgment in the Very Repugnant Conclusion looks at least as bad.
Furthermore, if you’re holding to the intuition that A doesn’t get worse as more people are added, then you couldn’t demonstrate the Sadistic Conclusion with your argument in the first place, so while the determination might clash with intuition (a valid response), it seems a bit question-begging to add that it goes wrong in “the direction (as A gets larger and larger, B becomes a better and better alternative).”
However, more importantly, this understanding of wellbeing conflicts with how we normally think about interests (or normative standards, according to Frick), as in Only Actual Interests and No Transfer (in my reply to Paul Christiano): if those lives never had any interest in pleasure and never experienced it, this would be no worse. Why should pleasure be treated so differently from other interests? So, the example would be the same as a large number of lives, each with a single mild thwarted preference (bad), and no other preferences (nothing to make up for the badness of the thwarted preference).
If you represent the value in lives as real numbers, you can reject either Independence/Separability (that what’s better or worse should not depend on the existence and the wellbeing of individuals that are held equal) or Continuity to avoid this problem. How this works for Continuity is more obvious, but for Independence/Separability, see Aggregating Harms — Should We Kill to Avoid Headaches? by Erik Carlson and his example Moderate Trade-off Theory. Basically, you can maximize the following social welfare function, for some fixed $r$, $0 < r < 1$, with the utilities sorted in increasing (nondecreasing) order, $u_1 \le u_2 \le \dots \le u_n$ (and, with the views I outline here, all of these values would never be positive):

$$\sum_{i=1}^{n} r^{i-1} u_i$$
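As a sketch of how this rank-discounted weighting blocks the ‘many small things outweigh one big thing’ engine (a toy illustration under my reading of Carlson, not his own presentation):

```python
def moderate_tradeoff_swf(utilities, r=0.5):
    """Rank-discounted sum: sort utilities in nondecreasing order,
    so the worst-off receive the largest weights, then weight the
    i-th (0-indexed) utility by r**i."""
    return sum(r**i * u for i, u in enumerate(sorted(utilities)))

# One tortured life at -1000 vs. a million pin-pricks at -1 each:
# the pin-pricks' weights shrink geometrically, so their total
# disvalue is bounded (by -1/(1-r) = -2 here) and can never
# outweigh the single terrible life, however many there are.
many_pinpricks = moderate_tradeoff_swf([-1] * 10**6)
one_torture = moderate_tradeoff_swf([-1000])
print(many_pinpricks > one_torture)  # True
```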
Note that this doesn’t actually avoid the Sadistic Conclusion if we do allow positive utilities, because adding positive utilities close to 0 can decrease the weight given to higher already existing positive utilities in such a way as to make the sum decrease. But it does avoid the version of the Sadistic Conclusion you give if we’re considering adding a very large number of very positive lives vs a smaller number of negative (or very negative) lives to a population which has lives that are much better than the very positive ones we might add. If there is no population you’re adding to, then a population of just negative lives is always worse than one with just positive lives.
(I’m not endorsing this function in particular.)
For what it’s worth, that example is a special case of the Sadistic Conclusion
It isn’t (at least not as Arrhenius defines it). Further, the view you are proposing (and which my example was addressed to) can never endorse a sadistic conclusion in any case. If lives can only range between more or less bad (i.e. fewer or more unsatisfied preferences, but the amount/proportion of satisfied preferences has no moral bearing), the theory is never in a position to recommend adding ‘negative welfare’ lives over ‘positive welfare’ ones, as it denies one can ever add ‘positive welfare’ lives.
Although we might commonsensically say people in A, or A+ in the repugnant conclusion (or ‘A’ in my example) have positive welfare, your view urges us that this is mistaken, and we should take them to be ‘-something relatively small’ versus tormented lives which are ‘- a lot’: it would still be better for those in any of the ‘A cases’ had they not come into existence at all.
Where we put the ‘zero level’ doesn’t affect the engine of the repugnant conclusion I identify: if we can ‘add up’ lots of small positive increments (whether we are above or below the zero level), this can outweigh a smaller number of much larger negative shifts. In the (very/) repugnant conclusion, a vast multitude of ‘slightly better than nothing’ lives can outweigh very large negative shifts to a smaller population (either to slightly better than nothing, or, in the very repugnant case, to something much worse). In mine, avoiding a vast multitude of ‘slightly worse than nothing’ lives can be worth making a smaller group have ‘much worse than nothing’ lives.
As you say, you can drop separability, continuity (etc.) to avoid the conclusion of my example, but these are resources available for (say) a classical utilitarian to adopt to avoid the (very/) repugnant conclusion too (naturally, these options also bear substantial costs). In other words, I’m claiming that although this axiology avoids the (v/) repugnant conclusion, if it accepts continuity etc. it makes similarly counter-intuitive recommendations, and if it rejects them it faces parallel challenges to a theory which accepts positive utility lives which does the same.
Why I say it fares ‘even worse’ is that most intuit ‘an hour of boredom and (say) a millennium of a wonderfully happy life’ is much better, and not slightly worse, than nothing at all. Thus although it seems costly (for reasons parallel to the repugnant conclusion) to accept that any number of tormented lives could be preferable to some vastly larger number of lives that (e.g.) pop into existence to briefly experience mild discomfort/preference dissatisfaction before ceasing to exist again, it seems even worse for the theory to be indifferent to each of these lives now being long ones which, apart from this moment of brief preference dissatisfaction, experience unalloyed joy/preference fulfilment, etc.
Why I say it fares ‘even worse’ is that most intuit ‘an hour of boredom and (say) a millennium of a wonderfully happy life’ is much better, and not slightly worse, than nothing at all.
Most also intuit that the (Very) Repugnant Conclusion is wrong, and probably that people are not mere vessels or receptacles for value (which isn’t avoided by classical utilitarians by giving up continuity or independence/separability), too. Why is the objection you raise stronger? There are various objections to all theories of population ethics; claiming some are worse than others is a personal judgment call, and you seem to be denying the possibility that many will find the objections to other views even more compelling without argument.
I claim we can do better than simply noting ‘all theories have intuitive costs, so which poison you pick is a judgement call’. In particular, I’m claiming that the ‘only thwarted preferences count’ view poses extra intuitive costs: for any intuitive population ethics counter-example C we can confront a ‘symmetric’ theory with, we can dissect the underlying engine that drives the intuitive cost, find it is orthogonal to the ‘only thwarted preferences count’ disagreement, and thus construct a parallel C* for the ‘only thwarted preferences count’ view which uses the same engine and is similarly counterintuitive, and often a C** which is even more counter-intuitive, as it turns the screws to exploit the facial counter-intuitiveness of the ‘only thwarted preferences count’ view. I.e.:
Alice: Only counting thwarted preferences looks counter-intuitive (e.g. we generally take very happy lives to be ‘better than nothing’, etc.); classical utilitarianism looks better.
Bob: Fair enough, these things look counter-intuitive, but all theories are counter-intuitive somewhere. Classical utilitarianism leads to the very repugnant conclusion (C) in population ethics, after all, whilst mine does not.
Alice: Not so fast. Your view avoids the very repugnant conclusion, but if you share the same commitments re. continuity etc., these lead your view to imply the similarly repugnant conclusion (and motivated by factors shared between our views) that any n lives tormented are preferable to some much larger m of lives which suffer some mild dissatisfaction (C*).
Furthermore, your view is indifferent to how (commonsensically) happy the m people are, so (for example) 10^100 tormented lives are better than TREE(9) lives which are perfectly blissful but for a 1 in TREE(3) chance [to emphasise, this chance is much smaller than P(0.0 …[write a zero on every Planck length in the observable universe]...1)] of suffering an hour of boredom once in their life. (C**)
Bob can adapt his account to avoid this conclusion (e.g. dropping continuity), but Alice can adapt her account in a parallel fashion to avoid the very repugnant conclusion too. Similarly, ‘value receptacle’-style critiques seem a red herring, as even if they are decisive for preference views over hedonic ones in general, they do not rule between ‘only thwarted preferences count’ and ‘satisfied preferences count too’ in particular.
I don’t think the cases between asymmetric and symmetric views will necessarily turn out to be so … symmetric (:P), since, to start, they each have different requirements to satisfy to earn the names asymmetric and symmetric, and how bad a conclusion will look can depend on whether we’re dealing with negative or positive utilities or both. To be called symmetric, it should still satisfy Mere Addition, right?
Dropping continuity looks bad for everyone, in my view, so I won’t argue further on that one.
However, what are the most plausible symmetric theories which avoid the Very Repugnant Conclusion and are still continuous? To be symmetric, it should still accept Mere Addition, right? Arrhenius has an impossibility theorem for the VRC. It seems to me the only plausible option is to give up General Non-Extreme Priority. Does such a symmetric theory exist, without also violating Non-Elitism (like Sider’s Geometrism does)?
EDIT: I think I’ve thought of such a social welfare function. Apply Geometrism or Moderate Trade-off Theory to the negative utilities (or whatever an asymmetric view might have done to prioritize the worst off), and then add the term $\sigma(\sum_i \max\{0, u_i\})$ for the rest, where $\sigma$ is continuous, strictly increasing and bounded above.
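A minimal sketch of this construction (assuming Moderate Trade-off Theory on the negative utilities, and picking $\sigma = \tanh$ as one arbitrary continuous, strictly increasing function bounded above):

```python
import math

def hybrid_swf(utilities, r=0.5):
    """Rank-discounted sum over the negative utilities (worst-off
    weighted most heavily), plus a bounded, strictly increasing
    transform of the total positive utility."""
    negatives = sorted(u for u in utilities if u < 0)
    negative_term = sum(r**i * u for i, u in enumerate(negatives))
    positive_total = sum(u for u in utilities if u > 0)
    return negative_term + math.tanh(positive_total)  # tanh(x) < 1

# Because the positive term is bounded above (here by 1), no number
# of barely-worth-living lives can compensate for sufficiently bad
# ones, which is what blocks the Very Repugnant Conclusion.
print(hybrid_swf([-2] + [1] * 10**4) < hybrid_swf([]))  # True
```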
Similarly, ‘value receptacle’-style critiques seem a red herring, as even if they are decisive for preference views over hedonic ones in general, they do not rule between ‘only thwarted preferences count’ and ‘satisfied preferences count too’ in particular.
Why are value receptacle objections stronger for preferences vs hedonism than for thwarted only vs satisfied too?
If it’s sometimes better to create new individuals than to help existing ones, then we are, at least in part, reduced to receptacles, because creating value by creating individuals instead of helping individuals puts value before individuals. It should matter that you have your preferences satisfied because you matter, but as value receptacles, it seems we’re just saying that it matters that there are more satisfied preferences. You might object that I’m saying that it matters that there are fewer satisfied preferences, but this is a consequence, not where I’m starting from; I start by rejecting the treatment of interest holders as value receptacles, through Only Actual Interests (and No Transfer).
Is it good to give someone a new preference just so that it can be satisfied, even at the cost of the preferences they would have had otherwise? How is convincing someone to really want a hotdog and then giving them one doing them a service if they had no desire for one in the first place (and it would satisfy no other interests of theirs)? Is it better for them even in the case where they don’t sacrifice other interests? Rather than doing what people want or we think they would want anyway, we would make them want things and do those for them instead. If preference satisfaction always counts in itself, then we’re paternalists. If it doesn’t always count but sometimes does, then we should look for other reasons, which is exactly what Only Actual Interests claims.
Of course, there’s the symmetric question: does preference thwarting (to whatever degree) always count against the existence of those preferences, and if it doesn’t, should we look for other reasons, too? I don’t find either answer implausible. For example, is a child worse off for having big but unrealistic dreams? I don’t think so, necessarily, but we might be able to explain this by referring to their other interests: dreaming big promotes optimism and wellbeing and prevents boredom, preventing the thwarting of more important interests. When we imagine the child dreaming vs not dreaming, we have not made all else equal. Could the same be true of not quite fully satisfied interests? I don’t rule out the possibility that the existence and satisfaction of some interests can promote the satisfaction of other interests. But if they don’t get anything else out of their unsatisfied preferences, it’s not implausible that this would actually be worse as a rule, if we have reasonable explanations for when it wouldn’t be worse.
I think it is generally worth seeing population ethics scenarios (like the repugnant conclusion) as being intuition pumps of some principle or another. The core engine of the repugnant conclusion is (roughly) the counter-intuitive implications of how a lot of small things can outweigh a large thing. Thus a huge multitude of ‘slightly better than not’ lives can outweigh a few very blissful ones (or, turning the screws as Arrhenius does, for any number of blissful lives, there some—vastly larger—number of ‘slightly better than not’ lives for which it would be worth making these lives terrible for.)
Yet denying lives can ever go better than neutral (counter-intuitive to most—my life isn’t maximally good, but I think it is pretty great and better than nothing) may evade the repugnant conclusion, but doesn’t avoid this core engine of ‘lots of small things can outweigh a big thing’. Among a given (pre-existing, so possessing actual interests, not that this matters much) population, it can be worth torturing a few of these to avert sufficiently many pin-pricks/minor thwarted preferences to the rest.
I also think negative leaning views (especially with stronger ‘you can’t do better than nothing’ ones as suggested here) generally fare worse with population ethics paradoxes, as we can construct examples which not just share the core engine driving things like the repugnant conclusion, but are amplified further by adding counter-intuitive aspects of the negative view in question.
E.g. (and owed to Carl Shulman): suppose A is a vast population (say Tree(9), whatever) of people who are much happier than we are now, and live lives of almost-perfect preference satisfaction, but for a single mild thwarted preference (say they have to wait in a queue bored for an hour before they get into heaven). Now suppose B is a vast (but vastly smaller, say merely 10^100) population living profoundly awful lives. The view outlined in the OP above seems to recommend B over A (as a lot of small thwarted preferences among those in B can trade off each awful life in B), and generally that that any number of horrendous lives can be outweighed if you can abolish a slightly imperfect utopia of sufficient size, which seems to go (wildly!) wrong both in the determination and the direction (as A gets larger and larger, B becomes a better and better alternative).
I disagree that this is the core engine. I know lots of people who find the repugnant conclusion untenable, while they readily bite the bullet in “dust specks vs. torture”.
I think the part that’s the most unacceptable about the repugnant conclusion is that you go from an initial paradise where all the people who exist are perfectly satisfied (in terms of both life goals and hedonics) to a state where there’s suffering and preference dissatisfaction. A lot of people have the intuition that creating new happy people is not in itself important. That’s what the repugnant conclusion runs against.
I hesitate to exegete intuitions, but I’m not convinced this is the story for most. Parfit’s initial statement of the RP didn’t stipulate the initial population were ‘perfectly satisfied’ but that they ‘merely’ had a “very high quality of life” (cf.). Moreover, I don’t think most people find the RP much less unacceptable if the initial population merely enjoys very high quality of life versus perfect satisfaction.
I agree there’s some sort intuition that ‘very good’ should be qualitatively better than ‘barely better than nothing’, so one wants to resist being nickel-and-dimed into the latter (cf. critical level util, etc.). I also agree there’s person-affecting intuitions (although there’s natural moves like making the addition of A+ also increase the welfare of those originally in A, etc.)
Okay, I agree that going “from perfect to flawed” isn’t the core of the intuition.
This seems correct to me too.
I mostly wanted to point out that I’m pretty sure that it’s a strawman that the repugnant conclusion primarily targets anti-aggregationist intuitions. I suspect that people would also find the conclusion strange if it involved smaller numbers. When a family decides how many kids they have and they estimate that the average quality of life per person in the family (esp. with a lot of weights on the parents themselves) will be highest if they have two children, most people would find it strange to go for five children if that did best in terms of total welfare.
For what it’s worth, that example is a special case of the Sadistic Conclusion (perhaps the Very Sadistic Conclusion?), which I do mention towards the end of the section “Other theoretical implications”. Given the impossibility theorems, like the one I cite there, claiming negative leaning views generally fare worse with population ethics paradoxes is a judgment call. I have the opposite judgment.
There’s a more repugnant version of the Repugnant Conclusion called the Very Repugnant Conclusion, in which your population A would be worse than a population with just the very bad lives in B, plus a much larger number of lives barely worth living, but still worth living, because their total value can make up for the harms in B and the loss of the value in A. If we’ve rejected the claim that these lives barely worth living do make the outcome better (by accepting the asymmetry or the more general claims I make and from which it follows) or can compensate for the harm in these bad lives, then the judgment from the Very Repugnant Conclusion would look as bad.
Furthermore, if you’re holding to the intuition that A doesn’t get worse as more people are added, then you couldn’t demonstrate the Sadistic Conclusion with your argument in the first place, so while the determination might clash with intuition (a valid response), it seems a bit question-begging to add that it goes wrong in “the direction (as A gets larger and larger, B becomes a better and better alternative).”
However, more importantly, this understanding of wellbeing conflicts with how we normally think about interests (or normative standards, according to Frick), as in Only Actual Interests and No Transfer (in my reply to Paul Christiano): if those lives never had any interest in pleasure and never experienced it, this would be no worse. Why should pleasure be treated so differently from other interests? So, the example would be the same as a large number of lives, each with a single mild thwarted preference (bad), and no other preferences (nothing to make up for the badness of the thwarted preference).
If you represent the value in lives as real numbers, you can reject either Independence/Separability (that what’s better or worse should not depend on the existence and the wellbeing of individuals that are held equal) or Continuity to avoid this problem. How this works for Continuity is more obvious, but for Independence/Separability, see Aggregating Harms — Should We Kill to Avoid Headaches? by Erik Carlson and his example Moderate Trade-off Theory. Basically, you can maximize the following social welfare function, for some fixed r,0<r<1 , with the utilities sorted in increasing (nondecreasing) order, u1≤u2≤⋯≤un (and, with the views I outline here, all of these values would never be positive):
Note that this doesn’t actually avoid the Sadistic Conclusion if we do allow positive utilities, because adding positive utilities close to 0 can decrease the weight given to higher already existing positive utilities in such a way as to make the sum decrease. But it does avoid the version of the Sadistic Conclusion you give if we’re considering adding a very large number of very positive lives vs a smaller number of negative (or very negative) lives to a population which has lives that are much better than the very positive ones we might add. If there is no population you’re adding to, then a population of just negative lives is always worse than one with just positive lives.
(I’m not endorsing this function in particular.)
It isn’t (at least not as Arrhenius defines it). Further, the view you are proposing (and which my example was addressed to) can never endorse a sadistic conclusion in any case. If lives can only range between more or less bad (i.e. fewer or more unsatisfied preferences, but the amount/proportion of satisfied preferences has no moral bearing), the theory is never in a position to recommend adding ‘negative welfare’ lives over ‘positive welfare’ ones, as it denies one can ever add ‘positive welfare’ lives.
Although we might commonsensically say people in A, or A+ in the repugnant conclusion (or ‘A’ in my example) have positive welfare, your view urges us that this is mistaken, and we should take them to be ‘-something relatively small’ versus tormented lives which are ‘- a lot’: it would still be better for those in any of the ‘A cases’ had they not come into existence at all.
Where we put the ‘zero level’ doesn’t affect the engine of the repugnant conclusion I identify: if we can ‘add up’ lots of small positive increments (whether we are above or below the zero level), this can outweigh a smaller number of much larger negative shifts. In the (very/) repugnant conclusion, a vast multitude of ‘slightly better than nothing’ lives can outweigh very large negative shifts to a smaller population (either to slightly better than nothing, or, in the very repugnant case, to something much worse). In mine, avoiding a vast multitude of ‘slightly worse than nothing’ lives can be worth making a smaller group have ‘much worse than nothing’ lives.
As you say, you can drop separability, continuity (etc.) to avoid the conclusion of my example, but these are resources available for (say) a classical utilitarian to adopt to avoid the (very/) repugnant conclusion too (naturally, these options also bear substantial costs). In other words, I’m claiming that although this axiology avoids the (v/) repugnant conclusion, if it accepts continuity etc. it makes similarly counter-intuitive recommendations, and if it rejects them it faces parallel challenges to a theory which accepts positive utility lives which does the same.
Why I say it fares ‘even worse’ is that most intuit ‘an hour of boredom and (say) a millenia of a wonderfully happy life’ is much better, and not slightly worse, than nothing at all. Thus although it seems costly (for parallel reasons for the repugnant conclusion) to accept any number of tormented lives could be preferable than some vastly larger number of lives that (e.g.) pop into existence to briefly experience mild discomfort/preference dissatisfaction before ceasing to exist again, it seems even worse that the theory to be indifferent to that each of these lives are now long ones which, apart from this moment of brief preference dissatisfaction experience unalloyed joy/preference fulfilment, etc.
Ok.
Most also intuit that the (Very) Repugnant Conclusion is wrong, and probably that people are not mere vessels or receptacles for value (which isn’t avoided by classical utilitarians by giving up continuity or independence/separability), too. Why is the objection you raise stronger? There are various objections to all theories of population ethics; claiming some are worse than others is a personal judgment call, and you seem to be denying the possibility that many will find the objections to other views even more compelling without argument.
I claim we can do better than simply noting ‘all theories have intuitive costs, so which poison you pick is a judgement call’. In particular, I’m claiming that the ‘only thwarted preferences count’ poses extra intuitive costs: that for any intuitive population ethics counter-example C we can confront a ‘symmetric’ theory with, we can dissect the underlying engine that drives the intuitive cost, find it is orthogonal to the ‘only thwarted preferences count’ disagreement, and thus construct a parallel C* to the ‘only thwarted preferences count’ view which uses the same engine and is similarly counterintuitive, and often a C** which is even more counter-intuitive as it turns the screws to exploit the facial counter-intuitiveness of ‘only thwarted preferences count’ view. I.e.
Alice: Only counting thwarted preferences looks counter-intuitive (e.g. we generally take very happy lives as ‘better than nothing’, etc.) classical utilitarianism looks better.
Bob: Fair enough, these things look counter-intuitive, but theories are counter-intuitive. Classical utilitarianism leads to the very repugnant conclusion (C) in population ethics, after all, whilst mine does not.
Alice: Not so fast. Your view avoids the very repugnant conclusion, but if you share the same commitments re. continuity etc., these lead your view to imply the similarly repugnant conclusion (and motivated by factors shared between our views) that any n lives tormented are preferable to some much larger m of lives which suffer some mild dissatisfaction (C*).
Furthermore, your view is indifferent to how (commonsensically) happy the m people are, so (for example) 10^100 tormented lives are better than TREE(9) lives which are perfectly blissful, but for a 1 in TREE(3) chance [to emphasise, this chance is much smaller than P(0.0 …[write a zero on every plank length in the observable universe]...1)] of suffering an hour of boredom once in their life. (C**)
Bob can adapt his account to avoid this conclusion (e.g. dropping continuity), but Alice can adapt her account in a parallel fashion to avoid the very repugnant conclusion too. Similarly, ‘value receptacle’-style critiques seem a red herring: even if they are decisive for preference views over hedonic ones in general, they do not adjudicate between ‘only thwarted preferences count’ and ‘satisfied preferences count too’ in particular.
I don’t think the cases between asymmetric and symmetric views will necessarily turn out to be so … symmetric (:P), since, to start, they each have different requirements to satisfy to earn the names ‘asymmetric’ and ‘symmetric’, and how bad a conclusion looks can depend on whether we’re dealing with negative utilities, positive utilities, or both.
Dropping continuity looks bad for everyone, in my view, so I won’t argue further on that one.
However, what are the most plausible symmetric theories which avoid the Very Repugnant Conclusion and are still continuous? To be symmetric, it should still accept Mere Addition, right? Arrhenius has an impossibility theorem for the VRC. It seems to me the only plausible option is to give up General Non-Extreme Priority. Does such a symmetric theory exist, without also violating Non-Elitism (like Sider’s Geometrism does)?
EDIT: I think I’ve thought of such a social welfare function. Do Geometrism or Moderate Trade-off Theory for the negative utilities (or whatever an asymmetric view might have done to prioritize the worst off), and then add the term σ(∑_i max{0, u_i}) for the rest, where σ is continuous, strictly increasing and bounded above.
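To make the boundedness point concrete, here is a minimal numerical sketch. The choice of tanh as σ is my own arbitrary example of a continuous, strictly increasing function bounded above, and a plain sum stands in for whatever Geometrism or Moderate Trade-off Theory would actually do with the negative utilities; both are placeholders, not part of the proposal itself.

```python
import math

def swf(utilities):
    """Sketch of the proposed social welfare function.

    Negative utilities: aggregated with a plain sum here, purely as a
    stand-in for a prioritarian rule like Geometrism or Moderate
    Trade-off Theory.
    Positive utilities: summed as sum_i max{0, u_i}, then passed through
    sigma (here tanh, bounded above by 1), so no quantity of positive
    welfare can ever outweigh more than a fixed amount of suffering.
    """
    neg = sum(u for u in utilities if u < 0)    # placeholder aggregation
    pos = sum(max(0.0, u) for u in utilities)   # sum_i max{0, u_i}
    return neg + math.tanh(pos)                 # sigma bounded above by 1

# Because sigma is bounded, one sufficiently bad life (utility -2)
# cannot be outweighed by any number of mildly good ones:
print(swf([0.9] * 10**6 + [-2.0]) < 0)  # → True
```

This illustrates how the function blocks Very-Repugnant-Conclusion-style trades while remaining continuous and strictly increasing in each person's welfare.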
Why are value receptacle objections stronger for preferences vs hedonism than for thwarted only vs satisfied too?
If it’s sometimes better to create new individuals than to help existing ones, then we are, at least in part, reduced to receptacles: creating value by creating individuals, rather than by helping individuals, puts value before individuals. It should matter that your preferences are satisfied because you matter, but as value receptacles, it seems we’re just saying that it matters that there are more satisfied preferences. You might object that I’m likewise saying it matters that there are fewer thwarted preferences, but for me this is a consequence, not where I’m starting from; I start by rejecting the treatment of interest-holders as value receptacles, through Only Actual Interests (and No Transfer).
Is it good to give someone a new preference just so that it can be satisfied, even at the cost of the preferences they would have had otherwise? How is convincing someone to really want a hotdog and then giving them one doing them a service, if they had no desire for one in the first place (and it would satisfy no other interests of theirs)? Is it better for them even in the case where they don’t sacrifice other interests? Rather than doing what people want, or what we think they would want anyway, we would make them want things and then do those things for them instead. If preference satisfaction always counts in itself, then we’re paternalists. If it doesn’t always count but sometimes does, then we should look for other reasons, which is exactly what Only Actual Interests claims.
Of course, there’s the symmetric question: does preference thwarting (to whatever degree) always count against the existence of those preferences, and if it doesn’t, should we look for other reasons, too? I don’t find either answer implausible. For example, is a child worse off for having big but unrealistic dreams? Not necessarily, I think, but we might be able to explain this by referring to their other interests: dreaming big promotes optimism and wellbeing and prevents boredom, preventing the thwarting of more important interests. When we imagine the child dreaming vs not dreaming, we have not held all else equal. Could the same be true of not-quite-fully-satisfied interests? I don’t rule out the possibility that the existence and satisfaction of some interests can promote the satisfaction of other interests. But if they don’t get anything else out of their unsatisfied preferences, it’s not implausible that this would actually be worse as a rule, provided we have reasonable explanations for the cases where it wouldn’t be.