I don’t assign much credence to neutrality, because I think adding bad lives is in fact bad. I prefer the procreation asymmetry, which might be stated this way:
Additional lives never make things go better, all else equal, and additional bad lives make things go worse, all else equal.
Also, you can give up the independence of irrelevant alternatives (IIA) instead of transitivity. This would mean that which of two options is better might depend on which alternatives are available to you, i.e. the ranking of outcomes depends on the available options. I actually find this a fairly intuitive way to avoid the repugnant conclusion.
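To make the condition explicit, here is one rough formalization (my own notation, not taken from the papers below): write $\succeq_S$ for the betterness ranking of outcomes when the set of available options is $S$. IIA requires that for any two outcomes $x$ and $y$ and any option sets $S$ and $T$ containing both,

$$x \succeq_S y \iff x \succeq_T y,$$

i.e. how two outcomes compare never depends on what else is available. Giving up IIA means letting $\succeq_S$ genuinely vary with $S$, which is what lets these views block the usual chain of pairwise comparisons that leads to the repugnant conclusion.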
A few papers that take this approach to the procreation asymmetry and avoid the repugnant conclusion:
‘Making People Happy, Not Making Happy People’: A Defense of the Asymmetry Intuition in Population Ethics by Johann Frick
Person-affecting views and saturating counterpart relations by Christopher Meacham
The Asymmetry, Uncertainty, and the Long Term by Teruji Thomas
I also have a few short arguments for asymmetry here and here in my shortform.
Hey Michael,
Thanks for this, I suspected you might make a helpful comment! The procreation asymmetry is my long-lost love. It's what I used to believe quite strongly, but I ultimately started to doubt it for the reasons I've outlined in this post.
My intuition is that giving up IIA is only slightly less barmy than giving up transitivity, but thanks for the suggested reading. I certainly feel like my thinking on population ethics can evolve further and I don’t rule out reconnecting with the procreation asymmetry.
For what it's worth, my current view is that the repugnant conclusion may only seem repugnant because we tend to think of 'a life barely worth living' as a pretty drab existence. I actually think that such a life is much 'better' than we intuitively assume. I have a hunch that various biases lead us to overvalue the quality of our lives relative to the zero level, something that David Benatar has written about. My thinking on this is still nascent, though, and there's always the very repugnant conclusion to contend with, which keeps me somewhat uneasy with total utilitarianism.
I think giving up IIA seems more plausible if you allow that value might be essentially comparative, and not something you can just measure in a given universe in isolation. Arrow’s impossibility theorem can also be avoided by giving it up. And standard intuitions when facing the repugnant conclusion itself (and hence similar impossibility theorems) seem best captured by an argument incompatible with IIA, i.e. whether or not it’s permissible to add the extra people depends on whether or not the more equal distribution of low welfare is an option.
It seems like most consequentialists assume IIA without even making this explicit, and I have yet to see a good argument for IIA. At least with transitivity, there are Dutch book/money-pump arguments to show that you can be exploited if you reject it. Maybe there was some decisive argument in the past that led to a consensus on IIA and no one talks about it anymore, except when they want to reject it?
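For what it's worth, the money-pump argument runs roughly like this (a standard sketch, not any particular author's version): suppose my preferences cycle, $A \prec B \prec C \prec A$. Starting with $A$, I should be willing to pay some small amount $\epsilon > 0$ to swap $A$ for $B$, pay $\epsilon$ again to swap $B$ for $C$, and pay $\epsilon$ again to swap $C$ for $A$. After three trades I hold exactly what I started with but am $3\epsilon$ poorer, and the cycle can be run indefinitely. I don't know of an analogous exploitation argument for IIA.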
Another option to avoid the very repugnant conclusion but not the repugnant conclusion is to give (weak or strong) lexical priority to very bad lives or intense suffering. The Center for Reducing Suffering has a few articles on lexicality. I've written a bit here about how lexicality could work mathematically without effectively ignoring everything that isn't lexically dominating, and there's also rank-discounted utilitarianism: see point 2 in this comment, this thread, or papers on “rank-discounted utilitarianism”.
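To give a flavour of the rank-discounting idea (a simplified sketch of my own, not the exact formulation in those papers): order a population's welfare levels from worst off to best off, $w_{(1)} \le w_{(2)} \le \dots \le w_{(n)}$, and evaluate the outcome by

$$W = \sum_{k=1}^{n} \beta^{\,k-1}\, u\big(w_{(k)}\big), \qquad 0 < \beta < 1,$$

where $u$ is some increasing transform of welfare. The worst-off lives get the largest weights, and the weights shrink geometrically with rank, so very bad lives can dominate the evaluation without everything else being ignored entirely. Roughly, $\beta$ near $0$ behaves like strict priority for the worst off, while $\beta$ near $1$ approaches ordinary (generalized) totalism.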
Thanks for all of this. I think IIA is just something that seems intuitive. For example, it would seem silly to me for someone to choose jam over peanut butter but then, on finding out that honey mustard was also an option, think that they should have chosen peanut butter. My support of IIA doesn't really go beyond this intuitive feeling, and perhaps I should think about it more.
Thanks for the readings on lexicality and rank-discounted utilitarianism. I'll check them out.
I think the appeal of IIA loses some of its grip when one realizes that a lot of our ordinary moral intuitions violate it. Pete Graham has a nice case showing this. Here’s a slightly simplified version:
Suppose you see two people drowning in a crocodile-infested lake. You have two options:
Option 1: Do nothing.
Option 2: Dive in and save the first person’s life, at the cost of one of your legs.
In this case, most have the intuition that both options are permissible — while it’s certainly praiseworthy to sacrifice your leg to save someone’s life, it’s not obligatory to do so. Now suppose we add a third option to the mix:
Option 3: Dive in and save both people’s lives, at the cost of one of your legs.
Once we add option 3, most have the intuition that only options 1 and 3 are permissible, and that option 2 is now impermissible, contra IIA.
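To put the structure of the case formally (my own gloss): write $P(S)$ for the set of permissible options when the available options are $S$. The intuitions above say

$$P(\{1, 2\}) = \{1, 2\}, \qquad P(\{1, 2, 3\}) = \{1, 3\}.$$

So whether option 2 is permissible alongside option 1 depends on whether option 3 happens to be available, which is exactly what a menu-independence condition on permissible choice rules out (if I have the labels right, this violates Sen's expansion condition $\beta$, though not his contraction condition $\alpha$).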
Thinking back on this, rather than violating IIA, couldn’t this just mean your order is not complete? Option 3 > Option 2, but neither is comparable to Option 1.
Maybe this violates a (permissibility) choice function definition of IIA, but not an order-based definition?
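To spell that out (again, just a sketch): suppose the underlying betterness relation is fixed but incomplete, with Option 3 $\succ$ Option 2 and Option 1 incomparable to both, and say an option is permissible iff no available option is strictly better than it. Then, using the $P(S)$ notation above,

$$P(\{1, 2\}) = \{1, 2\}, \qquad P(\{1, 2, 3\}) = \{1, 3\},$$

exactly matching the intuitions in the case, even though the ranking itself never changes with the menu. So the case seems to tell against menu-independence of permissibility, but not against an order-based IIA about which outcomes are better than which.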
Thanks, this is an interesting example!
I think if you are a pure consequentialist then it is just a fact of the matter that there is a goodness ordering of the three options, and IIA seems compelling again. Perhaps IIA breaks down a bit when one strays from pure consequentialism; I'd like to think about that a bit more.
Yeah, for sure. There are definitely plausible views (like pure consequentialism) that will reject these moral judgments and hold on to IIA.
But just to get clear on the dialectic, I wasn’t taking the salient question to be whether holding on to IIA is tenable. (Since there are plausible views that entail it, I think we can both agree it is!)
Rather, I was taking the salient question to be whether conflicting with IIA is itself a mark against a theory. And I take Pete's example to tell against this thought, since upon reflection it seems like our ordinary moral judgments violate IIA. So IIA is something we would need to be argued into accepting, not something we should assume is true by default.
Taking a step back: on one way of looking at your initial post against person-affecting views, you can see the argument as boiling down to the fact that person-affecting views violate IIA. (I take this to be the thrust of Michael’s comment, above.) But if violating IIA isn’t a mark against a theory, then it’s not clear that this is a bad thing. (There might be plenty of other bad things about such views, of course, like the fact that they yield implausible verdicts in cases X, Y and Z. But if so, those would be the reasons for rejecting the view, not the fact that it violates IIA.)
I think IIA is more intuitive when you're considering only the personal (self-regarding) preferences of a single individual, as in your example. But even if IIA holds for each individual in a group, it need not hold for the group, especially when different people would exist under different options, because those situations involve different interests. I think this is also plausibly true for all accounts of welfare or interests (maybe suitably modified), even hedonistic ones, since if someone never exists, they don't have welfare or interests at all, which need not mean the same thing as a welfare level of 0.
If you find the (very) repugnant conclusion counterintuitive, this might be a sign that you’re stretching your intuition from this simple case too far.