I think giving up IIA seems more plausible if you allow that value might be essentially comparative, and not something you can just measure in a given universe in isolation. Arrow’s impossibility theorem can also be avoided by giving it up. And standard intuitions when facing the repugnant conclusion itself (and hence similar impossibility theorems) seem best captured by an argument incompatible with IIA, i.e. whether or not it’s permissible to add the extra people depends on whether or not the more equal distribution of low welfare is an option.
It seems like most consequentialists assume IIA without even making this explicit, and I have yet to see a good argument for IIA. At least with transitivity, there are Dutch book/money pump arguments showing that you can be exploited if you reject it. Maybe there was some decisive argument in the past that led to consensus on IIA, and no one talks about it anymore except when they want to reject it?
Another option, which avoids the very repugnant conclusion but not the repugnant conclusion, is to give (weak or strong) lexical priority to very bad lives or intense suffering. The Center for Reducing Suffering has a few articles on lexicality. I’ve written a bit here about how lexicality could look mathematically without effectively ignoring everything that isn’t lexically dominating, and there’s also rank-discounted utilitarianism: see point 2 in this comment, this thread, or papers on “rank-discounted utilitarianism”.
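To make the rank-discounted idea concrete: in the usual formulation from the literature mentioned above, welfare levels are sorted from highest to lowest and weighted geometrically, so each additional low-welfare life adds less and less, and an arbitrarily large population of barely-positive lives has bounded total value. A minimal sketch (the function name and the choice of `beta = 0.9` are mine, purely for illustration):

```python
def rank_discounted_utility(welfares, beta=0.9):
    # Rank-discounted utilitarianism: sort welfare levels from highest
    # to lowest, then weight the i-th ranked life by beta**i.
    ranked = sorted(welfares, reverse=True)
    return sum(beta ** i * w for i, w in enumerate(ranked))

# A huge population of barely-positive lives has bounded value:
# sum over i of 0.9**i * 0.1 approaches 0.1 / (1 - 0.9) = 1 from below.
large_meh = rank_discounted_utility([0.1] * 10_000)   # ≈ 1.0
# Two very good lives: 10 + 0.9 * 10 = 19, so the small flourishing
# population beats the enormous barely-positive one.
small_great = rank_discounted_utility([10, 10])
```

This is why the view blocks the repugnant conclusion: no number of marginally positive lives can push the discounted sum above the value of a few excellent ones.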
Thanks for all of this. I think IIA is just something that seems intuitive. For example, it would seem silly to me for someone to choose jam over peanut butter but then, on finding out that honey mustard was also an option, think they should have chosen peanut butter. My support of IIA doesn’t really go beyond this intuitive feeling, and perhaps I should think about it more.
Thanks for the readings on lexicality and rank-discounted utilitarianism. I’ll check them out.
I think the appeal of IIA loses some of its grip when one realizes that a lot of our ordinary moral intuitions violate it. Pete Graham has a nice case showing this. Here’s a slightly simplified version:
Suppose you see two people drowning in a crocodile-infested lake. You have two options:
Option 1: Do nothing.
Option 2: Dive in and save the first person’s life, at the cost of one of your legs.
In this case, most have the intuition that both options are permissible — while it’s certainly praiseworthy to sacrifice your leg to save someone’s life, it’s not obligatory to do so. Now suppose we add a third option to the mix:
Option 3: Dive in and save both people’s lives, at the cost of one of your legs.
Once we add option 3, most have the intuition that only options 1 and 3 are permissible, and that option 2 is now impermissible, contra IIA.
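The violation can be made precise by treating the verdicts as a permissibility function from menus of options to the permissible subset. The choice-function reading of IIA checked below says an option’s permissibility shouldn’t flip merely because new alternatives were added while it remains available. A sketch (the encoding of the intuitions is just a transcription of the case above):

```python
def permissible(menu):
    # Encodes the intuitions in Graham's case (options 1, 2, 3 as above).
    menu = frozenset(menu)
    if menu == frozenset({1, 2}):
        return frozenset({1, 2})   # doing nothing and saving one are both OK
    if menu == frozenset({1, 2, 3}):
        return frozenset({1, 3})   # with option 3 available, 2 is impermissible
    return menu                    # other menus: stipulate everything permitted

small, large = {1, 2}, {1, 2, 3}
# Options whose permissibility status changed when the menu expanded:
violations = {x for x in small
              if (x in permissible(small)) != (x in permissible(large))}
# violations == {2}: option 2 is permissible from {1, 2} but not {1, 2, 3}.
```

(If I have the social-choice terminology right, this pattern violates expansion consistency, Sen’s beta condition, rather than contraction consistency.)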
Thinking back on this, rather than violating IIA, couldn’t this just mean your order is not complete? Option 3 > Option 2, but neither is comparable to Option 1.
Maybe this violates a (permissibility) choice function definition of IIA, but not an order-based definition?
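The incompleteness suggestion can be checked directly: with a betterness relation containing only Option 3 > Option 2 (Option 1 incomparable with both), choosing the *maximal* options, i.e. those not bettered by anything available, reproduces both verdicts in the case. A small sketch (names are mine):

```python
# The only strict comparison; every other pair is incomparable.
better = {(3, 2)}

def maximal(menu):
    # An option is maximal if nothing available is strictly better than it.
    return {x for x in menu
            if not any((y, x) in better for y in menu)}

maximal({1, 2})     # {1, 2}: nothing available betters either option
maximal({1, 2, 3})  # {1, 3}: option 2 is bettered by option 3
```

So an incomplete order plus maximality delivers the intuitive verdicts while keeping an order-based reading of IIA intact, just as suggested above.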
Thanks, this is an interesting example!
I think if you are a pure consequentialist, then it is just a fact of the matter that there is a goodness ordering of the three options, and IIA seems compelling again. Perhaps IIA breaks down a bit when one strays from pure consequentialism; I’d like to think about that a bit more.
Yeah, for sure. There are definitely plausible views (like pure consequentialism) that will reject these moral judgments and hold on to IIA.
But just to get clear on the dialectic, I wasn’t taking the salient question to be whether holding on to IIA is tenable. (Since there are plausible views that entail it, I think we can both agree it is!)
Rather, I was taking the salient question to be whether conflicting with IIA is itself a mark against a theory. And I take Pete’s example to tell against this thought, since upon reflection it seems like our ordinary moral judgments violate IIA. And so, upon reflection, IIA is something we would need to be argued into accepting, not something we should assume is true by default.
Taking a step back: on one way of looking at your initial post against person-affecting views, you can see the argument as boiling down to the fact that person-affecting views violate IIA. (I take this to be the thrust of Michael’s comment, above.) But if violating IIA isn’t a mark against a theory, then it’s not clear that this is a bad thing. (There might be plenty of other bad things about such views, of course, like the fact that they yield implausible verdicts in cases X, Y and Z. But if so, those would be the reasons for rejecting the view, not the fact that it violates IIA.)
I think IIA is more intuitive when you’re considering only the personal (self-regarding) preferences of a single individual, as in your example. But even if IIA holds for each individual in a group, it need not hold for the group, especially when different people would exist in the different options, because these situations involve different interests. I think this is plausibly true on all accounts of welfare or interests (maybe suitably modified), even hedonistic ones, since if someone never exists, they don’t have welfare or interests at all, which need not mean the same thing as welfare level 0.
If you find the (very) repugnant conclusion counterintuitive, this might be a sign that you’re stretching your intuition from this simple case too far.