It seems the relevant question is whether your original argument for A goes through. I think you pretty much agree that ethics requires persons to be affected, right? Then we have to rule out switching to Z from the start: Z would be actively bad for the initial people in S, and not switching to Z would not be bad for the new people in Z, since they don’t exist.
Furthermore, it arguably isn’t unfair when people are created (A+) if the alternative (A) would have been not to create them in the first place.[1] So choosing A+ wouldn’t be unfair to anyone. A+ would only be unfair if we couldn’t rule out Z. And indeed, in most cases we can’t rule out Z with any certainty for the future, since we don’t have much evidence that “certain kinds of value lock-in” would ensure we stay with A+ for all eternity. So choosing A+ now would make it quite likely that we’d have to choose between (continuing) A+ and switching to Z in the future, and switching would be equivalent to fair redistribution, and required by ethics. But this path (S → A+ → Z) would be bad for the initial people in S, and not good for the additional people in S+/Z, who at this point do not exist. So we, in S, should choose A.
In other words: while S is current, Z is bad and A+ is good (in fact currently a bit better than A), but choosing A+ would quite likely lead us onto a path where we are morally forced to switch from A+ to Z in the future, which would be bad from our current perspective (S). So we should play it safe and choose A now.
Once upon a time there was a group of fleas. They complained about the unfairness of their existence. “We are all so small, while those few dogs enjoy their enormous size! This is exceedingly unfair and therefore highly unethical. Size should have been distributed equally between fleas and dogs.” The dog they inhabited heard them talking and replied: “If it weren’t for us dogs, you fleas wouldn’t exist in the first place. Your existence depended on our existence. We let you live in our fur. The alternative to your tiny size would not have been being larger, but your non-existence. To be small is not less fair than not to be at all.”
I largely agree with this, but:

1. If we were only concerned with what’s best for the original people when in S, then the probability that, if we pick A+, we can and should switch to something like Z later matters. For the original people, it may be worth the risk. It would depend on the details.

2. I also suspect we should first rule out A+ with Z available from S, even if we were sure we couldn’t later switch to something like Z. A+ does seem unfair with Z available, from S. Whether or not we can switch to something like Z later, we’ll have realized it was a mistake not to choose Z over A+ for the people who will then exist, had we chosen A+. But I also want to say it won’t have been a mistake to pick A, despite A+ having been available.
Point 2 motivates applying impartial norms first, like fixed-population comparisons insensitive to who currently or necessarily exists, to rule out options: in this case A+, because it’s worse than Z. After that, we pick among the remaining options using person-affecting principles, like necessitarianism, which gives us A over Z. That’s Dasgupta’s view.
we’ll have realized it was a mistake not to choose Z over A+ for the people who will then exist, had we chosen A+.
Let’s replace A with A′ and A+ with A+′. A′ has welfare level 4 instead of 100, and A+′ has, for the original people, welfare level 200 instead of 101 (for a total of 299). According to your argument we should still rule out A+′ because it’s less fair than Z, even though the original people get 196 points more welfare in A+′ than in A′. So we end up with A′ and a welfare level of 4. That seems highly incompatible with ethics being about affecting persons.
Dasgupta’s view makes ethics about what seems unambiguously best first, and then about affecting persons second. It’s still person-affecting, but less so than necessitarianism and presentism.
It could be wrong about what’s unambiguously best, though; e.g. perhaps we should reject full aggregation and prioritize larger individual differences in welfare between outcomes, so that A+′ (and maybe A+) looks better than Z.
Do you think we should be indifferent in the nonidentity problem if we’re person-affecting? I.e. between creating a person with a great life and creating a different person with a marginally good life (and no other options).
For example, we shouldn’t care about the effects of climate change on future generations (at least beyond a few generations ahead), because future people’s identities will be different if we act differently. But then also see the last section of the post.
In the non-identity problem we have no alternative which doesn’t affect a person, since we aren’t comparing creating a person with not creating them, but creating one person with creating a different person; not creating anyone isn’t an option. So we have non-present but necessary persons, or rather: a necessary number of additional persons. Then even person-affecting views should arguably say that, if you create someone anyway, a great life is better than a marginally good one.
But in the case of comparing A+ and Z (or variants) the additional people can’t be treated as necessary because A is also an option.
Then, I think there are ways to interpret Dasgupta’s view as compatible with “ethics being about affecting persons”, step by step:
Step 1 rules out options based on pairwise comparisons within the same population, or between populations with the same number of people. Because we never compare existence to nonexistence at this step (we only compare the same people, or the same number of people, as in nonidentity), it is arguably about affecting persons.
Step 2 is just necessitarianism on the remaining options. Definitely about affecting persons.
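To make the two steps concrete, here is a minimal sketch in Python. The welfare numbers are illustrative assumptions of mine, since exact figures aren’t fixed here: 1,000 originals at 100 in A; originals at 101 plus 1,000 extras at 1 in A+; 2,000 people at 60 in Z.

```python
# Sketch of a Dasgupta-style two-step choice rule (illustrative numbers assumed).
# Each option maps a group name to (per-person welfare, group size).
options = {
    "A":  {"originals": (100, 1000)},
    "A+": {"originals": (101, 1000), "extras": (1, 1000)},
    "Z":  {"originals": (60, 1000), "extras": (60, 1000)},
}

def size(opt):
    return sum(n for _, n in opt.values())

def total(opt):
    return sum(w * n for w, n in opt.values())

# Step 1: only same-number comparisons (here via totals); rule out beaten options.
# Existence is never compared with nonexistence at this step.
survivors = {
    name for name, opt in options.items()
    if not any(size(other) == size(opt) and total(other) > total(opt)
               for other in options.values())
}
# Z beats A+ (same size, higher total), so A+ is ruled out; A is never compared.

# Step 2: necessitarianism among the survivors: best for the necessary people.
best = max(survivors, key=lambda name: options[name]["originals"][0])
print(sorted(survivors), best)  # ['A', 'Z'] A
```

With these numbers, step 1 eliminates A+ (Z wins the same-number comparison) and step 2 then picks A over Z for the necessary people.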
These other views also seem compatible with “ethics being about affecting persons”:

- The view that makes (wide or narrow) necessitarian utilitarian comparisons pairwise while ignoring alternatives, so it gives A<A+, A+<Z, Z<A, a cycle (see the sketch after this list).
- Actualism
- The procreation asymmetry
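Using the same assumed numbers as in the earlier sketch, the first of these views can be made concrete: compare each pair by the total welfare of the people who exist in both options (treating the extras in A+ and Z as the same people, another assumption), and a cycle results.

```python
# Pairwise necessitarian utilitarian comparisons, ignoring alternatives
# (illustrative welfare numbers assumed, as before).
options = {
    "A":  {"originals": (100, 1000)},
    "A+": {"originals": (101, 1000), "extras": (1, 1000)},
    "Z":  {"originals": (60, 1000), "extras": (60, 1000)},
}

def verdict(x, y):
    # Only people who exist in both options count; compare their total welfare.
    shared = options[x].keys() & options[y].keys()
    tx = sum(options[x][g][0] * options[x][g][1] for g in shared)
    ty = sum(options[y][g][0] * options[y][g][1] for g in shared)
    return x if tx > ty else y

for pair in [("A", "A+"), ("A+", "Z"), ("Z", "A")]:
    print(pair, "->", verdict(*pair))
# ('A', 'A+') -> A+  (originals only: 101,000 > 100,000)
# ('A+', 'Z') -> Z   (everyone shared: 120,000 > 102,000)
# ('Z', 'A')  -> A   (originals only: 100,000 > 60,000), so we cycle
```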
Anyway, I feel like we’re nitpicking here about what deserves the label “person-affecting” or “being about affecting persons”.
I wouldn’t agree on the first point, because making Dasgupta’s step 1 the “step 1” is, as far as I can tell, not justified by any basic principles. Ruling out Z first seems more plausible, as Z negatively affects the present people, even quite strongly so compared to A and A+. Ruling out A+ is only motivated by an arbitrary-seeming decision to compare just A+ and Z first, merely because they have the same population size (...so what?). The fact that non-existence is not involved here (a comparison to A) is just a result of that decision, not of there really existing just two options.
Alternatively, there is the regret argument: that we would “realize”, after choosing A+, that we made a mistake. But that intuition doesn’t seem to be based on any strong principle either. (The intuition could also be misleading because we perhaps don’t tend to imagine A+ as locked in.)
I agree though that the classification “person-affecting” alone probably doesn’t capture a lot of potential intricacies of various proposals.
We should separate whether the view is well-motivated from whether it’s compatible with “ethics being about affecting persons”. It’s based only on comparisons between counterparts, never between existence and nonexistence. That seems compatible with “ethics being about affecting persons”.
We should also separate plausibility from whether it would follow on stricter interpretations of “ethics being about affecting persons”. An even stricter interpretation would also tell us to give less weight to, or ignore, nonidentity differences, using essentially the same arguments you make for A+ over Z, so I think your arguments prove too much. For example, compare:
1. Alice with welfare level 10 and 1 million people with welfare level 1 each.
2. Alice with welfare level 4 and 1 million different people with welfare level 4 each.
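A small sketch of how the narrow and wide verdicts come apart here. The wide view is modeled by matching the million contingent people as counterparts and comparing totals; that matching rule is my assumption, not something specified above.

```python
# Narrow vs wide person-affecting verdicts on the Alice example.
# Alice is the only necessary person; the million others differ in identity.
option1 = {"alice": 10, "contingents": [1] * 1_000_000}
option2 = {"alice": 4,  "contingents": [4] * 1_000_000}

# Narrow view: only identity-tracked necessary people (Alice) count.
narrow = "1" if option1["alice"] > option2["alice"] else "2"

# Wide view: the necessary *number* of contingent people are matched as
# counterparts, so their welfare counts too.
total1 = option1["alice"] + sum(option1["contingents"])
total2 = option2["alice"] + sum(option2["contingents"])
wide = "1" if total1 > total2 else "2"

print(narrow, wide)  # 1 2: the narrow view picks 1, the wide view picks 2
```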
You said “Ruling out Z first seems more plausible, as Z negatively affects the present people, even quite strongly so compared to A and A+.” The same argument would support 1 over 2.
Then you said “Ruling out A+ is only motivated by an arbitrary-seeming decision to compare just A+ and Z first, merely because they have the same population size (...so what?).” Similarly, I could say “Picking 2 is only motivated by an arbitrary decision to compare contingent people, merely because there’s a minimum number of contingent people across outcomes (… so what?)”
So, similar arguments support narrow person-affecting views over wide ones.
The fact that non-existence is not involved here (a comparison to A) is just a result of that decision, not of there really existing just two options.
I think ignoring irrelevant alternatives has some independent appeal. Dasgupta’s view does that at step 1, but not at step 2. So, it doesn’t always ignore them, but it ignores them more than necessitarianism does.
I can further motivate Dasgupta’s view, or something similar:
There are some “more objective” facts about axiology or what we should do that don’t depend on who presently, actually or across all outcomes necessarily exists (or even wide versions of this). What we should do is first constrained by these “more objective” facts. Hence something like step 1. But these facts can leave a lot of options incomparable or undominated/permissible. I think all views that are complete, transitive and independent of irrelevant alternatives (IIA) are kind of implausible (e.g. given the impossibility theorems of Arrhenius). Still, there are some things the most plausible of these views can agree on, including that Z>A+.
Z>A+ follows from Harsanyi’s theorem, extensions to variable-population cases and other utilitarian theorems (e.g. McCarthy et al., 2020, Theorem 3.5; Thomas, 2022, sections 4.3 and 5; Gustafsson et al., 2023; Blackorby et al., 2002, Theorem 3). It also follows from anonymous versions of total utilitarianism, average utilitarianism, prioritarianism, egalitarianism, rank-discounted utilitarianism, maximin/leximin, variable value theories and critical-level utilitarianism. Of anonymous, monotonic (Pareto-respecting), transitive, complete and IIA views, it’s only really (partially) ~anti-egalitarian views (e.g. increasing marginal returns to additional welfare, maximax/leximax, geometrism, views with positive lexical thresholds), which sometimes ~prioritize the better off more than ~proportionately, that reject Z>A+, as far as I know. That’s nearly a consensus in favour of Z>A+, and the dissidents have more plausible counterparts that support Z>A+.
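As a rough check, here is a sketch evaluating Z against A+ under several of these anonymous views, again with my assumed illustrative numbers (1,000 at 101 plus 1,000 at 1 in A+; 2,000 at 60 in Z). Only the anti-egalitarian dissident (maximax) rejects Z>A+.

```python
import math

# Anonymous welfare distributions (illustrative numbers assumed).
a_plus = [101] * 1000 + [1] * 1000
z = [60] * 2000

views = {
    "total":           lambda pop: sum(pop),
    "average":         lambda pop: sum(pop) / len(pop),
    "prioritarian":    lambda pop: sum(math.sqrt(w) for w in pop),  # concave transform
    "maximin":         lambda pop: min(pop),
    # Rank-discounted: worst off weighted most, geometrically decreasing weights.
    "rank-discounted": lambda pop: sum(0.99**i * w for i, w in enumerate(sorted(pop))),
    "maximax (anti-egalitarian)": lambda pop: max(pop),
}

for name, value in views.items():
    print(f"{name}: {'Z > A+' if value(z) > value(a_plus) else 'A+ > Z'}")
# Every view above except maximax ranks Z above A+.
```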
On the other hand, there’s more disagreement on A vs A+, and on A vs Z.
Whether or not this step is person-affecting could depend on what kinds of views we use or which facts we’re constrained by, but I’m less worried about that than about meeting what seem to me plausible requirements for axiology.
After being constrained by the “more objective” facts in step 1, we should (or are at least allowed to) pick between the remaining permissible options in favour of necessary people (or by minimizing harm, or some other person-affecting principle). Other people wouldn’t have reasonable impartial grounds for complaint about our decisions, because we already addressed the “more objective” impartial facts in step 1.
If you were going to defend utilitarian necessitarianism, i.e. maximizing the total utility of necessary people, you’d need to justify the utilitarian bit. But the most plausible justifications for the utilitarian bit would end up being justifications for Z>A+, unless you restrict them apparently arbitrarily. So then you ask: am I a necessitarian first, or a utilitarian first? If you’re a utilitarian first, you end up with something like Dasgupta’s view. If you’re a necessitarian first, you end up with utilitarian necessitarianism.
Similarly if you substitute a different wide, anonymous, monotonic, non-anti-egalitarian view for the utilitarian bit.
You said “Ruling out Z first seems more plausible, as Z negatively affects the present people, even quite strongly so compared to A and A+.” The same argument would support 1 over 2.
Granted, but this example presents just a binary choice, with none of the added complexity of choosing between three options, so we can’t infer much from it.
Then you said “Ruling out A+ is only motivated by an arbitrary-seeming decision to compare just A+ and Z first, merely because they have the same population size (...so what?).” Similarly, I could say “Picking 2 is only motivated by an arbitrary decision to compare contingent people, merely because there’s a minimum number of contingent people across outcomes (… so what?)”
Well, there is a necessary number of “contingent people”, which seems similar to having necessary (identical) people, since in both cases not creating anyone is not an option, unlike in Huemer’s three-choice case, where A is an option.
I think ignoring irrelevant alternatives has some independent appeal.
I think there is a quite straightforward argument why IIA is false. The paradox arises because we seem to have a cycle of binary comparisons: A+ is better than A, Z is better than A+, A is better than Z. The issue seems to be that this assumes we can just break a three-option comparison down into three binary comparisons, which is arguably false, since it can lead to cycles. And when we want to avoid cycles while keeping binary comparisons, we have to assume we do some of the binary choices “first” and thereby rule out one of the remaining ones, removing the cycle. So we need either a principled way of deciding on the “evaluation order” of the binary comparisons, or to reject the assumption that “x compared to y” is necessarily the same as “x compared to y, given z”. If the latter removes the cycle, that is.
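Here is a minimal sketch of that order-dependence, taking the cyclic binary verdicts of the paradox as given: whichever comparison we make first fully determines the final choice.

```python
# With cyclic pairwise verdicts, sequential binary elimination depends
# entirely on the evaluation order of the comparisons.
beats = {("A+", "A"), ("Z", "A+"), ("A", "Z")}  # the paradox's cycle

def winner(x, y):
    return x if (x, y) in beats else y

for first, second, third in [("A", "A+", "Z"), ("A+", "Z", "A"), ("A", "Z", "A+")]:
    print(f"compare {first} vs {second} first ->", winner(winner(first, second), third))
# compare A vs A+ first -> Z
# compare A+ vs Z first -> A
# compare A vs Z first -> A+
```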
Another case where IIA leads to an absurd result is preference aggregation. Assume three equally sized groups (1, 2, 3) have these individual preferences:
1. x ≻ y ≻ z
2. y ≻ z ≻ x
3. z ≻ x ≻ y
The obvious, and obviously only correct, aggregation would be x∼y∼z, i.e. indifference between the three options. This is different from what would happen if you took out any one of the three options and made it a binary choice, since each binary choice has a majority. So the “irrelevant” alternatives are not actually irrelevant, since they can determine a choice-relevant global property like a cycle. So IIA is false, since it would lead to a cycle. This seems not unlike the cycle we get in the repugnant conclusion paradox, although there the solution is arguably not that all three options are equally good.
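A quick tally of this example shows that every binary choice is settled by a 2-to-1 majority, even though the majorities together cycle (Condorcet’s paradox):

```python
from itertools import permutations

# One ranking per equally sized group, best to worst.
rankings = [("x", "y", "z"), ("y", "z", "x"), ("z", "x", "y")]

def majority_prefers(a, b):
    # Count groups ranking a above b; a strict majority decides the binary choice.
    votes = sum(1 for r in rankings if r.index(a) < r.index(b))
    return votes > len(rankings) / 2

for a, b in permutations("xyz", 2):
    if majority_prefers(a, b):
        print(f"{a} beats {b} by majority")
# x beats y, y beats z, z beats x: each binary choice is decided,
# yet the three verdicts together form a cycle.
```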
There are some “more objective” facts about axiology or what we should do that don’t depend on who presently, actually or across all outcomes necessarily exists (or even wide versions of this). What we should do is first constrained by these “more objective” facts. Hence something like step 1.
I don’t see why this would be better than doing other comparisons first. As I said, this is the strategy of resolving a three-way choice with binary comparisons, but in a particular order, so that we end up with two comparisons in total instead of three, since we rule out one option early. The question is why doing this or that binary comparison first, rather than another, would be better. If we insist on comparing A and Z first, we would obviously rule out Z first, so we end up only comparing A and A+, while the comparison between A+ and Z is never made.
Granted, but this example presents just a binary choice, with none of the added complexity of choosing between three options, so we can’t infer much from it.
I can add any number of other options, as long as they respect the premises of your argument and are “unfair” to the necessary number of contingent people. What specific added complexity matters here and why?
I think you’d want to adjust your argument, replacing “present” with something like “the minimum number of contingent people” (and decide how to match counterparts if there are different numbers of contingent people). But this is moving to a less strict interpretation of “ethics being about affecting persons”. And then I could make your original complaint here against Dasgupta’s approach against the less strict wide interpretation.
Well, there is a necessary number of “contingent people”, which seems similar to having necessary (identical) people.
But it’s not the same, and we can argue against it on a stricter interpretation. The difference seems significant, too: no specific contingent person is or would be made worse off. They’d have no grounds for complaint. If you can’t tell me for whom the outcome is worse, why should I care? (And then I can just deny each reason you give as not in line with my intuitions, e.g. ”… so what?”)
Stepping back, I’m not saying that wide views are wrong. I’m sympathetic to them. I also have some sympathy for (asymmetric) narrow views for roughly the reasons I just gave. My point is that your argument or the way you argued could prove too much if taken to be a very strong argument. You criticize Dasgupta’s view from a stricter interpretation, but we can also criticize wide views from a stricter interpretation.
I could also criticize presentism, necessitarianism and wide necessitarianism for being insensitive to the differences between A+ and Z for persons affected. The choice between A, A+ and Z is not just a choice between A and A+ or between A and Z. Between A+ and Z, the “extra” persons exist in both and are affected, even if A is available.
I think there is a quite straightforward argument why IIA is false. (...)
I think these are okay arguments, but IIA still has independent appeal, and here you need a specific argument for why Z vs A+ depends on the availability of A. If the argument is that we should do what’s best for necessary people (or necessary people + necessary number of contingents and resolving how to match counterparts), where the latter is defined relative to the set of available options, including “irrelevant options”, then you’re close to assuming IIA is false, rather than defending it. Why should we define that relative to the option set?
And there are also other resolutions compatible with IIA. We can revise our intuitions about some of the binary choices, possibly to incomparability, which is what Dasgupta’s view does in the first step. Or we can just accept cycles.[1] Or we can reject full aggregation, or aggregate in different ways, but we can consider other thought experiments for those possibilities.
I don’t see why this would be better than doing other comparisons first.
It is constrained by “more objective” impartial facts. Going straight for necessitarianism first seems too partial, and unfair in other ways (by prioritarian, egalitarian and most other plausible impartial standards). If you totally ignore the differences in welfare for the extra people between A+ and Z (not just outweighed, but taken to be irrelevant) when A is available, it seems you’re being infinitely partial to the necessary people.[2] Impartiality is somewhat more important to me here than my person-affecting intuitions.
I’m not saying this is a decisive argument, or that there is any, but it’s one that appeals to my intuitions. If your person-affecting intuitions are more important, or you don’t find necessitarianism (or whichever view) objectionably partial, then you could be more inclined to compare another way.
We’d still have to make choices in practice, though, and a systematic procedure would violate a choice-based version of IIA (whichever option we choose in the 3-option case of A, A+, Z would not be chosen in some binary choice with one of the other available options).