In section 3, you illustrate with Tomi's argument:
|   | One hundred people | Ten billion different people |
|---|---|---|
| A | 40 | - |
| B | 41 | 41 |
| C | 40 | 100 |
And in 3.1, you write:
How might advocates of PAVs respond to Tomi's argument? One possibility is to claim that betterness is option-set dependent: whether an outcome X is better than an outcome Y can depend on what other outcomes are available as options to choose. In particular, advocates of PAVs could claim:
- B is better than A when B and A are the only options.
- B is not better than A when C is also an option.
And advocates of PAVs could defend the second bullet point in the following way: when C is available, B harms (or is unjust to) the ten billion extra people, because these extra people are better off in C. And this harm/injustice prevents B from being better than A.
And in 3.2 you explain why this isn't a good response. I mostly agree.
I think a better response is based on reasoning like the following:
If I were a member of A (and the hundred people are the same hundred people in A, B and C) and were to choose to bring about B, then I would realize that C would have been better for all of the now necessary people (including the additional ten billion), so would switch to C if able, or regret picking B over C. But C is worse than A for necessary people, so anticipating this reasoning from B to C, I rule out B ahead of time to prevent it.
In this sense, we can say B is not better than A when C is also an option.[1]
Something like Dasgupta's method (Dasgupta 1994; Broome 1996) can extend this. The idea is to first rule out any option that is impartially worse in a binary choice (pairwise comparison) than another option with exactly the same set of people (or the same number of people, under a wide view). This rules out B, because C is better than it. That leaves a binary choice between A and C. Then you pick whichever remaining option is best for the necessary people (or rank the remaining options by how good they are for necessary people); A and C are now equivalent on that score, so either is fine.
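The two-stage procedure can be sketched in code. This is only a minimal sketch, not anything from Dasgupta or Broome directly: the utilities (40/41/100) and population sizes come from the table above, and the narrow "same set of people" test is hard-coded as equality of the extra population.

```python
# Minimal sketch of the two-stage procedure described above.
# Utilities and populations are taken from the table: A gives the 100
# necessary people 40 each; B gives them 41 and the ten billion extra
# people 41 each; C gives them 40 and the same ten billion 100 each.
options = {
    "A": {"necessary": 40, "extra": 0, "extra_pop": None},
    "B": {"necessary": 41, "extra": 41, "extra_pop": "ten billion"},
    "C": {"necessary": 40, "extra": 100, "extra_pop": "ten billion"},
}

N_NECESSARY = 100
N_EXTRA = 10_000_000_000

def same_people(x, y):
    # Narrow reading: same extra population (or none in both).
    return x["extra_pop"] == y["extra_pop"]

def impartial_total(x):
    # Total utility across everyone who exists in the outcome.
    extra = N_EXTRA if x["extra_pop"] is not None else 0
    return N_NECESSARY * x["necessary"] + extra * x["extra"]

def dasgupta_choose(opts):
    # Stage 1: rule out any option impartially worse than some option
    # with exactly the same set of people (this rules out B, beaten by C).
    survivors = {
        name: x for name, x in opts.items()
        if not any(
            other != name
            and same_people(x, opts[other])
            and impartial_total(opts[other]) > impartial_total(x)
            for other in opts
        )
    }
    # Stage 2: among survivors, keep whatever is best for the necessary
    # people (A and C tie at 40, so both remain).
    best = max(x["necessary"] for x in survivors.values())
    return sorted(name for name, x in survivors.items() if x["necessary"] == best)

print(dasgupta_choose(options))  # ['A', 'C']
```

A is never compared impartially against B or C at stage 1, because it doesn't contain the same set (or number) of people; B is eliminated by C alone.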
Taken as an argument that B isn't better than A, this response doesn't seem so plausible to me. In favour of B being better than A, we can point out: B is better than A for all of the necessary people, and pretty good for all the non-necessary people. Against B being better than A, we can say something like: I'd regret picking B over C. The former rationale seems more convincing to me, especially since it seems like you could also make a more direct, regret-based case for B being better than A: I'd regret picking A over B.
But taken as an argument that A is permissible, this response seems more plausible. Then I'd want to appeal to my arguments against deontic PAVs.
A steelman could be to just set it up like a hypothetical sequential choice problem consistent with Dasgupta's approach:
1. Choose between A and B.
2. If you chose B in 1, choose between B and C.

or

1. Choose between A and (B or C).
2. If you chose B or C in 1, choose between B and C.
In either case, "picking B" (including "picking B or C") in 1 really means picking C, if you know you'd pick C in 2 and reason by backwards induction.
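The backwards-induction step can be sketched as a toy two-stage game. Again a minimal sketch under assumptions: the necessary-people utilities come from the table, and the stage-2 switch from B to C is hard-coded to represent "C is better than B for everyone once the extra people exist."

```python
# Toy backwards-induction sketch of the sequential choice above.
# Necessary-people utilities are taken from the table.
necessary_utility = {"A": 40, "B": 41, "C": 40}

def stage2(choice):
    # Having picked B, the chooser would switch to C, which is better
    # for all of the now-necessary people (including the extra ten billion).
    return "C" if choice == "B" else choice

def stage1(opts):
    # Anticipate stage 2: evaluate each stage-1 option by where it
    # actually ends up, then keep those best for the (originally)
    # necessary people.
    outcomes = {opt: stage2(opt) for opt in opts}
    best = max(necessary_utility[o] for o in outcomes.values())
    return sorted(opt for opt in opts if necessary_utility[outcomes[opt]] == best)

# "Picking B" really ends at C, which gives the necessary people 40,
# the same as A, so neither stage-1 option beats the other.
print(stage1(["A", "B"]))  # ['A', 'B']
```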
The fact that A is at least as good as (or not worse than and incomparable to) B could follow because B actually just becomes C, which is equivalent to A once we've ruled out B. It's not just facts about direct binary choices that decide rankings ("betterness"), but the reasoning process as a whole and how we interpret the steps.
At any rate, I don't think it's that important whether we interpret the rankings as "betterness" as usually understood, with its usual sensitivities and only those. I think you've set up a kind of false dichotomy between permissibility and betterness as usually understood. A third option is rankings not intended to be interpreted as betterness as usual. Or, we could interpret betterness more broadly.
Having separate rankings of options apart from or instead of strict permissibility facts can still be useful, say because we want to adopt something like a scalar consequentialist view over those rankings. I still want to say that C is "better" than B, which is consistent with Dasgupta's approach. There could be other options like A, with the same 100 people, but where everyone gets 39 utility instead of 40, and another where everyone gets 20. I still want to say 39 is better than 20, and that ending up with 39 instead of 40 is not so bad compared to ending up with 20, which would be a lot worse.
(Dasgupta's method can also be made asymmetric.) Or, if we want to avoid axiological claims, we could instead say that B is not more choiceworthy than A when C is also an option.