Interesting article!
I think my issue with the argument in section 3 is that it puts real and hypothetical people on the same footing, which is the very thing that a person-affecting view (PAV) rejects.
If you label the left half of the table “100 real people” and the right half “ten billion hypothetical people”, then from the perspective of someone in world A who holds a PAV, B is preferable to A, but C is worse than B, because the hypothetical people don’t count. If you think we’ll end up in world B, then bringing about world B is worth it, because it makes existing people happier; but if you think world B will later turn into world C, then we’re back to neutral, because ultimately it makes no difference to the real people.
But if someone has already gone ahead and brought about world B, then the calculation changes: now both sides of the table describe real people, so C becomes preferable. The ten billion don’t enter the moral equation until they already exist (or are certain to exist).
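To make these pairwise comparisons concrete, here is a minimal Python sketch with made-up welfare numbers (the article’s actual table isn’t reproduced in this thread, and the `pav_value` helper is purely illustrative). The figures are chosen so that B beats A and C loses to B for the original 100, while C beats B once everyone counts, and also so that C is worse than A for the originals, matching the assumption mentioned later in the thread.

```python
# Illustrative sketch of the pairwise comparisons above, using made-up welfare
# values. Each world maps a person-group to (population size, average welfare).

worlds = {
    # 100 original ("necessary") people; ten billion potential extra people
    "A": {"originals": (100, 80), "extras": (0, 0)},                 # extras never exist
    "B": {"originals": (100, 90), "extras": (10_000_000_000, 10)},
    "C": {"originals": (100, 70), "extras": (10_000_000_000, 40)},
}

def pav_value(world: str, who_counts: set[str]) -> float:
    """Total welfare in `world`, counting only the groups in `who_counts`."""
    return sum(n * w for group, (n, w) in worlds[world].items() if group in who_counts)

# From the perspective of world A, only the original 100 are real:
counts_in_A = {"originals"}
print(pav_value("B", counts_in_A) > pav_value("A", counts_in_A))  # True: B > A
print(pav_value("C", counts_in_A) < pav_value("B", counts_in_A))  # True: C < B

# Once world B is actual, the ten billion are real too, and the ranking flips:
counts_in_B = {"originals", "extras"}
print(pav_value("C", counts_in_B) > pav_value("B", counts_in_B))  # True: C > B
```

The only thing doing the work is which groups are passed in as counting: the ranking of B and C flips as soon as the ten billion are included.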
The other side of this, I’d say, is that deciding not to bring someone into existence is always morally neutral. But if you do decide to bring someone into existence, then you have an obligation towards them to make their life worth living.
Yes, nice points. If one is committed to contingent people not counting, then one has to say that C is worse than B. But that still seems to me an implausible verdict, especially if either B or C is going to be chosen (and hence those contingent people are going to become actual).
It seems like the resulting view also runs into problems of sequential choice. If B is best out of {A, B, C}, but C is best out of {B, C}, then perhaps what you’re required to do is initially choose B and then (once A is no longer available) later switch to C, even if doing so is costly. And that seems like a bad feature of a view, since you could have costlessly chosen C in your first choice.
I think you’d still just choose A at the start here if you’re considering what will happen ahead of time and reasoning via backwards induction on behalf of the necessary people. (Assuming C is worse than A for the original necessary people.)
If you don’t use backwards induction, you’re going to run into a lot of suboptimal behaviour in sequential choice problems, even if you satisfy expected utility theory axioms in one-shot choices.
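To illustrate the backwards-induction point, here is a continuation of the earlier sketch, reusing the same made-up totals plus a hypothetical `SWITCH_COST` that I’m assuming falls on the original people: a myopic chooser picks B from {A, B, C} and then pays to switch to C once the ten billion are actual, whereas a chooser who reasons backwards on behalf of the necessary people picks A up front.

```python
# Sketch of the two-stage problem, reusing the illustrative totals from the
# snippet above. Stage 1: choose A outright, or choose B; choosing B leaves a
# later stage-2 choice between staying in B and switching to C at some cost.
# All numbers (including SWITCH_COST) are hypothetical.

ORIGINALS_WELFARE = {"A": 8_000, "B": 9_000, "C": 7_000}                # necessary 100 only
ALL_WELFARE = {"A": 8_000, "B": 100_000_009_000, "C": 400_000_007_000}  # everyone counted
SWITCH_COST = 500  # welfare the originals lose if we move from B to C later

def stage2_choice() -> str:
    """Once B is actual, the ten billion count, so the PAV agent switches to C."""
    return "C" if ALL_WELFARE["C"] - SWITCH_COST > ALL_WELFARE["B"] else "B"

def stage1_myopic() -> str:
    """Ignore the future: rank {A, B, C} by the necessary people's welfare alone."""
    return max(["A", "B", "C"], key=ORIGINALS_WELFARE.get)  # picks B

def stage1_backwards_induction() -> str:
    """Anticipate that choosing B ends in C (minus the switch cost), and compare
    that terminal outcome with A, again on behalf of the necessary people."""
    outcome_if_b = ORIGINALS_WELFARE[stage2_choice()] - SWITCH_COST
    return "B" if outcome_if_b > ORIGINALS_WELFARE["A"] else "A"

print(stage1_myopic())               # "B" -> later switches to C and pays the cost
print(stage1_backwards_induction())  # "A" -> avoids the costly detour entirely
```

This reproduces the worry above: the myopic chooser ends up in C having paid the switch cost, even though C was available costlessly at the first choice, while the backwards-induction chooser, assuming C is worse than A for the originals, simply stays with A.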