In Parfit’s case, we have a good explanation for why you’re rationally required to bind yourself: doing so is best for you.
The more general explanation is that it's best according to your preferences, which can also reflect, or just be, your moral views. It's not necessarily a matter of personal welfare, narrowly construed. We have similar thought experiments for total utilitarianism. As long as you
expect to do more to further your own values/preferences with your own money than the driver would with your money,
don't disvalue breaking promises (or don't disvalue doing so enough), and
can't bind yourself to paying and know this,
then you'd predict that you won't pay, and so you'd be left behind.
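To make the structure of that prediction concrete, here's a minimal sketch with made-up numbers; the specific values and the simple additive treatment of promise-breaking are assumptions for illustration only, not part of the thought experiment:

```python
# Toy model of the Parfit's hitchhiker structure above. All numbers are hypothetical,
# and "value" means value by your own preferences/moral views, not narrow self-interest.

value_if_you_keep_money = 100      # good you expect to do with the money yourself
value_if_driver_gets_money = 40    # good you expect the driver to do with it
disvalue_of_breaking_promise = 10  # how much you disvalue the broken promise (not enough)
can_bind_yourself = False          # no way to commit in advance to paying

def will_pay_once_in_town():
    # Once rescued, paying only changes who holds the money and whether the promise is broken.
    return value_if_driver_gets_money > value_if_you_keep_money - disvalue_of_breaking_promise

def driver_takes_you():
    # The driver is a reliable predictor: he only helps if he predicts you'll pay,
    # or if you've genuinely bound yourself to pay.
    return can_bind_yourself or will_pay_once_in_town()

print(will_pay_once_in_town())  # False: 40 < 100 - 10
print(driver_takes_you())       # False, so you're left behind
```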
Perhaps you’re morally required to bind yourself in Two-Shot Non-Identity, but why?
Generically, if and because you hold a wide PAV, and binding yourself leads to the best outcome ahead of time on that view. There could be various reasons why someone holds a wide PAV. It's not about it being better for Bobby or for Amy; it's better "for people", understood in wide person-affecting terms.
One rough argument for wide PAVs could be something like this, based on Frick, 2020 (but without the asymmetry):
1. If a person A existed, exists, or will exist in an outcome,[1] then the moral standard of "A's welfare" applies in that outcome, and its degree of satisfaction is just A's lifetime (or future) welfare.
2. Between two outcomes X and Y, if (a) standard x applies in X and standard y applies in Y (and either x and y are identical standards, or neither applies in both X and Y), (b) x and y are of "the same kind", (c) x is at least as satisfied in X as y is in Y, and (d) all else is equal, then X ≿ Y (X is at least as good as Y).
By analogy with promises: if keeping promises matters in itself, then it's better to make a promise you'll keep than a promise you'll break, all else equal.
With 1 and 2 (and assuming that different people's welfare standards are of "the same kind" and their welfare levels are comparable), "Just Bobby" is better than "Just Amy", because the moral standard of Bobby's welfare would be more satisfied in "Just Bobby" than the moral standard of Amy's welfare would be in "Just Amy". 2 is basically Pareto for standards, but anonymous, i.e. insensitive to the specific identities of the standards, as long as they are of "the same kind".
It’s not better (or worse) for a moral standard to apply than to not apply, all else equal.
So creating Bobby isn’t better than not doing so, unless we have some other moral standard(s) to tell us that.[2]
This is similar to Existence Anticomparativism.
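Here's a minimal sketch of one way to read the anonymous principle in 2, with hypothetical welfare numbers (60 for Amy, 90 for Bobby); the representation of outcomes, the function name, and the matching-within-kinds reading are my own glosses for illustration, not Frick's formulation:

```python
# A toy model of the anonymous "Pareto for standards" idea in 2. An outcome is
# represented by the standards that apply in it: a dict mapping each kind of
# standard to the list of satisfaction levels of the standards of that kind.
# All welfare numbers are hypothetical.

def at_least_as_good(X, Y):
    """True if the principle delivers X ≿ Y; False means no such verdict (not that Y is better)."""
    if set(X) != set(Y):
        return False  # different kinds of standards apply: the principle is silent
    for kind in X:
        xs = sorted(X[kind], reverse=True)
        ys = sorted(Y[kind], reverse=True)
        if len(xs) != len(ys):
            return False  # different numbers of standards of this kind: no verdict
        # Look for a one-to-one matching of same-kind standards under which every
        # standard is at least as satisfied in X as its counterpart in Y.
        if any(x < y for x, y in zip(xs, ys)):
            return False
    return True

just_amy   = {"welfare": [60]}  # only Amy's welfare standard applies
just_bobby = {"welfare": [90]}  # only Bobby's welfare standard applies
neither    = {"welfare": []}    # no welfare standard applies

print(at_least_as_good(just_bobby, just_amy))  # True: Bobby's standard would be more satisfied
print(at_least_as_good(just_amy, just_bobby))  # False
# Creating Bobby vs. creating no one: the principle alone gives no verdict either way.
print(at_least_as_good(just_bobby, neither))   # False
print(at_least_as_good(neither, just_bobby))   # False
```

On this reading, "Just Bobby" beats "Just Amy", but creating Bobby comes out neither better nor worse than creating no one unless some further standard is added.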
And suppose that (for whatever reason) you can’t bind yourself in Two-Shot Non-Identity, so that the choice to create Bobby (having previously created Amy) remains open. In that case, it seems like our wide view must again make permissibility depend on lever-lashing or past choices.
I would say the permissibility of choices depends on what options are still available, and so it can change when options that were available before become unavailable. "Just Amy" can be impermissible ahead of time because "Just Bobby" is still available, and then become permissible once "Just Bobby" is no longer available. If Amy already exists, as you assume, then "Just Bobby" is no longer available. I explain more here.
I guess that means it depends on lever-lashing? But if that’s it, I don’t find that very objectionable, and it’s similar to Parfit’s hitchhiker.
[1] Like the B-theory of time or eternalism.
[2] This would need to be combined with the denial of many other particular standards, e.g. total welfare as the same standard, or as standards of "the same kind", across all populations. If we stop with only the standards in 1, then we just get anonymous Pareto, but this leaves many welfare tradeoffs between people incomparable. We could extend it in various ways, e.g. for each set S of people who will ever exist in an outcome, the moral standard of S's total welfare applies, but it's only of "the same kind" for sets of people with the same number of people.
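To make the extension at the end of [2] a bit more concrete, here's a hypothetical continuation of the sketch above (reusing at_least_as_good); treating a population's total welfare as a standard whose kind is indexed by population size is just my illustration of one such extension:

```python
# Hypothetical continuation of the earlier sketch. All welfare numbers are made up.
# The idea: in addition to each person's welfare standard, each outcome also gets a
# standard for the total welfare of everyone who ever exists there, with a kind indexed
# by population size, so totals only count as being of "the same kind" across same-sized
# populations. Each comparison below isolates one group of standards.

# With only the individual welfare standards, a tradeoff between two people leaves
# anonymous Pareto silent in both directions:
print(at_least_as_good({"welfare": [60, 20]}, {"welfare": [50, 40]}))  # False (no verdict)
print(at_least_as_good({"welfare": [50, 40]}, {"welfare": [60, 20]}))  # False (no verdict)

# The added total-welfare standards can compare same-sized populations by their totals...
print(at_least_as_good({("total welfare", 2): [90]}, {("total welfare", 2): [80]}))  # True

# ...but they're of a different kind for different-sized populations, so still no verdict:
print(at_least_as_good({("total welfare", 1): [100]}, {("total welfare", 2): [90]}))  # False
```

How such total-welfare standards interact with the individual welfare standards when they pull in different directions is left open in this sketch.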