In 5.2.3. Intermediate wide views, you write:

Views of this kind give more plausible verdicts in the previous cases – both the lever case and the enquiring friend case – but any exoneration is partial at best. The verdict in the friend case remains counterintuitive when we stipulate that your friend foresaw the choices that they would face. And although intentions are often relevant to questions of blameworthiness, I’m doubtful whether they are ever relevant to questions of permissibility. Certainly, it would be a surprising downside of wide views if they were committed to that controversial claim.
Rather than intentions as mere plans, I imagine this more like precommitment (maybe resolute choice?[1]), i.e. binding yourself (psychologically or physically) to deciding a certain way in the future and so preventing your future self from deviating from your plan. Precommitment is also a natural solution to avoid being left behind as Parfit’s hitchhiker:
Suppose you’re out in the desert, running out of water, and soon to die—when someone in a motor vehicle drives up next to you. Furthermore, the driver of the motor vehicle is a perfectly selfish ideal game-theoretic agent, and even further, so are you; and what’s more, the driver is Paul Ekman, who’s really, really good at reading facial microexpressions. The driver says, “Well, I’ll convey you to town if it’s in my interest to do so—so will you give me $100 from an ATM when we reach town?”
Now of course you wish you could answer “Yes”, but as an ideal game theorist yourself, you realize that, once you actually reach town, you’ll have no further motive to pay off the driver. “Yes,” you say. “You’re lying,” says the driver, and drives off leaving you to die.
In this case, your expectation that you’ll pay once in town has to be accurate for you to get the ride, and if you can bind yourself to paying, it will be accurate.[2]
I think this also gives us a solution to this point:
And if permissibility doesn’t depend on past choices, then it’s also wrong to pull the second lever in cases where we didn’t previously pull the first lever.
If you created Amy and had failed to bind yourself to creating Bobby by the time you created Amy, then the mistake was made in the past, not now, and you’re now free to create or not create Bobby. After having created Amy, you have to condition on the state of the world (or your evidence about it) in which Amy already exists. She is no longer contingent; only Bobby is.
Similarly, with Parfit’s hitchhiker, if you didn’t bind yourself to paying once you reach town, the mistake was made while negotiating, before the ride. But if you somehow made it into town anyway, then you no longer have to pay the driver, and it’s better not to (by assumption).[3]
I originally only wrote resolute choice, not precommitment, and then edited it to precommitment. I think precommitment is clearer and what I intended to describe. I’m less sure about resolute choice, but it is related.
I imagine you can devise similar problems for impartial views. You and the driver could both be impartial or even entirely unselfish, but have quite different moral views about what’s best and disagree about how best to use your $100. Then this becomes a problem of cooperation or moral trade.
If the driver is in fact 100% accurate, then you should expect to pay if you made it into town; you won’t actually be empirically free to choose either way. Maybe the driver isn’t 100% accurate, though, so you got lucky this time, and now don’t have to pay.
In Parfit’s case, we have a good explanation for why you’re rationally required to bind yourself: doing so is best for you.
Perhaps you’re morally required to bind yourself in Two-Shot Non-Identity, but why? Binding yourself isn’t better for Amy. And if it’s better for Bobby, it seems that can only be because existing is better for Bobby than not-existing, and then there’s pressure to conclude that we’re required to create Bobby in Just Bobby, contrary to the claims of PAVs.
And suppose that (for whatever reason) you can’t bind yourself in Two-Shot Non-Identity, so that the choice to create Bobby (having previously created Amy) remains open. In that case, it seems like our wide view must again make permissibility depend on lever-lashing or past choices. If the view says that you’re required to create Bobby (having previously created Amy), permissibility depends on past choices. If the view says that you’re permitted to decline to create Bobby (having previously created Amy), permissibility depends on lever-lashing (since, on wide views, you wouldn’t be permitted to pull both levers if they were lashed together).
In Parfit’s case, we have a good explanation for why you’re rationally required to bind yourself: doing so is best for you.
The more general explanation is that it’s best according to your preferences, which can also reflect, or just be, your moral views; it’s not necessarily a matter of personal welfare, narrowly construed. We have similar thought experiments for total utilitarianism. As long as you

- expect to further your own values/preferences more with your own money than the driver would with it,
- don’t disvalue breaking promises (or don’t disvalue it enough), and
- can’t bind yourself to paying and know this,

then you’d predict that you won’t pay, and you’d be left behind (as in the sketch below).
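To make the prediction argument concrete, here’s a minimal sketch in Python of the unbound vs. bound cases, assuming a perfectly accurate driver. The payoff numbers, and the assumption that you don’t disvalue the broken promise at all, are illustrative placeholders of mine, not anything from the thread:

```python
# Toy model of the hitchhiker decision with a perfectly accurate driver.
# The payoff numbers are illustrative placeholders, not from the thread.

SURVIVE = 1_000_000   # value, by your own lights, of reaching town alive
PAYMENT = 100         # value, by your own lights, of keeping the $100

def would_pay_in_town(bound: bool) -> bool:
    """What you would actually do once in town."""
    if bound:
        return True   # precommitment: your future self can't deviate
    # Unbound, and not disvaluing the broken promise (by assumption),
    # paying just costs you PAYMENT and buys nothing, so you wouldn't pay.
    return False

def value_of_strategy(bound: bool) -> int:
    """Value you end up with, given that the driver predicts you perfectly
    and only gives you the ride if they predict you'll pay."""
    if not would_pay_in_town(bound):
        return 0                      # left in the desert
    return SURVIVE - PAYMENT          # ride to town, minus the $100

if __name__ == "__main__":
    print("unbound:", value_of_strategy(bound=False))  # 0
    print("bound:  ", value_of_strategy(bound=True))   # 999900
```

Ahead of time, binding yourself does better by your own lights, even though, conditional on being in town unbound, not paying does better; that mirrors the point above about the mistake being made earlier.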
Perhaps you’re morally required to bind yourself in Two-Shot Non-Identity, but why?
Generically: if and because you hold a wide PAV, and binding yourself leads to the best outcome, evaluated ahead of time, on that view. There could be various reasons why someone holds a wide PAV. It’s not about it being better for Bobby or for Amy; it’s better “for people”, understood in wide person-affecting terms.
One rough argument for wide PAVs could go something like this, based on Frick (2020), but without the asymmetry:

1. If a person A existed, exists or will exist in an outcome,[1] then the moral standard of “A’s welfare” applies in that outcome, and its degree of satisfaction is just A’s lifetime (or future) welfare.

2. Between two outcomes, X and Y, if (i) standard x applies in X and standard y applies in Y (and either x and y are identical standards or neither applies in both X and Y), (ii) standards x and y are of “the same kind”, (iii) x is at least as satisfied in X as y is in Y, and (iv) all else is equal, then X ≿ Y (X is at least as good as Y).

   - For example, if keeping promises matters in itself, then it’s better to make a promise you’ll keep than a promise you’ll break, all else equal.
   - With 1 (and assuming different people result in welfare standards of “the same kind”, with comparable welfare), “Just Bobby” is better than “Just Amy”, because the moral standard of Bobby’s welfare would be more satisfied than the moral standard of Amy’s welfare.
   - This is basically Pareto for standards, but anonymous/insensitive to the specific identities of the standards, as long as they are of “the same kind”.

3. It’s not better (or worse) for a moral standard to apply than not to apply, all else equal.

   - So creating Bobby isn’t better than not doing so, unless we have some other moral standard(s) to tell us that.[2]
   - This is similar to Existence Anticomparativism (see the sketch after this list).
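Here’s a toy sketch of how 1–3 compare “Just Amy”, “Just Bobby” and creating no one. The specific welfare numbers, and the way I generalise 2 from a single pair of standards to a one-to-one matching of same-kind standards, are my own illustrative assumptions:

```python
# Toy illustration of "anonymous Pareto for standards" (2) together with (3).
# Welfare numbers and the matching rule are illustrative assumptions.
from itertools import permutations

# An outcome is the list of (kind, degree of satisfaction) pairs for the
# standards that apply in it; here, one personal-welfare standard per person.
just_amy   = [("welfare", 80)]   # only Amy ever exists, lifetime welfare 80
just_bobby = [("welfare", 90)]   # only Bobby ever exists, lifetime welfare 90
no_one     = []                  # no one is ever created

def at_least_as_good(x, y):
    """X is at least as good as Y iff Y's standards can be matched one-to-one
    with same-kind standards in X that are each at least as satisfied, with
    none left over; unmatched (applying vs. not applying) standards are
    blocked from counting either way by (3)."""
    if len(x) != len(y):
        return False
    return any(
        all(kx == ky and sx >= sy for (kx, sx), (ky, sy) in zip(perm, y))
        for perm in permutations(x)
    )

if __name__ == "__main__":
    print(at_least_as_good(just_bobby, just_amy))  # True: "Just Bobby" beats "Just Amy"
    print(at_least_as_good(just_amy, just_bobby))  # False
    print(at_least_as_good(just_bobby, no_one))    # False: creating Bobby isn't better...
    print(at_least_as_good(no_one, just_bobby))    # False: ...nor worse, by 1-3 alone
```

So, by these principles alone, “Just Bobby” comes out better than “Just Amy”, while creating someone and creating no one come out incomparable, which matches the verdicts claimed above.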
And suppose that (for whatever reason) you can’t bind yourself in Two-Shot Non-Identity, so that the choice to create Bobby (having previously created Amy) remains open. In that case, it seems like our wide view must again make permissibility depend on lever-lashing or past choices.
I would say the permissibility of choices depends on what options are still available, and so it can change when options that were available before become unavailable. “Just Amy” can be impermissible ahead of time because “Just Bobby” is still available, and then become permissible once “Just Bobby” is no longer available. If Amy already exists, as you assume, then “Just Bobby” is no longer available. I explain more here.
I guess that means it depends on lever-lashing? But if that’s it, I don’t find that very objectionable, and it’s similar to Parfit’s hitchhiker.
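Here’s a minimal sketch of that option-dependence, in the same style as the sketch above (re-implemented so it stands alone, with all welfare standards of one kind). The decision rule, an option is impermissible iff some still-available option is strictly better, and the welfare numbers are my own assumptions about how the wide view would be operationalised:

```python
# Toy sketch: permissibility relative to the options still available.
# The rule "impermissible iff some available option is strictly better",
# and the welfare numbers, are illustrative assumptions.
from itertools import permutations
from typing import List

# An outcome is the multiset of lifetime welfares of whoever ever exists in it
# (all welfare standards here are of the same kind).
JUST_AMY      = [80]
JUST_BOBBY    = [90]
AMY_AND_BOBBY = [80, 90]
NO_ONE: List[int] = []

def at_least_as_good(x: List[int], y: List[int]) -> bool:
    """Anonymous Pareto over same-kind welfare standards, as in the sketch above."""
    if len(x) != len(y):
        return False
    return any(all(a >= b for a, b in zip(p, y)) for p in permutations(x))

def strictly_better(x: List[int], y: List[int]) -> bool:
    return at_least_as_good(x, y) and not at_least_as_good(y, x)

def permissible(option: List[int], available: List[List[int]]) -> bool:
    return not any(strictly_better(alt, option) for alt in available)

if __name__ == "__main__":
    before = [JUST_AMY, JUST_BOBBY, NO_ONE]       # ahead of time
    print(permissible(JUST_AMY, before))          # False: "Just Bobby" is still available
    after = [JUST_AMY, AMY_AND_BOBBY]             # once Amy exists
    print(permissible(JUST_AMY, after))           # True: "Just Bobby" is off the table
```

On this rule, declining to create Bobby goes from impermissible to permissible purely because “Just Bobby” drops out of the available set, which is the dependence on past choices (or lever-lashing) conceded above.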
Like B-theory of time or eternalism.

This would need to be combined with the denial of many particular standards, e.g. total welfare as the same standard (or as standards of “the same kind”) across all populations. If we stop with only the standards in 1, then we just get anonymous Pareto, but this leaves many welfare tradeoffs between people incomparable. We could extend it in various ways, e.g. for each set S of people who will ever exist in an outcome, the moral standard of S’s total welfare applies, but it’s only of “the same kind” for sets of people with the same number of people.
Also, if your intention wasn’t really binding and you did abandon it, then you undermine your own ability to follow through on your own intentions, which can make it harder for you to act rightly and do good in the future. But this is an indirect reason.