In 5.2.3. Intermediate wide views, you write:

Views of this kind give more plausible verdicts in the previous cases (both the lever case and the enquiring friend case), but any exoneration is partial at best. The verdict in the friend case remains counterintuitive when we stipulate that your friend foresaw the choices that they would face. And although intentions are often relevant to questions of blameworthiness, I'm doubtful whether they are ever relevant to questions of permissibility. Certainly, it would be a surprising downside of wide views if they were committed to that controversial claim.
Rather than intentions as mere plans, I imagine this more like precommitment (maybe resolute choice?[1]), i.e. binding yourself (psychologically or physically) to deciding a certain way in the future and so preventing your future self from deviating from your plan. Precommitment is also a natural solution to avoid being left behind as Parfit's hitchhiker:

Suppose you're out in the desert, running out of water, and soon to die, when someone in a motor vehicle drives up next to you. Furthermore, the driver of the motor vehicle is a perfectly selfish ideal game-theoretic agent, and even further, so are you; and what's more, the driver is Paul Ekman, who's really, really good at reading facial microexpressions. The driver says, "Well, I'll convey you to town if it's in my interest to do so; so will you give me $100 from an ATM when we reach town?"

Now of course you wish you could answer "Yes", but as an ideal game theorist yourself, you realize that, once you actually reach town, you'll have no further motive to pay off the driver. "Yes," you say. "You're lying," says the driver, and drives off leaving you to die.
In this case, your expectation to pay in town has to be accurate to ensure you get the ride, and if you can bind yourself to paying, then it will be accurate.[2]
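As a rough illustration, here is a minimal toy model of the situation (my own sketch, with made-up payoff numbers and the simplifying assumption that the driver's prediction is perfectly accurate):

```python
# Toy model of Parfit's hitchhiker: the driver gives you a ride exactly when
# they predict you'll pay in town. Payoff numbers are illustrative only.

RIDE_VALUE = 1_000_000  # how much you value being driven to town (surviving)
PAYMENT = 100           # what you'd hand over at the ATM

def payoff(can_bind_yourself: bool) -> int:
    """Your payoff, given whether you can bind yourself to paying."""
    will_pay_in_town = can_bind_yourself  # unbound, your future self has no motive to pay
    gets_ride = will_pay_in_town          # the perfectly accurate driver predicts this
    if not gets_ride:
        return 0  # left in the desert
    return RIDE_VALUE - (PAYMENT if will_pay_in_town else 0)

print(payoff(can_bind_yourself=True))   # 999900: precommit, get the ride, pay in town
print(payoff(can_bind_yourself=False))  # 0: the driver reads your face and drives off
```

The only way to reach the better branch is to make "I will pay in town" true ahead of time, which is exactly what binding yourself does.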
I think this also gives us a solution to this point:
And if permissibility doesn't depend on past choices, then it's also wrong to pull the second lever in cases where we didn't previously pull the first lever.
If you created Amy and had failed to bind yourself to creating Bobby by the time you created Amy, then the mistake was made in the past, not now, and you're now free to create or not create Bobby. After having created Amy, you have to condition on the state of the world (or your evidence about it), in which Amy already exists. She is no longer contingent; only Bobby is.

Similarly, with Parfit's hitchhiker, the mistake was made when negotiating before being driven, if you didn't bind yourself to paying once you reach town. But if you somehow already made it into town, then you don't have to pay the driver anymore, and it's better not to (by assumption).[3]
I originally only wrote resolute choice, not precommitment, and then edited it to precommitment. I think precommitment is clearer and what I intended to describe. I'm less sure about resolute choice, but it is related.
I imagine you can devise similar problems for impartial views. You and the driver could both be impartial or even entirely unselfish, but have quite different moral views about what's best and disagree on how to best use your $100. Then this becomes a problem of cooperation or moral trade.
If the driver is in fact 100% accurate, then you should expect to pay if you made it into town; you won't actually be empirically free to choose either way. Maybe the driver isn't 100% accurate, though, so you got lucky this time, and now don't have to pay.
In Parfit's case, we have a good explanation for why you're rationally required to bind yourself: doing so is best for you.

Perhaps you're morally required to bind yourself in Two-Shot Non-Identity, but why? Binding yourself isn't better for Amy. And if it's better for Bobby, it seems that can only be because existing is better for Bobby than not-existing, and then there's pressure to conclude that we're required to create Bobby in Just Bobby, contrary to the claims of PAVs.

And suppose that (for whatever reason) you can't bind yourself in Two-Shot Non-Identity, so that the choice to create Bobby (having previously created Amy) remains open. In that case, it seems like our wide view must again make permissibility depend on lever-lashing or past choices. If the view says that you're required to create Bobby (having previously created Amy), permissibility depends on past choices. If the view says that you're permitted to decline to create Bobby (having previously created Amy), permissibility depends on lever-lashing (since, on wide views, you wouldn't be permitted to pull both levers if they were lashed together).
In Parfit's case, we have a good explanation for why you're rationally required to bind yourself: doing so is best for you.
The more general explanation is that it's best according to your preferences, which can also reflect or just be your moral views. It's not necessarily a matter of personal welfare, narrowly construed. We have similar thought experiments for total utilitarianism. As long as you

1) expect to do more to further your own values/preferences with your own money than the driver would further them with it,
2) don't disvalue breaking promises (or don't disvalue it enough), and
3) can't bind yourself to paying and know this,

then you'd predict you won't pay and be left behind.
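In other words (a hypothetical sketch of the same conditional, not anyone's actual decision procedure), the prediction is just a function of those three conditions:

```python
# Hypothetical sketch of the generalized hitchhiker condition above.
def predicted_to_pay(spend_your_money_better_than_driver: bool,
                     disvalue_promise_breaking_enough: bool,
                     can_bind_yourself: bool) -> bool:
    """You're predicted to pay only if something overrides the in-town incentive to keep the $100."""
    return (can_bind_yourself
            or disvalue_promise_breaking_enough
            or not spend_your_money_better_than_driver)

# With all three conditions in the list holding, you're predicted not to pay,
# so you're left behind:
print(predicted_to_pay(True, False, False))  # False
```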
Perhaps you're morally required to bind yourself in Two-Shot Non-Identity, but why?

Generically, if and because you hold a wide PAV, and it leads to the best outcome ahead of time on that view. There could be various reasons why someone holds a wide PAV. It's not about it being better for Bobby or Amy. It's better "for people", understood in wide person-affecting terms.
One rough argument for wide PAVs could be something like this, based on Frick, 2020 (but without asymmetry):
If a person A existed, exists or will exist in an outcome,[1] then the moral standard of "A's welfare" applies in that outcome, and its degree of satisfaction is just A's lifetime (or future) welfare.

Between two outcomes, X and Y, if 1) standard x applies in X and standard y applies in Y (and either x and y are identical standards or neither applies in both X and Y), 2) standards x and y are of "the same kind", 3) x is at least as satisfied in X as y is in Y, and 4) all else is equal, then X ≽ Y (X is at least as good as Y).

If keeping promises matters in itself, then it's better to make a promise you'll keep than a promise you'll break, all else equal.
With 1 and 2 (and assuming different people result in "the same kind" of welfare standards with comparable welfare), "Just Bobby" is better than "Just Amy", because the moral standard of Bobby's welfare would be more satisfied than the moral standard of Amy's welfare.
This is basically Pareto for standards, but anonymous/insensitive to the specific identities of standards, as long as they are of "the same kind".

It's not better (or worse) for a moral standard to apply than to not apply, all else equal.

So creating Bobby isn't better than not doing so, unless we have some other moral standard(s) to tell us that.[2]
This is similar to Existence Anticomparativism.
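As a rough check that these pieces fit together, here is a toy formalization (my own sketch, with made-up welfare numbers; it treats the pairwise principle in 2 as a multi-standard "anonymous Pareto" and ignores the clause about identical standards applying in both outcomes):

```python
from collections import defaultdict

# Toy sketch of "anonymous Pareto for standards". An outcome is a list of
# (kind, degree_of_satisfaction) pairs, one per moral standard applying in it.
def at_least_as_good(x, y) -> bool:
    """True if the standards applying in x and y can be paired up, kind for kind,
    with each x-standard at least as satisfied as its y-counterpart (all else equal)."""
    by_kind_x, by_kind_y = defaultdict(list), defaultdict(list)
    for kind, sat in x:
        by_kind_x[kind].append(sat)
    for kind, sat in y:
        by_kind_y[kind].append(sat)
    if set(by_kind_x) != set(by_kind_y):
        return False  # a standard of some kind applies in one outcome with no counterpart in the other
    for kind in by_kind_x:
        xs, ys = sorted(by_kind_x[kind]), sorted(by_kind_y[kind])
        if len(xs) != len(ys) or any(a < b for a, b in zip(xs, ys)):
            return False
    return True

just_amy   = [("welfare", 5.0)]  # only the standard "Amy's welfare" applies
just_bobby = [("welfare", 8.0)]  # only the standard "Bobby's welfare" applies; his life goes better
neither    = []                  # no one is created, so no welfare standard applies

print(at_least_as_good(just_bobby, just_amy))  # True: "Just Bobby" is at least as good as "Just Amy"
print(at_least_as_good(just_bobby, neither))   # False: creating Bobby isn't thereby better...
print(at_least_as_good(neither, just_bobby))   # False: ...nor worse; the principle is silent here
```

So "Just Bobby" comes out at least as good as "Just Amy", while creating Bobby comes out neither better nor worse than creating no one, which is the wide person-affecting verdict sketched above.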
And suppose that (for whatever reason) you can't bind yourself in Two-Shot Non-Identity, so that the choice to create Bobby (having previously created Amy) remains open. In that case, it seems like our wide view must again make permissibility depend on lever-lashing or past choices.

I would say the permissibility of choices depends on what options are still available, and so can change if options that were available before become unavailable. "Just Amy" can be impermissible ahead of time because "Just Bobby" is still available, and then become permissible after "Just Bobby" is no longer available. If Amy already exists, as you assume, then "Just Bobby" is no longer available. I explain more here.

I guess that means it depends on lever-lashing? But if that's it, I don't find that very objectionable, and it's similar to Parfit's hitchhiker.
Like B-theory of time or eternalism.

This would need to be combined with the denial of many particular standards, e.g. total welfare as the same standard, or as standards of "the same kind", across all populations. If we stop with only the standards in 1, then we just get anonymous Pareto, but this leaves many welfare tradeoffs between people incomparable. We could extend in various ways, e.g. for each set S of people who will ever exist in an outcome, the moral standard of S's total welfare applies, but it's only of "the same kind" for sets of people with the same number of people.
Also, if your intention wasn't really binding and you did abandon it, then you undermine your own ability to follow through on your own intentions, which can make it harder for you to act rightly and do good in the future. But this is an indirect reason.