(Replying back at the initial comment to reduce thread depth and in case this is a more important response for people to see.)
I understand that you're explaining why you don't really think it's well modelled as a two-envelope problem, but I'm not sure whether you're biting the bullet that you're predictably paying some utility in unnecessary ways (in this admittedly convoluted hypothetical), or if you don't think there's a bullet there to bite, or something else?
Sorry, yes, I realized I missed this bit (EDIT: and which was the main bit...). I guess then I would say your options are:
1. Bite the bullet (and do moral trade).
2. Entertain both the human-relative stance and the alien-relative stance even after finding out which you are,[1] say due to epistemic modesty. I assume these stances won't be comparable on a common scale, at least not without very arbitrary assumptions, so you'd use some other approach to moral uncertainty.
3. Make some very arbitrary assumptions to make the problem go away.
I think 1 and 2 are both decent and defensible positions. I don't think the bullet to bite in 1 is really much of a bullet at all.
From your top-level comment:
Then it's revealed which you are, you remember all your experiences and can reason about how big a deal they are, and then you will predictably pay some utility in order to benefit the other species more. It similarly looks like it's a mistake to predictably have this behaviour (in the sense that, if humans and aliens are equally likely to be put in this kind of construed situation, then the world would be predictably better off if nobody had this behaviour), and I don't really feel like you've addressed this.
The aliens and humans just disagree about what's best, and could coordinate (moral trade) to avoid both incurring unnecessary costs from relatively prioritizing each other. They have different epistemic states and/or preferences, including moral preferences/intuitions. Your thought experiment decides what evidence different individuals will gather (at least on my bullet-biting interpretation). You end up with similar problems generally if you decide behind a veil of ignorance what evidence different individuals are going to gather (e.g. fix some facts about the world and decide ahead of time who will discover which ones) and what epistemic states they'd end up in. Even if they start from the same prior.
Maybe one individual comes to believe bednets are the best for helping humans, while someone else comes to believe deworming is. If the bednetter somehow ends up with deworming pills, they'll want to sell them to buy bednets. If the dewormer ends up with bednets, they'll want to sell them to buy deworming pills. They could both do this at a deadweight loss in terms of pills delivered, bednets delivered, cash and/or total utility. Instead, they could just directly trade with each other, or coordinate and agree to just deliver what they have directly or to the appropriate third party.
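To put toy numbers on the deadweight-loss point (the $100 holdings and 10% transaction friction below are purely illustrative assumptions, not figures from the thread), here is a minimal sketch:

```python
# Illustrative only: each party holds $100 worth of the "wrong" good and loses a
# hypothetical 10% (fees, shipping, resale discount) on a sell-and-rebuy round trip.

VALUE_HELD = 100.0   # value of the goods each party currently holds
FRICTION = 0.10      # assumed loss per sell-and-rebuy round trip

# Option A: each sells what they have and buys what they believe is best.
delivered_via_market = 2 * VALUE_HELD * (1 - FRICTION)   # 180.0 delivered in total

# Option B: they coordinate and simply swap (or each delivers what they already hold).
delivered_via_trade = 2 * VALUE_HELD                     # 200.0 delivered in total

print(f"Sell and rebuy separately: {delivered_via_market:.0f}")
print(f"Trade/coordinate directly: {delivered_via_trade:.0f}")
```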
EDIT: Now, you might say they can just share evidence and then converge in beliefs. That seems fair for the dewormer and bednetter, but it's not currently possible for me to fully explain the human experience of suffering to an alien, or to give an alien access to that experience. If and when that does become possible, we'd be able to agree much more.
Another illustration: suppose you don't know whether you'll prefer apples or oranges. You try both. From then on, you're going to predictably pay more for one than the other. Some other people will do the opposite. Whenever an apple-preferrer ends up with an orange for whatever reason, they would be inclined to trade it away to get an apple. Symmetrically for the orange-preferrer. They might both do so together at a deadweight loss and benefit from directly trading with each other.
This doesn't seem like much of a bullet to bite.
[1] Or your best approximations of each, given you'll only have direct access to one.
I don't think that the apples and oranges case is analogous, since then it's really about different preferences. In this case I'm assuming that all the parties have the same ultimate preferences (to make more morally relevant good experiences and fewer bad ones), but different pieces of evidence.
I do think the deworming and bednets case is analogous. Suppose the two of us are in a room before we go out to gather evidence. We agree that there is a 50% chance that bednets are twice as good as deworming, and a 50% chance that deworming is twice as good. Neither of us has a great idea of how good either of them is.
One of us goes off to study bednets. After that they have a reasonable sense of how good bednets are, and predictably prefer deworming (for 2-envelope reasons). The other goes to study deworming, and afterwards predictably prefers bednets. At this point we each have an expertise which makes our work 10% more effective on the thing we're expert in, but we each choose to eschew our expertise, as the benefit from switching envelopes is higher.
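A quick sketch of that post-evidence arithmetic, working in units of whichever intervention you studied (the 50/50 half-or-double setup and the 10% bonus are from the scenario above; treating your studied intervention's value as exactly known is a simplification):

```python
# Post-evidence ("two-envelope") comparison, in units of the thing you studied.

known_value = 1.0        # e.g. bednets, in bednet-units, after studying them
expertise_bonus = 1.10   # the 10% effectiveness boost in your own area

# The other intervention is half as good or twice as good, 50/50:
expected_other = 0.5 * (known_value / 2) + 0.5 * (known_value * 2)   # = 1.25

stay = expertise_bonus * known_value   # 1.10 in your units
switch = expected_other                # 1.25 in your units

print(f"Stay with your expertise: {stay:.2f}")
print(f"Switch to the other:      {switch:.2f}")
# The dewormer runs the same arithmetic in deworming-units, so both predictably
# prefer to switch despite the expertise bonus.
```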
We'd like to morally trade so that we each stay working in our domain of expertise. But suppose that later we'll be causally disconnected and unable to engage in moral trade. We'd still like to commit at the start to a trade where neither party switches.
Now suppose that there's only you, and you're about to flip a coin to decide if you'll go to study bednets or deworming. You'd prefer to commit to not then switching to the other thing.
But suppose you forgot to make that commitment, and are only thinking about this after having flipped the coin and discovered you're about to study bednets. Your epistemic position hasn't yet changed, only your expectation of future evidence. Surely(?) you'd still want to make the commitment at this point?
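For contrast with the post-evidence sketch above, here is the pre-evidence (ex ante) version of the calculation that makes committing look attractive. It assumes, purely to run the arithmetic, a single common scale on which (bednets, deworming) is (2, 1) or (1, 2) with equal probability; fixing such a scale is itself one of the contested moves discussed later in this thread.

```python
# Ex ante comparison of the two policies, before any evidence or coin flip.
# Assumed common-scale prior (illustrative only): 50% (bednets, deworming) = (2, 1),
# 50% (1, 2). A fair coin then decides which intervention you study.

from itertools import product

BONUS = 1.10  # assumed 10% expertise bonus in whatever you studied

worlds = [((2.0, 1.0), 0.5), ((1.0, 2.0), 0.5)]   # ((bednets, deworming), probability)
coins = [("bednets", 0.5), ("deworming", 0.5)]    # (what you study, probability)

def expected_value(policy):
    total = 0.0
    for ((bednets, deworming), p_world), (studied, p_coin) in product(worlds, coins):
        own = bednets if studied == "bednets" else deworming
        other = deworming if studied == "bednets" else bednets
        payoff = BONUS * own if policy == "stick" else other
        total += p_world * p_coin * payoff
    return total

print("commit to stick:", expected_value("stick"))   # 1.65
print("always switch:  ", expected_value("switch"))  # 1.50
```

On this common scale the commitment looks better ex ante, while the previous sketch makes switching look better ex post in your own studied units; that mismatch is exactly the two-envelope tension under discussion.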
Now if you only think about it later, having studied bednets, I'm imagining that you think "well I would have wanted to commit earlier, but now that I know about how good bednets are I think deworming is better in expectation, so I'm glad I didn't commit". Is that right? (I prefer to act as though I'd made the commitment I predictably would have wanted to make.)
Now suppose that there's only you, and you're about to flip a coin to decide if you'll go to study bednets or deworming. You'd prefer to commit to not then switching to the other thing.
Maybe? I'm not sure I'd want to constrain my future self this way, if it won't seem best/rational later. I don't very strongly object to commitments in principle, and it seems like the right thing to do in some cases, like Parfit's hitchhiker. However, those assume the same preferences/scale after, and in the two envelopes problem, we may not be able to assume that. It could look more like preference change.
In this case, it looks like you're committing to something you will predictably later regret either way it goes (because you'll want to switch), which seems kind of irrational. It looks like violating the sure-thing principle. Plus, either way it goes, it looks like you'll fail to follow your own preferences later, and it will seem irrational then. Russell and Isaacs (2021) and Gustafsson (2022) also argue similarly against resolute choice strategies.
I'm more sympathetic to acausal trade with other beings that could simultaneously exist with you (even if you don't know ahead of time whether you'll find bednets or deworming better in expectation), if and because you'll expect the world to be better off for it at every step: ahead of time, just before you follow through, and after you follow through. There's no expected regret. In an infinite multiverse (or with a non-negligible chance of one), we should expect such counterparts to exist, though, so we plausibly should do the acausal trade.
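One hedged way to see the no-regret claim, reusing the earlier illustrative numbers: suppose your counterpart studied deworming, their choice mirrors yours (the acausal-trade assumption), and you evaluate everything with your own post-evidence estimates (bednets known, deworming at 1.25 in your units).

```python
# Evaluate the correlated policies with your own post-evidence estimates.
# Illustrative numbers carried over from the sketches above.

known_bednets = 1.0        # you studied bednets; value known in your units
expected_deworming = 1.25  # 0.5 * 0.5 + 0.5 * 2.0, in your units
bonus = 1.10               # assumed 10% expertise bonus

# If both stick, each works where they have expertise.
both_stick = bonus * known_bednets + bonus * expected_deworming    # 1.10 + 1.375 = 2.475

# If both switch, each works where they lack expertise.
both_switch = expected_deworming + known_bednets                   # 1.25 + 1.00 = 2.25

print(f"both stick:  {both_stick:.3f}")
print(f"both switch: {both_switch:.3f}")
# Even judged by your post-evidence estimates, the correlated "stick" arrangement
# comes out ahead, which is the "no expected regret at any step" point.
```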
Also, I think you'd want to commit ahead of time to a more flexible policy for switching that depends on the specific evidence you'll gather.[1]
Now if you only think about it later, having studied bednets, I'm imagining that you think "well I would have wanted to commit earlier, but now that I know about how good bednets are I think deworming is better in expectation, so I'm glad I didn't commit". Is that right? (I prefer to act as though I'd made the commitment I predictably would have wanted to make.)
Ya, that seems mostly right on first intuition.
However, acausal trade with counterparts in a multiverse still seems kind of compelling.
Also, I see some other appeal in favour of committing ahead of time to stick with whatever you study (and generally making the commitment earlier, too, contra what I say above in this comment): you know there's evidence you could have gathered that would tell you not to switch, because you know you would have changed your mind if you did, even if you won't gather it anymore. Your knowledge of the existence of this evidence is evidence that supports not switching, even if you don't know the specifics. It seems like you shouldn't ignore that. Maybe it doesn't go all the way to support committing to sticking with your current expertise, because you can favour the more specific evidence you actually have, but maybe you should update hard enough on it?
This seems like it could avoid both the ex ante and ex post regret so far. But still, you either:
can't be an EU maximizer, and so will either be vulnerable to money pump arguments anyway or have to abandon completeness and often be silent on what to do (e.g. multi-utility representations), or
have to unjustifiably fix a single scale and prior over it ahead of time.
The same could apply to humans vs aliens. Even if we're not behind the veil of ignorance now and never were, there's information that we'd be ignoring: what real or hypothetical aliens would believe, and the real or hypothetical existence of evidence that supports their stance.
But it's also really weird to consider the stances of hypothetical aliens. It's also weird in a different way if you imagine finding out what it's like to be a chicken and suffer like a chicken.
Suppose you're justifiably sure that each intervention is at least not net negative (whether or not you have a single scale and prior). But then you find out bednets have no (or tiny) impact. I think it would be reasonable to switch to deworming at some cost. Deworming could be less effective than you thought ahead of time, but no impact is as bad as it gets given your credences ahead of time.
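A tiny numeric sketch of that last point (all numbers are hypothetical placeholders, not estimates from the thread): once bednets look like roughly zero impact, switching is worth it whenever your remaining expected value for deworming exceeds the cost of switching.

```python
# Hypothetical numbers: bednets turn out to have ~zero impact; deworming is still
# uncertain but was judged at least not net negative ahead of time.

bednets_value = 0.0           # what you learned
expected_deworming = 0.6      # placeholder: lower than your old estimate, but positive
switching_cost = 0.2          # placeholder cost of redirecting resources

stay = bednets_value                          # 0.0
switch = expected_deworming - switching_cost  # 0.4

print(f"stay:   {stay:.1f}")
print(f"switch: {switch:.1f}")
# Any positive expected value for deworming above the switching cost beats sticking
# with an intervention you now believe does (almost) nothing.
```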