I don’t think that the apples and oranges case is analogous, since then it’s really about different preferences. In this case I’m assuming that all the parties have the same ultimate preferences (to make more good morally relevant good experiences and fewer bad ones), but different pieces of evidence.
I do think the deworming and bednets case is analogous. Suppose the two of us are in a room before we go out to gather evidence. We agree that there is a 50% chance that bednets are twice as good as deworming, and a 50% chance that deworming is twice as good. Neither of us has a great idea of how good either of them is.
One of us goes off to study bednets. After that they have a reasonable sense of how good bednets are, and predictably prefer deworming (for 2-envelope reasons). The other goes to study deworming, and afterwards predictably prefers bednets. At this point we each have an expertise which makes our work 10% more effective on the thing we’re expert in, but we each choose to eschew our expertise as the benefit from switching envelopes is higher.
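The switch at this point is just an expected-value calculation on your own post-study scale. A minimal sketch of that arithmetic (the function name and the choice of units are mine; only the 50/50 odds, the 2x factor, and the 10% expertise bonus come from the setup above):

```python
def ev_of_other(own_value=1.0):
    """Expected value of the *other* intervention, on the scale you
    adopt after studying your own: 50% chance it's twice as good,
    50% chance it's half as good."""
    return 0.5 * (2 * own_value) + 0.5 * (0.5 * own_value)

own = 1.0                # value of the intervention you studied, in your units
expertise_bonus = 1.10   # your work is 10% more effective in your own domain

stay = own * expertise_bonus   # 1.1
switch = ev_of_other(own)      # 1.25

# Each expert predictably prefers to switch, despite the expertise bonus:
assert switch > stay
```

The calculation is symmetric, which is why both of us predictably abandon our expertise whichever intervention we each happened to study.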
We’d like to morally trade so that we each stay working in our domain of expertise. But suppose that later we’ll be causally disconnected and unable to engage in moral trade. We’d still like to commit at the start to a trade where neither party switches.
Now suppose that there’s only you, and you’re about to flip a coin to decide if you’ll go to study bednets or deworming. You’d prefer to commit to not then switching to the other thing.
But suppose you forgot to make that commitment, and are only thinking about this after having flipped the coin and discovered you’re about to study bednets. Your epistemic position hasn’t yet changed, only your expectation of future evidence. Surely(?) you’d still want to make the commitment at this point?
Now if you only think about it later, having studied bednets, I’m imagining that you think “well I would have wanted to commit earlier, but now that I know about how good bednets are I think deworming is better in expectation, so I’m glad I didn’t commit”. Is that right? (I prefer to act as though I’d made the commitment I predictably would have wanted to make.)
> Now suppose that there’s only you, and you’re about to flip a coin to decide if you’ll go to study bednets or deworming. You’d prefer to commit to not then switching to the other thing.
Maybe? I’m not sure I’d want to constrain my future self this way if it won’t seem best or rational later. I don’t very strongly object to commitments in principle, and committing seems like the right thing to do in some cases, like Parfit’s hitchhiker. However, those cases assume the same preferences/scale afterwards, and in the two-envelopes problem, we may not be able to assume that. It could look more like preference change.
In this case, it looks like you’re committing to something you will predictably later regret either way it goes (because you’ll want to switch), which seems kind of irrational. It looks like violating the sure-thing principle. Plus, either way it goes, it looks like you’ll fail to follow your own preferences later, and it will seem irrational then. Russell and Isaacs (2021) and Gustafsson (2022) also argue similarly against resolute choice strategies.
I’m more sympathetic to acausal trade with other beings that could exist simultaneously with you (even if you don’t know ahead of time whether you’ll find bednets or deworming better in expectation), if and because you’ll expect the world to be better off for it at every step: ahead of time, just before you follow through, and after you follow through. There’s no expected regret. And in an infinite multiverse (or given a non-negligible chance of one), we should expect such counterparts to exist, so plausibly we should do the acausal trade.
Also, I think you’d want to commit ahead of time to a more flexible policy for switching that depends on the specific evidence you’ll gather.[1]
> Now if you only think about it later, having studied bednets, I’m imagining that you think “well I would have wanted to commit earlier, but now that I know about how good bednets are I think deworming is better in expectation, so I’m glad I didn’t commit”. Is that right? (I prefer to act as though I’d made the commitment I predictably would have wanted to make.)
Ya, that seems mostly right on first intuition.
However, acausal trade with counterparts in a multiverse still seems kind of compelling.
Also, I see some other appeal in favour of committing ahead of time to stick with whatever you study (and of generally making the commitment earlier, too, contra what I say above in this comment): you know there’s evidence you could have gathered that would tell you not to switch, because you know you would have changed your mind if you had gathered it, even though you never will now. Your knowledge that this evidence exists is itself evidence against switching, even if you don’t know its specifics, and it seems like you shouldn’t ignore it. Maybe it doesn’t go all the way to supporting a commitment to stick with your current expertise, because you can favour the more specific evidence you actually have, but maybe you should still update hard on it?
This seems like it could avoid both the ex ante and ex post regret so far. But you still either:

1. can’t be an EU maximizer, and so are vulnerable to money pump arguments anyway, or have to abandon completeness and often be silent on what to do (e.g. multi-utility representations), or
2. have to unjustifiably fix a single scale and a prior over it ahead of time.
The same could apply to humans vs aliens. Even if we’re not behind the veil of ignorance now and never were, there’s information that we’d be ignoring: what real or hypothetical aliens would believe and the real or hypothetical existence of evidence that supports their stance.
But, it’s also really weird to consider the stances of hypothetical aliens. It’s also weird in a different way if you imagine finding out what it’s like to be a chicken and suffer like a chicken.
Suppose you’re justifiably sure that each intervention is at least not net negative (whether or not you have a single scale and prior). But then you find out bednets have no (or tiny) impact. I think it would then be reasonable to switch to deworming, even at some cost. Deworming could be less effective than you thought ahead of time, but no impact is as bad as it gets given your credences ahead of time.