Could you say a little more about how the objections to option 3 translate into this case, and why you think they’re crazy?
Is your intuition strongly that Emily should stand down for option 3 reasons, or merely that Emily should stand down? Personally, my intuition that Emily should definitely stand down is probably not grounded in option 3 being compelling. I, like most people, am really intensely risk averse about harming children, and in your example it seems like you could maybe stand down Emily but still find another way to kill the terrorist (as happens in every movie about things like this).
I’m pretty sympathetic to option 3 but it doesn’t feel emotionally satisfying here. It kind of feels like the stakes of Emily’s shoulder pain just can’t matter enough for me to have a new attitude toward the situation, given the stakes of the stuff I’m clueless about. It feels like a bad reason to act. I think a reasonable response is that it’s still the best reason I’ve got, but I’m at least sympathetic to feeling like it’s unsatisfying.
Interesting.

Well, let me literally take Anthony’s first objection and replace the words to make it apply to the Emily case:
There are many different ways of carving up the set of “effects” according to the reasoning above, which favor different strategies. For example: I might say that I’m confident that giving Emily the order to stand down makes her better off, and I’m clueless about the long-term effects of this order overall (due to cluelessness about which of the terrorist and the child will be shot). Yet I could just as well say I’m confident that there’s some nontrivially likely possible world containing an astronomical number of happy lives (thanks to the terrorist being shot and not the kid), which my order makes less likely via preventing the terrorist (and luckily not the kid) from being shot, and I’m clueless about all the other effects overall. So, at least without an argument that some decomposition of the effects is normatively privileged over others, Option 3 won’t give us much action guidance.
When I wrote the comment you responded to, it just felt to me like only the former decomposition was warranted in this case. But, since then, I’m not sure anymore. It surely feels more “natural”, but that’s not an argument...
Is your intuition strongly that Emily should stand down for option 3 reasons, or merely that Emily should stand down?
The former, although I might ofc be lying to myself.

Thanks, that’s helpful. I agree that the former feels more natural but am not sure where that comes from.
Spent some more time thinking about this, and I think I mostly lost my intuition in favor of bracketing in Emily’s shoulder pain. I thought I’d share here.
The problem
In my contrived sniper setup, I’ve gotta do something, and my preferred normative view (impartial consequentialism + good epistemic principles + maximality) is silent. Options I feel like I have:
A) Bracket out the kid, but not the terrorist (-> shooting is better)
B) Bracket out the terrorist, but not the kid (-> no shooting is better)
C) Bracket out both the kid and the terrorist, but not Emily[1] (-> no shooting is better)
D) Flip a coin, whatever. This illustrates radical cluelessness.
All these options feel arbitrary, but I have to pick something.
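To make the contrast concrete, here is a toy sketch (mine, not from the thread) with entirely made-up payoffs and probabilities. The cluelessness in the story is precisely about those probabilities, so the numbers prove nothing; the sketch only makes explicit which payoffs each bracketing ignores, and how that flips the verdict.

```python
# Toy numbers only; in the story we're clueless about p_t and p_k.
PAIN_EMILY = -1          # certain shoulder pain for Emily if she shoots (small)
HIT_TERRORIST = +100     # impartial value of the bullet hitting the terrorist
HIT_KID = -100           # impartial value of the bullet hitting the kid
p_t, p_k = 0.3, 0.3      # illustrative probabilities of hitting each

def ev_of_shooting(include_kid: bool, include_terrorist: bool) -> float:
    """Expected value of shooting vs. not, counting only the actors kept
    in-bracket (Emily is always kept in this sketch)."""
    ev = PAIN_EMILY
    if include_terrorist:
        ev += p_t * HIT_TERRORIST
    if include_kid:
        ev += p_k * HIT_KID
    return ev

print("A (ignore kid):            ", "shoot" if ev_of_shooting(False, True) > 0 else "don't shoot")
print("B (ignore terrorist):      ", "shoot" if ev_of_shooting(True, False) > 0 else "don't shoot")
print("C (ignore kid + terrorist):", "shoot" if ev_of_shooting(False, False) > 0 else "don't shoot")
# D: no bracketing verdict at all; flip a coin.
```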
Comparing poisons
Picking D demands accepting the arbitrariness of letting perfect randomness guide our actions. We can’t do worse than this.[2] It is the total-arbitrariness baseline we’re trying to beat.
Picking A or B demands accepting the arbitrariness of favoring one over the other, while my setup does not give me any good reason to do so (and A and B give opposite recommendations). I could pick A by sorta wagering on, e.g., an unlikely world where the kid dies of Reye’s syndrome (a disease that affects almost only children) before the potential bullet hits anything. But I could then also pick B by sorta wagering on the unlikely world where a comrade of the terrorist standing near him turns on him and kills him. And I don’t see either of these two wager moves as more warranted than the other.[3]
Picking C, similarly, demands accepting the arbitrariness of favoring it over A (which gives the opposite recommendation), while my setup does not give me any good reason to do so. I could pick C by wagering on, e.g., an unlikely world where time ends between the potential shot hurting Emily’s shoulder and the moment the potential bullet hits something. But I could then also pick A by wagering on the unlikely world where the kid dies of Reye’s syndrome anyway. And the same problem as above.[4] And this is what Anthony’s first objection to bracketing gestures at, I guess.
While I have a strong anti-D intuition with this sniper setup, it doesn’t favor C over A or B for me, at the very moment of writing.[5]
Should we think that our reasons for C are “more grounded” than our reasons for A, or something like that? I don’t see why. Is there a variant of this sniper story where it seems easier to argue that this is the case (while preserving the complex cluelessness assumption)? And is such a variant a relevant analogy to our real-world predicament?
[1] Without necessarily assuming persons-based bracketing (for A, B, or C), but rather whatever form of bracketing results in ignoring the payoffs associated with one or two of the three relevant actors.
[2] Our judgment calls can very well be worse than random due to systematic biases (and I remember reading somewhere in the forecasting literature that this happens). But if we believe that’s our case, we can just do the exact opposite of what our judgment calls say, and this beats a coin flip.
[3] It feels like I’m just adding non-decisive mildly sweet considerations on top of the complex cluelessness pile I already had (after thinking about the different wind layers, the Earth’s rotation, etc.). This will not allow me to single out one of these considerations as a tie-breaker.
[4] This is despite some apparent kind of symmetry existing only between A and B (not between C and A) that @Nicolas Mace recently pointed to in some doc comment, a symmetry which may feel normatively relevant although it feels superficial to me at the very moment of writing.
[5] In fact, given the apparent stakes difference between Emily’s shoulder pain and where the bullet ends up, I may be more tempted to act in accordance with A or B, deciding between the two based on what seems to be the least arbitrary tie-breaker. However, I’m not sure whether this temptation is, more precisely, one in favor of endorsing A or B, or in favor of rejecting cluelessness and the need for bracketing to begin with, or something else.
I think there’s a difference (1) between (A,B) and C: on A’s or B’s in-bracket, we can’t say that one option is strictly better than the other.
(Conditional on missing the terrorist / the child, A / B is indifferent between shooting and not shooting.)
There’s also a difference (2) between (B,C) and A: on B and C, we’re clueless on the out-bracket (conditional on hitting the terrorist, shooting is strictly better, and conditional on not hitting the terrorist it’s strictly worse). On A, on the other hand, we’re clueful on the out-bracket (it’s never strictly better for child+Emily to shoot).
I’m pretty unsure what to make of this. (I might also have misinterpreted the case.) I think (1) is a point against A- and B-bracketings being action-guiding. (2) might be a reason to rule out A-bracketing. So, considering A, B, and C as candidate bracketings, I might go with C’s verdict.
I was implicitly assuming the probability of hitting the kid or the terrorist is high enough that where the bullet ends up strictly matters more than Emily’s pain. If I misunderstood you and this doesn’t address your point, we could also assume that Emily only might have shoulder pain if she takes the shot. Then the difference you point to disappears, right? (And this changes nothing about the thought experiment, assuming risk neutrality and stuff.)
This also makes this second difference disappear, right? On B and C, we’re actually clueful on the out-bracket (the terrorist dwarfs Emily, so it’s better to shoot in expectation). So it’s symmetric to cluefulness on the out-bracket on A.
we could also assume that Emily only might have shoulder pain if she takes the shot
Yeah, if we’re clueless whether Emily will feel pain or not then the difference disappears. In this case I don’t have the pro-not-shooting bracketing intuition.
On B and C, we’re actually clueful on the out-bracket (the terrorist dwarfs Emily, so it’s better to shoot in expectation)
I was thinking that on C we’re clueless on the out-bracket, because, conditional on shooting, we might (a) hit the child (bad for everyone except Emily), (b) hit nothing (neutral for everyone except Emily), or (c) hit the terrorist (good for everyone except Emily), and we’re clueless about whether (a), (b), or (c) is the case. I might be misunderstanding something, tho.
if we’re clueless whether Emily will feel pain or not then the difference disappears. In this case I don’t have the pro-not-shooting bracketing intuition.
Should this difference matter if we’re not difference-making risk-averse or something? In both cases, C is better for Emily in expectation (the same way reducing potential termite suffering is better for termites, in expectation, even if it might make no difference because they might not be sentient).
Now, new thought experiment. Consider whatever intervention you find robustly overall good in the near-term (without bracketing out any near-term effect) and replace A, B, and C with the following:
A’) Bracket in the bad long-term effects (-> don’t intervene)
B’) Bracket in the good long-term effects (-> intervene)
C’) Bracket in the near-term effects (-> intervene)
Do you have the pro-C’ intuition, then? If yes, what’s different from the sniper case?
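For concreteness, here is the same kind of toy sketch as above, again with made-up numbers, under the reading where the near-term effect stays in-bracket throughout (the verdicts come out the same if A’ and B’ count only the long-term effects).

```python
# Toy numbers only; in the story we're clueless about p_good and p_bad.
NEAR_TERM_GOOD = +1       # robust near-term benefit of intervening (small)
LONG_TERM_GOOD = +100     # possible good long-term effect
LONG_TERM_BAD = -100      # possible bad long-term effect
p_good, p_bad = 0.3, 0.3  # illustrative probabilities

def ev_of_intervening(include_bad_lt: bool, include_good_lt: bool) -> float:
    """Expected value of intervening vs. not, counting only the bracketed-in
    effects (the near-term effect is always counted in this sketch)."""
    ev = NEAR_TERM_GOOD
    if include_good_lt:
        ev += p_good * LONG_TERM_GOOD
    if include_bad_lt:
        ev += p_bad * LONG_TERM_BAD
    return ev

print("A' (bad long-term in): ", "intervene" if ev_of_intervening(True, False) > 0 else "don't intervene")
print("B' (good long-term in):", "intervene" if ev_of_intervening(False, True) > 0 else "don't intervene")
print("C' (near-term only):   ", "intervene" if ev_of_intervening(False, False) > 0 else "don't intervene")
```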
I suspect we need to involve our criteria for defining and picking bracketings here.
In practice, I think it doesn’t make sense to just bracket in the bad long-term effects or just bracket in the good ones. You might be able to carve out bracketings that include only bad (or only good) long-term effects and effects outweighed by them, but not all bad (or all good) long-term effects. But that will depend on the particulars.
I think if we only do spatiotemporal bracketing, it tells us to ignore the far future and causally inaccessible spacetime locations, because each such location is made neither determinately better off in expectation nor determinately worse off in expectation. I’m not entirely sure where the time cutoff should start in practice, but it would be related to AGI’s arrival. That could make us neartermist.
But we may also want to bracket out possibilities, not just ST locations. Maybe we can bracket out AGI by date X, for various X (or the min probability of it across choices, in case we affect its probability), and focus on non-AGI outcomes we may be more clueful about. If we bracket out the right set of possibilities, maybe some longtermist interventions will look best.