I suspect we need to involve our criteria for defining and picking bracketings here.
In practice, I think it doesn't make sense to just bracket in the bad long-term effects or just bracket in the good ones. You might be able to carve out bracketings that include only bad (or only good) long-term effects and effects outweighed by them, but not all bad (or all good) long-term effects. But that will depend on the particulars.
I think if we only do spatiotemporal bracketing, it tells us to ignore the far future and causally inaccessible spacetime locations, because each such location is made neither determinately better off in expectation nor determinately worse off in expectation. I'm not entirely sure where the time cutoff should start in practice, but it would be related to AGI's arrival. That could make us neartermist.
But we may also want to bracket out possibilities, not just ST locations. Maybe we can bracket out AGI by date X, for various X (or the min probability of it across choices, in case we affect its probability), and focus on non-AGI outcomes we may be more clueful about. If we bracket out the right set of possibilities, maybe some longtermist interventions will look best.
> I think if we only do spatiotemporal bracketing, it tells us to ignore the far future and causally inaccessible spacetime locations, because each such location is made neither determinately better off in expectation nor determinately worse off in expectation.
Oh helpful, thanks! This reasoning actually also works in my sniper case. I am clueful about the "where Emily is right after she potentially shoots" ST location, so I can't bracket out the payoff attached to her shoulder pain. That payoff is contained within this small ST region. However, the payoffs associated with where the bullet ends up aren't neatly contained in small ST regions the same way! I want the terrorist dead because he's going to keep terrorizing some parts of the world otherwise. I want the kid alive to prevent the negative consequences (in various ST regions) associated with an innocent kid's death. Because of this, I arguably can't pin down any specific ST location, other than "where Emily is right after she potentially shoots", that is made determinately better or worse off by Emily taking the shot. Hence, ST bracketing would allow C but not A or B.
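To make the rule I'm relying on here concrete, here's a minimal toy sketch. All region names, payoffs, and the two-distribution credal set are my own hypothetical assumptions, not part of the original case: a region is bracketed in only if the sign of its expected value difference is the same under every distribution one entertains, and the options are then compared over the surviving regions alone.

```python
# Toy sketch of spatiotemporal (ST) bracketing under imprecise credences.
# All payoffs and region names below are hypothetical illustrations.

def sign(x):
    """Return -1, 0, or +1."""
    return (x > 0) - (x < 0)

# Each distribution in the credal set assigns, to each ST region, the
# expected value difference of "Emily shoots" minus "Emily holds fire".
credal_set = [
    {"emily_shoulder": -1.0, "terrorist_region": 50.0, "kid_region": -40.0},
    {"emily_shoulder": -1.0, "terrorist_region": -30.0, "kid_region": 20.0},
]

def determinate_regions(credal_set):
    """Regions whose expected difference has the same nonzero sign under
    every distribution, i.e. determinately better or worse off."""
    regions = credal_set[0].keys()
    return [r for r in regions
            if len({sign(d[r]) for d in credal_set}) == 1
            and sign(credal_set[0][r]) != 0]

def bracketed_verdict(credal_set):
    """Compare the two options using only the determinate regions."""
    kept = determinate_regions(credal_set)
    totals = [sum(d[r] for r in kept) for d in credal_set]
    if all(t > 0 for t in totals):
        return "shoot"
    if all(t < 0 for t in totals):
        return "hold fire"
    return "no verdict"

print(determinate_regions(credal_set))  # only Emily's shoulder survives bracketing
print(bracketed_verdict(credal_set))    # so the shoulder pain decides: "hold fire"
```

With these made-up numbers, the regions where the bullet's consequences land flip sign across the credal set and get bracketed out, so only the determinately bad shoulder pain remains and the bracketed comparison favors not shooting.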
To the extent that I'm still skeptical of C being warranted, it is because:
1) I find it weird that finding action-guidance depends on my inability to pin down any specific ST location, other than "where Emily is right after she potentially shoots", that is made determinately better or worse off. Say I had a crystal ball randomly showing me a prison cell in Argentina that, for some reason, is empty if Emily shoots and filled with starving people if she doesn't. ST bracketing would now tell me shooting is better… It feels wrong to decide based on isolated ST regions in which I happen to know what happens depending on whether Emily shoots. There are plenty of other ST regions that would be made better or worse off; I just can't say where/when they are. And whether or not I can say this feels like it shouldn't matter.[1]
2) I'm confused as to why we should bracket based on ST regions rather than on some other defensible value-bearers that may give a conflicting result.
And I guess all this also applies to A′ vs B′ vs C′ and whether to bracket out near-term effects. Thanks for helping me identify these cruxes!
I'll take some more time to think about your point about bracketing out possibilities and AGI by date X.
And that's one way to interpret Anthony's first objection to bracketing? I can't actually pin down a specific ST location (or whatever value-bearer) where donating to AMF is determinately bad, but I still know for sure such locations exist! As I think you alluded to elsewhere while discussing ST bracketing and changes to agriculture/land use, what stops us from acting as if we could pin down such locations?