I’ve seen similar discussions of EDT vs. CDT and most of the associated thought experiments elsewhere, but the emphasis here seems to be much more about whether you can actually have a causal impact on the past. You’ll have to forgive me if you address this set of points and objections somewhere and I just missed it (it’s a long post!), but my thought process is:
1. Most (if not all) of the situations you describe seem to assume away important constraints on what is physically and epistemically possible in terms of predictive accuracy. This seems to account for a lot of the surprise/confusion effect.
2. CDT does seem flawed if it stubbornly refuses to acknowledge how your decision can correlate with, or provide evidence about, the past, whereas EDT does seem to handle that well. But I also don’t know how justified it is to blame CDT for mishandling generally unrealistic assumptions: it might be a more effective heuristic in many other (realistic) situations. (The toy sketch after this list shows where the two come apart.)
3. Perhaps most specific to your post, I really am skeptical of describing these situations as “controlling the past.” Rather, it seems that, as is typically the case, the past is controlling you and/or your actions are just correlated. When I think of it that way and combine it with point (1), I don’t find most of these situations that surprising or insightful.
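To make (2) concrete, here’s a minimal sketch of how EDT and CDT come apart in a Newcomb-style predictor case; the payoffs and predictor accuracy are illustrative assumptions of mine, not anything from the post:

```python
# Toy comparison of EDT and CDT in a Newcomb-style predictor case.
# Payoffs and predictor accuracy are illustrative assumptions.

ACCURACY = 0.99      # assumed chance the predictor guessed your action
BOX_B = 1_000_000    # opaque box: filled iff one-boxing was predicted
BOX_A = 1_000        # transparent box: always contains this much

def edt_value(action):
    """EDT treats the action as evidence about the prediction."""
    if action == "one-box":
        return ACCURACY * BOX_B              # P(B filled | one-box) = ACCURACY
    return (1 - ACCURACY) * BOX_B + BOX_A    # P(B filled | two-box) = 1 - ACCURACY

def cdt_value(action, p_b_filled):
    """CDT holds the already-fixed prediction probability constant."""
    value = p_b_filled * BOX_B
    if action == "two-box":
        value += BOX_A
    return value

for action in ("one-box", "two-box"):
    print(f"{action}: EDT={edt_value(action):,.0f}, "
          f"CDT={cdt_value(action, p_b_filled=0.5):,.0f}")
# EDT favors one-boxing; CDT favors two-boxing for any fixed p_b_filled.
```

The point is just that EDT conditions on the act as evidence about the prediction, while CDT evaluates both acts against the same fixed probability that the box is filled.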
I will say, one thought experiment I didn’t see mentioned after briefly ctrl+F’ing for it is “are we in a civilizational simulation?” (not just “is Omega simulating me?”). That’s one area where I think this could actually be pretty interesting to think through: if we end up creating reality simulations, does that provide evidence about whether we are currently in a simulation?
> the emphasis here seems to be much more about whether you can actually have a causal impact on the past

I definitely didn’t mean to imply that you could have a causal impact on the past. The key point is that the type of control in question is acausal.
I agree that many of these cases involve unrealistic assumptions, and that CDT may well be an effective heuristic most of the time (indeed, I expect that it is).
I don’t feel especially hung up on calling it “control”—ultimately it’s the decision theory (e.g., rejecting CDT) that I’m interested in. I like the word “control,” though, because I think there is a very real sense in which you get to choose what your copy writes on his whiteboard, and that this is pretty weird; and because, more broadly, one of the main objections to non-CDT decision theories is that it feels like they are trying to “control” the past in some sense (and I’m saying: this is OK).
Simulation stuff does seem like it could be an in-principle application here, e.g.: “if we create civilization simulations, then this makes it more likely that others whose actions are correlated with ours create simulations, in which case we’re more likely to be in a simulation; so, because we don’t want to be in a simulation, this is a reason not to create simulations.” But there are various empirical assumptions about the correlations at stake here, and I haven’t thought about cases like this much (and simulation stuff gets gnarly fast, even without bringing weird decision theory into it).
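To make the dependence on those empirical assumptions vivid, here’s a hedged sketch of the expected-value calculation; the prior, correlation strength, and payoff numbers below are all made-up assumptions, purely for illustration:

```python
# EDT-style version of the simulation argument sketched above.
# All numbers (prior, correlation, payoffs) are made-up assumptions.

BASE_RATE = 0.1      # assumed prior that correlated civilizations run simulations
V_SIMULATED = -100   # assumed disvalue of being in a simulation
V_CREATE = 10        # assumed direct value of running simulations ourselves

def p_in_simulation(we_simulate, correlation):
    """Evidential update: conditioning on our choice shifts the probability
    that correlated civilizations (our possible simulators) run simulations."""
    if we_simulate:
        return BASE_RATE + correlation * (1 - BASE_RATE)
    return BASE_RATE * (1 - correlation)

def edt_value(we_simulate, correlation):
    p = p_in_simulation(we_simulate, correlation)
    return p * V_SIMULATED + (V_CREATE if we_simulate else 0)

for corr in (0.0, 0.5, 0.9):
    simulate, abstain = edt_value(True, corr), edt_value(False, corr)
    print(f"correlation={corr}: simulate={simulate:.1f}, abstain={abstain:.1f}")
# With weak correlation, simulating wins; with strong correlation, abstaining wins.
```

Under these made-up numbers, the recommendation flips entirely on the assumed correlation strength, which is the sense in which the empirical assumptions do all the work.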