Hmm, maybe I’m missing something, but I feel like unless you’re comparing classical longtermist interventions with simulation-escape interventions, or the margins are thin enough that longtermist interventions are within an order of magnitude of the effectiveness of neartermist interventions (under a longtermist axiology), you should act as if we aren’t in a simulation?
I’m also a bit confused about whether the probability that “we” live in a simulation is more a claim about the material world or a claim about anthropics; I’ve never fully resolved this philosophical detail to my satisfaction.
I totally agree that under reasonable assumptions we should act as if we aren’t in a simulation. I just meant that random weird stuff [like maybe the simulation argument] can mess with our no-catastrophe probability. But regardless of what the simulation risk is, decreasing climate risk by 0.1% increases our no-catastrophe probability by about 0.1%. Insofar as it’s undesirable that one’s answer to your question depends on one’s views on simulation and other stuff we should practically disregard, we should really be asking a question that’s robust to such stuff, like cost-of-increasing-no-catastrophe-probability-relative-to-the-status-quo-baseline rather than all-things-considered-basis-points. Put another way, imagine that by default an evil demon destroys civilizations with 99% probability 100 years after they discover fission. Then increasing our survival probability by a basis point is worth much more than in the no-demon world, even though we should act equivalently. So your question is not directly decision-relevant.
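If it helps, here’s a minimal numerical sketch of the demon example (the survival numbers are made up): the same intervention buys ~100x fewer all-things-considered basis points in the demon world, while its improvement relative to the status-quo baseline is identical, which is the sense in which the baseline-relative question is robust and the raw basis-point question isn’t.

```python
# Toy numbers (all hypothetical) for the demon example.

def p_survive(p_controllable, p_survive_demon):
    """All-things-considered no-catastrophe probability: we must survive both
    the risks we can affect and the demon (or simulation shutdown, etc.)."""
    return p_controllable * p_survive_demon

baseline = 0.900           # hypothetical: 10% risk from things we can affect
with_intervention = 0.901  # the intervention removes 0.1 pp of that risk

for world, p_demon in [("no-demon world", 1.0), ("demon world", 0.01)]:
    before = p_survive(baseline, p_demon)
    after = p_survive(with_intervention, p_demon)
    print(f"{world}: +{(after - before) * 10_000:.3f} basis points, "
          f"relative improvement {after / before - 1:.4%}")

# no-demon world: +10.000 basis points, relative improvement 0.1111%
# demon world:    +0.100 basis points, relative improvement 0.1111%
```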
Not sure what your “a claim about the material world or a claim about anthropics” distinction means; my instinct is that eg “we are not simulated” is an empirical proposition and the reasons we have to assign a certain probability to that proposition are related to anthropics.
If I know with P~=1 certainty that there are 1000 observers-like-me and 999 of them are in a simulation (or Boltzmann brains, etc.), then there are at least two reasonable interpretations of the probability:
The algorithm that instantiates me has, with ~100% certainty, at least one representation outside the simulation; therefore the “I” that matters is not in a simulation: P~=1.
Materially, the vast majority of observers like me are in a simulation: there’s only a P~=0.001 (0.1%) chance that I happen to be the one instance that’s outside the simulation, so P(I’m simulated)~=0.999.
Put another way, the philosophical question here is whether the relevant empirical operationalization of P(we’re in a simulation) is “there exists a copy of me outside the simulation” or “of the copies of me that exist, what fraction are in a simulation.”
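To pin down the arithmetic of the two readings under the stipulated numbers (a minimal sketch; the variable names are mine):

```python
# 1000 observers-like-me, 999 of them simulated (the example above).
n_copies = 1000
n_simulated = 999

# Reading 1: what matters is whether a copy of me exists outside the simulation.
# At least one unsimulated instance exists, so P(I'm simulated) ~= 0.
p_simulated_reading_1 = 0.0 if (n_copies - n_simulated) >= 1 else 1.0

# Reading 2: P(I'm simulated) is the fraction of copies that are simulated,
# i.e. the chance that a randomly sampled instance is in a simulation.
p_simulated_reading_2 = n_simulated / n_copies

print(p_simulated_reading_1)   # 0.0
print(p_simulated_reading_2)   # 0.999
```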