The possibility of us living in a short-lived simulation isn't enough to count much against longtermism, because it's also possible we could live in a long-lived simulation or a long-lived world, and those possibilities have much higher stakes, so they still dominate expected value calculations unless we jointly assign them a tiny probability.
I think the argument crucially depends on the assumption that simulations will be disproportionately short-lived, and that we have acausal influence over agents in other simulations. If for each long-lived world (simulated or otherwise) with moral agents and moral patients there are N short-lived worlds with (moral) agents and moral patients, and our actions are correlated with those of agents across worlds, then we get to decide for more agents in short-lived worlds than in long-lived ones. Basically, acausal influence will boost the expected value of all interventions, but if moral patients are disproportionately in short-lived simulations with agents whose decisions we're correlated with, relative to long-lived simulations with agents whose decisions we're correlated with (or the split is more skewed towards the short-lived than it seems to be for our own world), then acausal influence will disproportionately boost the expected value of neartermist interventions relative to longtermist ones.
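A toy calculation may make the skew concrete. In the sketch below (Python for concreteness; every number is a hypothetical placeholder, and the self-location credence is deliberately out of line with the world counts), acausal influence multiplies neartermist value by the number of correlated worlds it reaches, while longtermist value is only realized in the long-lived world, so the long/near ratio collapses.

```python
# Toy model of the acausal boost; every number here is a placeholder.
N = 10_000     # short-lived worlds per long-lived world among our correlates
p_long = 0.5   # our credence that our *own* world is long-lived
v_near = 1.0   # value a neartermist intervention creates in any world it reaches
v_long = 1e6   # value a longtermist intervention creates, realized only in a
               # long-lived world (short-lived worlds end before it pays off)

# Causal expected values: we only act in our own world.
ev_near_causal = v_near
ev_long_causal = p_long * v_long

# Acausal expected values: every correlated agent "makes" our choice too,
# so neartermist value accrues in all N + 1 worlds, longtermist value in 1.
ev_near_acausal = (N + 1) * v_near
ev_long_acausal = 1 * v_long

print(f"causal long/near ratio:  {ev_long_causal / ev_near_causal:,.0f}")   # 500,000
print(f"acausal long/near ratio: {ev_long_acausal / ev_near_acausal:,.0f}") # ~100
```

Note that if the self-location credence matched the world counts (p_long = 1/(N + 1)), the two ratios would be identical and the boost would be neutral; the skew towards neartermism only appears when short-lived worlds are overrepresented among our correlates relative to how our own world seems.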
Also, ~all of the expected value will be acausal if we fully count the value of acausal influence, based on the evidentialist's wager and similar, given the possibility of very large or even infinite numbers of agents with whom we're correlated.
Thanks for clarifying, Michael!

I think the argument crucially depends on the assumption that simulations will be disproportionately short-lived
Yes, the argument depends on Brian's parameter F not being super small. F is the "fraction of all computational sent-years spent non-solipsishly simulating almost-space-colonizing ancestral planets (both the most intelligent and also less intelligent creatures on those planets)", where "a non-solipsish simulation is one in which most or all of the people and animals who seem to exist on Earth are actually being simulated to a non-trivial level of detail". Brian guessed F = 10^-6, but it feels like it should be much smaller to me. If the value of the future is e.g. 10^30 times the value of this century, it may be reasonable to assume that the vast majority of computational sent-years are also simulations of the far future, as opposed to simulations of almost-space-colonizing ancestral planets.
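To make the dependence on F concrete, here is a minimal sketch (made-up magnitudes throughout; it ignores long-lived simulations and Brian's other parameters): the short-lived ancestor simulations dilute our credence that we are in the one world with a long future, and that credence multiplies the longtermist payoff.

```python
# Minimal sketch of the dependence on F; magnitudes are made up, and
# long-lived simulations and Brian's other parameters are ignored.
R = 1e30   # value of the far future relative to this century (assumed)
C = 1e40   # total computational sent-years, in units of one ancestral
           # planet's sent-years (arbitrary placeholder scale)

for F in (1e-6, 1e-12, 1e-20):
    n_sims = F * C              # short-lived simulated copies of our situation
    p_real = 1 / (1 + n_sims)   # credence that we are the unsimulated original
    ev_longtermist = p_real * R # the long future only exists if we are real
    print(f"F={F:.0e}: P(real)={p_real:.1e}, longtermist EV={ev_longtermist:.1e}")
```

With these placeholder magnitudes the longtermist expected value is roughly 1e-10 / F, so it falls below a neartermist baseline of 1 only when F is above ~1e-10; whether the dampening bites therefore turns entirely on how small F actually is.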