Is Our Universe A Newcomb’s Paradox Simulation?

Epistemic status: half-baked midnight thoughts after watching Black Mirror.

Summary:

It seems almost impossibly unlikely that we are alive at this particular moment in history, right on the precipice of creating an intergalactic civilization.

It seems even more unlikely that we (longtermist EAs) just happen to be the first ones to recognize this and take it seriously.

If we think this indicates we are likely in a simulation, it might make sense to pursue hedonism. If we think we are not in a simulation, it would make sense to pursue longtermism.

Our situation seems reminiscent of Newcomb’s Paradox to me. Maybe someone in the far future is running a simulation to test out decision theories, and we are that simulation.

Detail:

Maybe we are in a simulation that is testing whether, upon recognizing that we are likely simulated, we will choose to hedonistically pursue maximum pleasure for ourselves, or whether we will instead go to the trouble of altruistically spending our time and energy trying to make the future go well, just in case we are actually in first-layer, non-simulated reality, even though that seems absurdly improbable.

If we choose longtermism, then we are almost certainly in a simulation, because that means other people like us would also have chosen longtermism and would then have created countless simulations of beings in special situations like ours. That seems vastly more likely than that we just happened to sit at the crux of the entire universe by sheer dumb luck.

But if we choose indulgent hedonism, we enjoy the moment at the cost of the entire future, and we probably weren't in a simulation to begin with: other beings like us would likely realize the same thing and also choose hedonism, so longtermism would always implode and no simulations would ever be created.
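To make the structure of this argument concrete, here is a minimal expected-value sketch in Python. Every number in it (the conditional probability of being simulated and the payoffs) is a made-up placeholder for illustration; the only point is the evidential-style step where the probability of being in a simulation is conditioned on our own choice.

```python
# Toy sketch of the argument above. All probabilities and payoffs are
# made-up placeholders; only the structure of the calculation matters.

def expected_value(choice: str) -> float:
    # Evidential-style step: our choice is treated as evidence about what
    # agents like us choose, and hence whether simulations like ours exist.
    if choice == "longtermism":
        p_sim = 0.999   # longtermist successors create countless simulations
    else:
        p_sim = 0.05    # longtermism implodes, few or no simulations exist

    # Payoffs in arbitrary units.
    if choice == "hedonism":
        value_in_sim = 1.0         # enjoy the moment either way...
        value_in_base = 1.0        # ...but the long-term future is sacrificed
    else:
        value_in_sim = 0.5         # effort spent inside a simulation
        value_in_base = 1_000_000  # astronomical value if this is base reality

    return p_sim * value_in_sim + (1 - p_sim) * value_in_base

for choice in ("hedonism", "longtermism"):
    print(f"{choice}: {expected_value(choice):,.1f}")
```

With these particular made-up numbers, longtermism still wins even though choosing it makes us near-certain we are simulated, because the tiny residual chance of being in base reality carries an enormous payoff. Different placeholder numbers could flip the conclusion.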

Of course this doesn’t take into account the fact that you may actually enjoy altruism and the longtermist mission, making it less of a sacrifice. But it seems like a wildly convenient convergence bias to assume that what we are doing to maximize altruism also just happens to maximize our own hedonism.

One resolution, however, may be that maximizing meaning ends up being the best way to maximize happiness, and that the future universe gets tiled with hedonium which happens to be a simulation of the most important, and therefore most meaningful, century: the one we live in. If that analysis is right, then pursuing longtermism may actually converge with hedonism (enlightened self-interest).

The way I presented the problem also fails to account for the possibility of a strong apocalyptic Fermi filter that will destroy humanity, since this could explain why we seem to be so early in cosmic history (cosmic history is unavoidably about to end). This should skew us more toward hedonism.

The thought experiment also breaks somewhat if you assume there is a significant probability that a future civilization can't or won't create a large number of diverse simulations of this type, for some systemic and unavoidable reason. This skews in favor of longtermism.

I guess the moral of the story is that perhaps we should hedge our altruistic bets by aiming to be as happy as possible while also being longtermists. I don't think this is too controversial, since happiness actually seems to improve productivity.

Would appreciate any feedback on the decision theory element of this. Is one choice (between hedonism and longtermism) evidential and the other causal? I couldn't figure that part out. I'm not sure it is directly analogous to one-boxing and two-boxing in Newcomb's Paradox.
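One possible way to frame it, offered as a sketch rather than an answer: an evidential-style calculation conditions the probability of being simulated on our choice, as in the sketch above, while a causal-style calculation would hold that probability fixed, on the grounds that our choice cannot causally affect whether a simulation was already started. The variant below uses the same made-up payoffs with a fixed, assumed p_sim.

```python
# Causal-style variant of the earlier sketch: the probability that we are
# simulated is held fixed rather than conditioned on our choice.
# p_sim and the payoffs remain made-up placeholders.

def expected_value_causal(choice: str, p_sim: float = 0.5) -> float:
    if choice == "hedonism":
        value_in_sim, value_in_base = 1.0, 1.0
    else:
        value_in_sim, value_in_base = 0.5, 1_000_000
    return p_sim * value_in_sim + (1 - p_sim) * value_in_base

for choice in ("hedonism", "longtermism"):
    print(f"{choice}: {expected_value_causal(choice):,.1f}")
```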