Cross-posted on LessWrong.
Sorta related, but not the same thing: Problems and Solutions in Infinite Ethics
I don’t know a lot about physics, but there appears to be a live debate in the field about how to interpret quantum phenomena.
There’s the Copenhagen view, under which a wave function collapses into a single determinate state when measured, and the many-worlds view, under which the wave function never collapses but instead branches into different “worlds” as time moves forward. I’m pretty sure I’m missing important nuance here; this explainer (a) does a better job explaining the difference.
(Wikipedia tells me there are other interpretations apart from Copenhagen and many-worlds – e.g. De Broglie–Bohm theory – but from what I can tell the active debate is between many-worlders and Copenhagenists.)
Eliezer Yudkowsky is in the many-worlds camp. My guess is that many folks in the EA & rationality communities also hold a many-worlds view, though I haven’t seen data on that.
An interesting (troubling?) implication of many-worlds is that there are many very-similar versions of me. For every decision I’ve made, there’s a version of me where the other choice was made.
And importantly, these alternate versions are just as real as me.
If this is true, it seems hard to ground altruistic actions in a non-selfish foundation. Everything that could happen is happening, somewhere. I might desire to exist in the corner of the multiverse where good things are happening, but that’s a self-interested motivation. There are still other corners, where the other possibilities are playing out.
Eliezer engages with this a bit at the end of his quantum sequence:
Are there horrible worlds out there, which are utterly beyond your ability to affect? Sure. And horrible things happened during the twelfth century, which are also beyond your ability to affect. But the twelfth century is not your responsibility, because it has, as the quaint phrase goes, “already happened.” I would suggest that you consider every world that is not in your future to be part of the “generalized past.”
Live in your own world. Before you knew about quantum physics, you would not have been tempted to try living in a world that did not seem to exist. Your decisions should add up to this same normality: you shouldn’t try to live in a quantum world you can’t communicate with.
I find this a little deflating, and incongruous with his intense calls to action to save the world. Sure, we can work to save the world, but under many-worlds, we’re really just working to save our corner of it.
Has anyone arrived at a more satisfying reconciliation of this? Maybe the thing to do here is bite the bullet and ground one’s ethics in self-interested desire, but that doesn’t seem to be a popular move in EA.