[Question] If physics is many-worlds, does ethics matter?

Cross-posted on LessWrong.

Sorta related, but not the same thing: Problems and Solutions in Infinite Ethics

I don’t know a lot about physics, but there appears to be a live debate in the field about how to interpret quantum phenomena.

There’s the Copenhagen view, under which a wave function collapses into a single determinate state upon measurement, and the many-worlds view, under which wave functions branch into different “worlds” as time moves forward. I’m pretty sure I’m missing important nuance here; this explainer (a) does a better job explaining the difference.

(Wikipedia tells me there are other interpretations apart from Copenhagen and many-worlds – e.g. De Broglie–Bohm theory – but from what I can tell the active debate is between many-worlders and Copenhagenists.)

Eliezer Yudkowsky is in the many-worlds camp. My guess is that many folks in the EA & rationality communities also hold a many-worlds view, though I haven’t seen data on that.

An interesting (troubling?) implication of many-worlds is that there are many very-similar versions of me. For every decision I’ve made, there’s a version of me that made the other choice.

And importantly, these alternate versions are just as real as me.

(I find this a bit mind-bending to think about; I again refer to this explainer (a), which does a better job than I can.)

If this is true, it seems hard to ground altruistic actions in a non-selfish foundation. Everything that could happen is happening, somewhere. I might desire to exist in the corner of the multiverse where good things are happening, but that’s a self-interested motivation. There are still other corners, where the other possibilities are playing out.

Eliezer engages with this a bit at the end of his quantum sequence:

Are there horrible worlds out there, which are utterly beyond your ability to affect? Sure. And horrible things happened during the twelfth century, which are also beyond your ability to affect. But the twelfth century is not your responsibility, because it has, as the quaint phrase goes, “already happened.” I would suggest that you consider every world that is not in your future to be part of the “generalized past.”
Live in your own world. Before you knew about quantum physics, you would not have been tempted to try living in a world that did not seem to exist. Your decisions should add up to this same normality: you shouldn’t try to live in a quantum world you can’t communicate with.

I find this a little deflating, and incongruous with his intense calls to action to save the world. Sure, we can work to save the world, but under many-worlds, we’re really just working to save our corner of it.

Has anyone arrived at a more satisfying reconciliation of this? Maybe the thing to do here is bite the bullet of grounding one’s ethics in self-interested desire, but that doesn’t seem to be a popular move in EA.