Multiverse-wide cooperation in a nutshell

(Crossposted from the FRI blog.)

This is a post I wrote about Caspar Oesterheld’s long paper Multiverse-wide cooperation via correlated decision-making. Because I have found the idea tricky to explain – which unfortunately makes it difficult to get feedback from others on whether the thinking behind it makes sense – I decided to write a shorter summary. While I am hoping that my text can serve as a standalone piece, for additional introductory content I also recommend reading the beginning of Caspar’s paper, or watching the short video introduction here (requires basic knowledge of the “CDT, EDT or something else” debate in decision theory).

0. Elevator pitch

(Disclaimer: Especially for the elevator pitch section here, I am sacrificing accuracy and precision for brevity. References can be found in Caspar’s paper.)

It would be an uncanny coincidence if the observable universe made up everything that exists. We cannot find any evidence for there being stuff beyond the edges of our universe not because there is likely nothingness out there, but because photons from further away simply have not had sufficient time since the big bang to reach us. This means that the universe we find ourselves in may well be vastly larger than what we can observe, in fact even infinitely larger. The theory of inflationary cosmology additionally hints at the existence of other universe bubbles with different fundamental constants forming or disappearing under certain conditions, somehow co-existing with our universe in parallel. The umbrella term multiverse captures the idea that the observable universe is just a tiny portion of everything that exists. The multiverse may contain myriads of worlds like ours, including other worlds with intelligent life and civilization. An infinite multiverse (of one sort or another) is actually amongst the most popular cosmological hypotheses, arguably even favored by the majority of experts.

Many ethical theories (in particular most versions of consequentialism) do not consider geographical distance relevant to moral value. After all, suffering and the frustration of one’s preferences are bad for someone regardless of where (or when) they happen. This principle should apply even when we consider worlds so far away from us that we can never receive any information from there. Moral concern over what happens elsewhere in the multiverse is one requirement for the idea I am now going to discuss.

Multiverse-wide cooperation via superrationality (abbreviation: MSR) is the idea that, if I think about different value systems and their respective priorities in the world, I should not work on the highest priority according to my own values, but on whatever my comparative advantage is amongst all the interventions favored by the value systems of agents interested in multiverse-wide cooperation. (Another route to gains from trade is to focus on convergent interests, pursuing interventions that may not be the top priority for any particular value system, but are valuable from a maximally broad range of perspectives.) For simplicity, I will refer to this simply as “cooperating” from now on.

A decision to cooperate, according to some views on decision theory, gives me rational reason to believe that agents in similar decision situations elsewhere in the multiverse, especially the ones who are most similar to myself in how they reason about decision problems, are likely to cooperate as well. After all, if two very similar reasoners think about the same decision problem, they are likely to reach identical answers. This suggests that they will end up either both cooperating or both defecting. Assuming that the way agents make decisions is not strongly constrained or otherwise affected by their values, we can expect there to be agents with different values who reason about decision problems the same way we do and come to identical conclusions. Cooperation then produces gains from trade between value systems.

While each party would want to be the sole defector, the mechanism behind multiverse-wide cooperation – namely that we have to think of ourselves as being coupled with those agents in the multiverse who are most similar to us in their reasoning – ensures that defection is disincentivized: Any party that defects would now have to expect that their highly similar counterparts would also defect.

The closest we can come to approximating the value systems of agents in other parts of the multiverse, given our ignorance about what the multiverse looks like, is to assume that at least substantial parts of it are going to be similar to how things are here, where we can study them. A minimally viable version of multiverse-wide cooperation can therefore be thought of as all-out “ordinary” cooperation with value systems we know well (and especially ones that include proponents sympathetic to MSR reasoning). This suggests that, while MSR combines speculative-sounding ideas such as non-standard causation and the existence of a multiverse, its implications may not be all that strange and largely boil down to the proposal that we should be “maximally” cooperative towards other value systems.

1. A primer on non-causal decision theory

Leaving aside for the moment the whole part about the multiverse, MSR is fundamentally about cooperating in a prisoner’s-dilemma-like situation with agents who are very similar to ourselves in the way they reason about decision problems. Douglas Hofstadter coined the term superrationality for the idea that one should cooperate in a prisoner’s dilemma if one expects the other party to follow the same style of reasoning. If they reason the same way I do, and the problem they are facing is the same kind of problem I am facing, then I must expect that they will likely come to the same conclusion I will come to. This suggests that the prisoner’s dilemma in question is unlikely to end with an asymmetric outcome ((cooperate | defect) or (defect | cooperate)), but likely to end with a symmetric outcome ((cooperate | cooperate) or (defect | defect)). Because (cooperate | cooperate) is the best outcome for both parties amongst the symmetric outcomes, superrationality suggests one is best served by cooperating.
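
To make the contrast concrete, here is a minimal sketch in Python (with made-up payoff numbers, not taken from Caspar’s paper): a causal best response defects no matter what the other party does, while a reasoner who treats the other party as running the same algorithm only compares the symmetric outcomes and therefore cooperates.

```python
# A minimal sketch of the superrational argument in a symmetric one-shot
# prisoner's dilemma. The payoff numbers are invented for illustration.
PAYOFFS = {  # (my move, their move) -> my payoff
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

def causal_best_response(their_move):
    """Holding the other party's move fixed, defecting always pays more."""
    return max(["cooperate", "defect"], key=lambda m: PAYOFFS[(m, their_move)])

def superrational_choice():
    """If both reasoners run the same algorithm, only the symmetric outcomes
    (C, C) and (D, D) are treated as live possibilities."""
    return max(["cooperate", "defect"], key=lambda m: PAYOFFS[(m, m)])

print(causal_best_response("cooperate"))  # defect
print(superrational_choice())             # cooperate
```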

At this point, readers may be skeptical about whether this reasoning works. There seems to be some kind of shady action at a distance involved, where my choice to cooperate is somehow supposed to affect the other party’s choice, even though we are assuming that no information about my decision reaches said other party. But we can think of it this way: If reasoners are deterministic systems, and two reasoners follow the exact same decision algorithm in a highly similar decision situation, it at some point becomes logically contradictory to assume that the two reasoners will end up with diametrically opposed conclusions.

Side note: By decision situations having to be “highly similar,” I do not mean that the situations agents find themselves in have to be particularly similar with respect to little details in the background. What I mean is that they should be highly similar in terms of all decision-relevant variables, the variables that are likely to make a difference to an agent’s decision. If we imagine a simplified decision situation where agents have to choose between two options, either press a button or not (and then something happens or not), it probably matters little whether one agent has the choice to press a red button and another agent is faced with pressing a blue button. As long as both buttons do the same thing, and as long as the agents are not (emotionally or otherwise) affected by the color differences, we can safely assume that the color of the button is highly unlikely to play a decision-relevant role. What is more likely relevant are things such as the payoffs (value according to what an agent cares about) the agents expect from the available options. If one agent believes they stand to receive positive utility from pressing the button, and the other stands to receive negative utility, then that is guaranteed to make a relevant difference as to whether the agents will want to press their buttons. Payoff differentials may also be relevant, at least with some probability: If one agent only gains a tiny bit of utility, whereas the other agent has an enormous amount of utility to win, the latter agent might be much more motivated to avoid making a suboptimal decision. While payoffs and payoff structures certainly matter, it is unlikely to matter what qualifies as a payoff for a given agent: If an agent who happens to really like apples will be rewarded with tasty apples after pressing a button, and another agent who really likes money is rewarded with money, their decision situations seem the same provided that they each care equally strongly about receiving the desired reward. (This is the intuition behind the irrelevance of specific value systems for whether two decision algorithms or decision situations are relevantly similar or not. Whether one prefers apples, money, carrots or whatever, math is still math and decision theory is still decision theory.)
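
As a toy illustration of this point (my own example, not from the paper), one can think of a decision situation as a small data structure that records only the decision-relevant variables, so that two situations with different surface details (button color, apples versus money) come out as the same decision problem:

```python
# A toy model of "decision-relevant similarity": only the variables that could
# plausibly change the decision are represented; surface details are dropped.
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionSituation:
    options: tuple     # available actions
    utilities: tuple   # utility of each option *to the agent*, in its own terms
    # Button color and the physical nature of the reward (apples vs. money)
    # are deliberately not represented here.

alice = DecisionSituation(options=("press", "don't press"), utilities=(10, 0))  # rewarded with apples
bob   = DecisionSituation(options=("press", "don't press"), utilities=(10, 0))  # rewarded with money

print(alice == bob)  # True: the same decision problem once abstracted
```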

A different objection that readers may have at this point concerns the idea of superrationally “fixing” other agents’ decisions. Namely, critics may point out that we are thereby only ever talking about updating our own models, our prediction of what happens elsewhere, and that this does not actually change what was going to happen elsewhere. While this sounds like an accurate observation, the force of the statement rests on a loaded definition of “actually changing things elsewhere” (or anywhere for that matter). If we applied the same rigor to a straightforward instance of causally or directly changing the position of a light switch in our room, a critic may in the same vein object that we only changed our expectation of what was going to happen, not what actually was going to happen. The universe is lawful: nothing ever happens that was not going to happen. What we do when we want to have an impact and accomplish something with our actions is never to actually change what was going to happen; instead, it is to act in the way that best shifts our predictions favorably towards our goals. (This is not to be confused with cheating at prediction: We don’t want to make ourselves optimistic for no good reason, because the decision to bias oneself towards optimism does not actually correlate with our goals getting accomplished – it only correlates with a deluded future self believing that we will be accomplishing our goals.)

For more reading on this topic, I recommend this paper on functional decision theory, the book Evidence, Decision and Causality, or the article On Correlation and Causation Part 1: Evidential decision theory is correct. For an overview of different decision theories, see also this summary. To keep things simple and as uncontroversial as possible, I will follow Caspar’s terminology for the rest of my post and use the term superrationality in a very broad sense that is independent of any specific flavor of decision theory, referring to a fuzzy category of arguments from similarity of decision algorithms that favor cooperating in certain prisoner’s-dilemma-like situations.

2. A multiverse ensures the existence of agents with decision algorithms extremely similar to ours

The existence of a multiverse would virtually guarantee that there are many agents out there who fulfill the criteria of “relevant similarity” to us with regard to their decision algorithms and decision situations – whatever these criteria may boil down to in detail.

Side note: Technically, if the multiverse is indeed infinite, there will likely be infinitely many such agents, and infinite amounts of everything in general, which admittedly poses some serious difficulties for formalizing decisions: If there is already an infinite amount of value or disvalue, it seems like all our actions should be ranked the same in terms of the value of the outcome they result in. This leads to so-called infinitarian paralysis, where all actions are rated as equally good or bad. Perhaps infinitarian paralysis is a strong counterargument to MSR. But in that case, we should be consistent: Infinitarian paralysis would then also be a strong counterargument to aggregative consequentialism in general. Because it affects nearly everything (for consequentialists), and because of how drastic its implications would be if there were no convenient solution, I am basically hoping that someone will find a solution that makes everything work again in the face of infinities. For this reason, I think we should not regard MSR as being particularly in danger of failing for reasons of infinitarian paralysis.

Back to object-level MSR: We noted that the multiverse guarantees that there are agents out there very similar to us who are likely to tackle decision problems the same way we do. To prevent confusion, note that MSR is not based on the naive assumption that all humans who find the concept of superrationality convincing are therefore strongly correlated with each other across all possible decision situations. Superrationality only motivates cooperation if one has good reason to believe that another party’s decision algorithm is indeed extremely similar to one’s own. Human reasoning processes differ in many ways, and sympathy towards superrationality represents only one small dimension of one’s reasoning process. It may very well be extremely rare that two people’s reasoning is sufficiently similar that, having common knowledge of this similarity, they should rationally cooperate in a prisoner’s dilemma.

But out there somewhere, maybe on Earth already in a few instances among our eight-or-so billion inhabitants, but certainly somewhere in the multiverse if a multiverse indeed exists, there must be evolved intelligent beings who are sympathetic towards superrationality in the same way we are, and who in addition share a whole bunch of other structural similarities with us in the way they reason about decision problems. These agents would construe decision problems related to cooperating with other value systems in the same way we do, and pay attention to the same factors weighted according to the same decision-normative criteria. When these agents think about MSR, they would be reasonably likely to reach similar conclusions with regard to the idea’s practical implications. These are our potential cooperation partners.

I have to admit that it seems very difficult to tell which aspects of one’s reasoning are more or less important for the kind of decision-relevant similarity we are looking for. There are many things left to be figured out, and it is far from clear whether MSR works at all in the sense of having action-guiding implications for how we should pursue our goals. But the underlying idea is that once we pile up enough similarities of the relevant kind in one’s reasoning processes (and a multiverse would ensure that there are agents out there who do indeed fulfill these criteria), at some point it becomes logically contradictory to treat the outputs of our decisions as independent of the decisional outputs of these other agents. This insight seems hard to avoid, and it seems quite plausible that it has implications for our actions.

If I were to decide to cooperate in the sense implied by MSR, I would then have to update my model of what is likely to happen in other parts of the multiverse where decision algorithms highly similar to my own are at play. Superrationality says that this update in my model, assuming it is positive for my goal achievement because I now predict more agents to be cooperative towards other value systems (including my own), in itself gives me reason to go ahead and act cooperatively. If we manage to form even a crude model of some of the likely goals of these other agents and how we can benefit them in our own part of the multiverse, then cooperation can already get off the ground and we might be able to reap gains from trade.
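
As a back-of-the-envelope illustration of this update (all numbers invented, and the single “correlation” parameter is a gross simplification of how strongly my choice predicts the choices of relevantly similar agents), the evidential expected value of cooperating can exceed that of defecting even though cooperating carries a direct cost:

```python
# A simplistic evidential-style EV comparison under the MSR coupling assumption.
# All numbers are invented for illustration.
def expected_value(my_action, correlation=0.8,
                   benefit_if_they_cooperate=10,  # benefit to my values if similar agents cooperate
                   cost_of_my_cooperation=2):     # direct cost of me working on others' values
    # How likely the relevantly similar agents are to cooperate, given my action.
    p_they_cooperate = correlation if my_action == "cooperate" else (1 - correlation)
    my_cost = cost_of_my_cooperation if my_action == "cooperate" else 0
    return p_they_cooperate * benefit_if_they_cooperate - my_cost

print(expected_value("cooperate"))  # 0.8 * 10 - 2 = 6.0
print(expected_value("defect"))     # 0.2 * 10 - 0 = 2.0
```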

Alternatively, if we decided against becoming more cooperative, we would learn that we must be suffering costs from mutual defection. This includes both opportunity costs and direct costs from cases where other parties’ favored interventions may hurt our values.

3. We are playing a multiverse-wide prisoner’s dilemma against (close) copies of our decision algorithm

We are assuming that we care about what happens in other parts of the multiverse. For instance, we might care about increasing total happiness. If we further assume that decision algorithms and the values/goals of agents are distributed orthogonally – meaning that one cannot infer someone’s values simply by seeing how they reason practically about epistemic matters – then we arrive at the conceptualization of a multiverse-wide prisoner’s dilemma.

(Note that we can already observe empirically that effective altruists who share the same values sometimes disagree strongly about decision theory (or, more generally, reasoning styles/epistemics), and effective altruists who agree on decision theory sometimes disagree strongly about values. In addition, as pointed out in section 1, there appears to be no logical reason why agents with different values would necessarily have different decision algorithms.)

The cooperative action in our prisoner’s dilemma would now be to take other value systems into account in proportion to how prevalent they are in the multiverse-wide compromise. We would thus try to benefit them whenever we encounter opportunities to do so efficiently, that is, whenever we find ourselves with a comparative advantage to strongly benefit a particular value system. By contrast, the action that corresponds to defecting in the prisoner’s dilemma would be to pursue one’s personal values with zero regard for other value systems. The payoff structure is such that an outcome where everyone cooperates is better for everyone than an outcome where everyone defects, but each party would prefer to be the sole defector.
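
Here is a toy version of that payoff structure with invented numbers, where each agent is assumed to be five times more productive when working on the other’s values (their comparative advantage): mutual cooperation beats mutual defection, yet each party’s causally best outcome would be to defect while the other cooperates.

```python
# A toy gains-from-trade payoff structure with invented numbers. Each agent
# spends one unit of effort either on its own values ("defect") or on the other
# agent's values ("cooperate"), and is assumed to be 5x more productive when
# benefiting the other agent's value system.
def value_for_A(a_move, b_move):
    from_A = 1 if a_move == "defect" else 0      # A helping its own values directly
    from_B = 5 if b_move == "cooperate" else 0   # B helping A's values efficiently
    return from_A + from_B

for a in ("cooperate", "defect"):
    for b in ("cooperate", "defect"):
        print(a, b, "-> value for A:", value_for_A(a, b))
# (C,C) = 5 beats (D,D) = 1, but A's causally best outcome is sole defection: (D,C) = 6.
```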

Consider, for example, someone who is in an influential position to give advice to others. This person can either tailor their advice to their own specific values, discouraging others from working on things that are unimportant according to their personal value system, or they can give advice that is tailored towards producing an outcome that is maximally positive for the value systems of all superrationalists, perhaps even investing substantial effort researching the implications of value systems different from their own. MSR provides a strong argument for maximally cooperative behavior, because by cooperating, the person in question ensures that there is more such cooperation in other parts of the multiverse, which in expectation also strongly benefits their own values.

Of course, there are many other reasons to be nice to other value systems (in particular, reasons that do not involve aliens and infinite worlds). What is special about MSR is mostly that it gives an argument for taking the value systems of other superrationalists into account maximally and without worries of getting exploited for being too forthcoming. With MSR, mutual cooperation is achieved by treating one’s own decision as a simulation/prediction for agents relevantly similar to oneself. Beyond this, there is no need to guess the reasoning of agents who are different. The updates one has to make based on MSR considerations are always symmetrical for one’s own actions and the actions of other parties. This mechanism makes it impossible to end up in asymmetrical (cooperate-defect or defect-cooperate) outcomes.

(Note that the way MSR works does not guarantee direct reciprocity in terms of who benefits whom: I should not choose to benefit value system X in my part of the multiverse in the hope that advocates of value system X in particular will, in return, be nice to my values here or in other parts of the multiverse. Instead, I should simply benefit whichever value system I can benefit most, in the expectation that whichever agents can benefit my values the most – and possibly that turns out to be someone with value system X – will actually cooperate and benefit my values. To summarize, hoping to be helped by value system X for MSR reasons does not necessarily mean that I should help value system X myself – it only implies that I should conscientiously follow MSR and help whoever benefits most from my resources.)

4. Interlude for preventing misunderstandings: Multiverse-wide cooperation is different from acausal trade!

Before we continue with the main body of explanation, I want to proactively point out that MSR is different from acausal trade, which has been discussed in the context of artificial superintelligences reasoning about each other’s decision procedures. There is a danger that people lump the two ideas together, because MSR does share some similarities with acausal trade (and can arguably be seen as a special case of it).

Namely, both MSR and acausal trade are standardly discussed in a multiverse context and rely crucially on acausal decision theories. There are, however, several important differences: In the acausal trade scenario, two parties simulate each other’s decision procedures to prove that one’s own cooperation ensures cooperation by the other party. MSR, by contrast, does not involve reasoning about the decision procedures of parties different from oneself. In particular, MSR does not involve reasoning about whether a specific party’s decisions have a logical connection with one’s own decisions, i.e., whether the choices in a prisoner’s-dilemma-like situation can only result in symmetrical outcomes. MSR works through the simple mechanism that one’s own decision is assumed to already serve as the simulation/prediction for the reference class of agents with relevantly similar decision procedures.

So MSR rests mostly on looser assumptions than acausal trade, because it does not require having the technological capability to accurately simulate another party’s decision algorithm. There is, however, one aspect in which MSR is based on stronger assumptions than acausal trade: namely, the assumption that one’s own decision can function as a prediction/simulation not just for identical copies of oneself in a boring twin universe where everything plays out exactly the same way as in our universe, but also for an interesting spectrum of similar-but-not-completely-identical parts of the multiverse that include agents who reason the same way about their decisions as we do, but may not share our goals. This is far from a trivial assumption, and I strongly recommend doing some further thinking about it. But if the assumption does go through, it has vast implications not (just) for the possibility of superintelligences trading with each other, but for a form of multiverse-wide cooperation that current-day humans could already engage in.

5. MSR represents a shift in one’s ontology; it is not just some “trick” we can attempt for extra credit

The line of reasoning employed in MSR is very similar to the reasoning employed in anthropic decision problems. For comparison, take the idea that there are numerous copies of ourselves across many ancestor simulations. If we thought this was the case, reasoning anthropically as though we control all our copies at once could, for certain decisions, change our prioritization: If my decision to reduce short-term suffering plays out the same way in millions of short-lived, simulated versions of Earth where a focus on the far future cannot pay off, I have more reason to focus on short-term suffering than I thought.

MSR applies a similar kind of reasoning, where we shift our thinking from being a single instance of something to deciding for an entire class of agents. MSR is what follows when one extends/generalizes the anthropic/UDT slogan “Acting as though you are all your (subjectively identical) copies at once” to “Acting as though you are all copies of your (subjective probability distribution over your) decision algorithm at once.”

Rather than identifying solely with one’s subjective experiences and one’s goals/values, MSR also involves “identifying with” – on the level of predicting consequences relevant to one’s decision – one’s general decision algorithm. If the assumptions behind MSR are sound, then deciding not to change one’s actions based on MSR has to cause an update in one’s world model: an update that other agents in one’s reference class are likewise not cooperating. So the underlying reasoning that motivates MSR is something that has to permeate our thinking about how to have an impact on the world, whether we decide to let it affect our decisions or not. MSR is a claim about what is rational to do given that our actions have an impact in a broader sense than we may initially think, spanning across all instances of one’s decision algorithm. It changes our EV calculations and may in some instances even flip the sign – net positive/negative – of certain interventions. Ignoring MSR is therefore not necessarily the default, “safe” option.
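
A tiny invented example of such a sign flip: suppose an intervention directly gains my values a little but harms the values of likely cooperation partners, and suppose (purely as an assumption for illustration) that my choosing it strongly predicts that relevantly similar agents elsewhere run analogous interventions that harm my values in their parts of the multiverse.

```python
# Invented numbers illustrating how MSR can flip an intervention's sign.
direct_gain_for_my_values = 2    # what the intervention gets my values directly
harm_to_my_values_elsewhere = 5  # harm to my values if similar agents act analogously
correlation = 0.8                # how strongly my choice predicts theirs (assumed)

ev_ignoring_msr = direct_gain_for_my_values                                            # +2
ev_with_msr = direct_gain_for_my_values - correlation * harm_to_my_values_elsewhere    # 2 - 4 = -2
print(ev_ignoring_msr, ev_with_msr)
```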

6. Lack of knowledge about aliens is no obstacle because a minimally viable version of MSR can be based on what we observe on Earth

Once we start deliberating whether to account for the goals of other agents in the multiverse, we run into the problem that we have a very poor idea of what the multiverse looks like. The multiverse may contain all kinds of strange things, including worlds where physical constants are different from the ones in our universe, or worlds where highly improbable things keep happening, for the same reason that, if you keep flipping an infinite number of fair coins, some of them somewhere will produce uncanny sequences like “always heads” or “always tails.”

Because it seems difficult and intractable to envision all the possible landscapes in different parts of the multiverse, what kind of agents we might find there, and how we can benefit the goals of these agents with our resources here, one might be tempted to dismiss MSR as too impractical a consideration. However, I think this would be a premature dismissal. We may not know anything about strange corners of the multiverse, but we know at the very least how things are in our observable universe. As long as we cannot say anything substantial about how, specifically, the parts of the multiverse that are completely different from everything we know actually differ from our environment, we may as well ignore those parts. For practical purposes, we do not have to speculate about parts of the multiverse that would be completely alien to us (yay!), and can instead focus on what we already know from direct experience. After all, our world is likely to be representative of some other worlds in the multiverse. (This holds for the same reason that a randomly chosen television channel is more likely than not to be somewhat representative of some other television channels, rather than being completely unlike any other channel.) Therefore, we can be reasonably confident that out there somewhere, there are planets with an evolutionary history that, although different from ours in some ways, also produced intelligent observers who built a technologically advanced civilization. And while many of these civilizations may contain agents with value systems we have never thought about, some of them will also contain Earth-like value systems.

In any case, it seems plausible that our comparative advantage lies in helping those value systems about which we can obtain the most information. If we survey the values of people on Earth, and perhaps also how much these values correlate with sympathies for the concept of superrationality and for taking weird arguments to their logical conclusion, this already gives us highly useful information about the values of potential cooperators in the multiverse. MSR then implies strong cooperation with value systems that we already know (perhaps adjusted by the degree to which their proponents are receptive to MSR ideas).

By “strong cooperation,” I mean that one should ideally pick interventions based on considerations of personal comparative advantage: If there is a value system for which I could create an extraordinary amount of (variance-adjusted; see chapter 3 of this dissertation for an introduction) value given my talents and position in the world, I should perhaps focus exclusively on benefitting that specific value system. Meta-interventions that are positive for many value systems at once also receive a strong boost from MSR considerations and should plausibly be pursued with high effort even if they do not come out as the top priority absent MSR considerations. (Examples of such interventions are making sure that any superintelligent AIs that are built can cooperate with other AIs, or that people who are uncertain about their values do not waste time on philosophy and instead try to benefit existing value systems MSR-style.) Finally, one should also look for more cooperative alternatives when considering interventions that, although positive for one’s own value system, may in expectation cause harm to other value systems.
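
The following rough sketch (with invented payoffs and an assumed equal weighting of value systems in the compromise) shows how this kind of scoring can reorder interventions: judged by my values alone, my own top priority wins, while under the compromise a broadly positive meta-intervention comes out ahead.

```python
# A rough, invented-numbers sketch of MSR-style scoring: rank interventions
# against a weighted compromise of value systems rather than against one's
# own values alone. Weights and payoffs are illustrative assumptions only.
interventions = {
    # intervention: value created for each value system (made-up units)
    "own_top_priority": {"mine": 10, "theirs_1": 0,  "theirs_2": -3},
    "their_priority":   {"mine": 0,  "theirs_1": 12, "theirs_2": 0},
    "meta_cooperation": {"mine": 4,  "theirs_1": 5,  "theirs_2": 5},
}
compromise_weights = {"mine": 1.0, "theirs_1": 1.0, "theirs_2": 1.0}

def own_score(payoffs):
    return payoffs["mine"]

def compromise_score(payoffs):
    return sum(compromise_weights[v] * payoffs[v] for v in payoffs)

print(max(interventions, key=lambda i: own_score(interventions[i])))         # own_top_priority
print(max(interventions, key=lambda i: compromise_score(interventions[i])))  # meta_cooperation
```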

---

Related announcement 1: Caspar Oesterheld, who has thought about MSR much more than I have, will be giving a talk on the topic at EAG London. Feel free to approach him during the event to discuss anything related to the idea.

Related announcement 2: My colleague David Althaus has done some preparatory work for a sophisticated survey on the moral intuitions, value systems and decision-theoretical leanings of people in the EA movement (and its vicinity). He is looking for collaborators – please get in touch if you are interested!

Related announcement 3: I wrote a second, more advanced but less polished piece on MSR implications that discusses some tricky questions and also sketches a highly tentative proposal for how one might take MSR into account practically. If you enjoyed reading this piece and are curious to think more about the topic, I recommend reading on here (Google doc).