Asymmetric altruism

In effective altruism, we have to prioritize the most effective ways to do good. But there are different notions of altruism that influence our prioritization. Altruism has to do with helping others. The tricky question is: helping whom, exactly? And what counts as helping? I will argue that we have to make a distinction between positive and negative altruism, and that this distinction becomes important in effective altruist prioritization.

To start, consider a person who is about to undergo a surgical operation. At time 1, before the operation, the person is fully conscious and has mental state P1. We can choose between two options, A and B. At time 2, during the operation, the person either receives anesthesia (option A) or does not (option B). This can be described with two possible worlds. In world A, where we choose the anesthesia, the anesthetized person is unconscious, having an empty mental state P2A=0 (i.e. no subjective experiences and no preferences). In world B, the patient does not get the anesthesia and will be in extreme agony, with mental state P2B. Altruistically speaking, it is better to choose option A, because this helps the patient. Giving the anesthesia is something the patient wants.

In the case of the surgical operation, it is clear who is being helped. We can consider the mental states P1, P2A and P2B as belonging to the same person, because those mental states are related to each other. In particular, the person at time 1 with mental state P1 is concerned about his/her own future and hence identifies him/herself with the future mental states P2A and P2B. Similarly, the person with mental state P2B can acknowledge that he/she is the same person as P1 as well as P2A. P2A is basically P2B's alter ego in the other possible world. A slightly tricky issue arises when we consider P2A, who is unconscious and hence unable to feel a personal identity with either P1 or P2B. P2A has no beliefs, and hence no belief that he/she is the same person as P1. Still, given the beliefs of P1 and P2B, we can consider P1, P2A and P2B as the same person, and the anesthesia helps that person.

Is veganism altruistic for animals?

Giving anesthesia to the patient is a clear example of altruism: it helps the other. But what about veganism? Animal farming causes animal suffering. Almost all farm animals have very negative experiences. We can avoid this suffering by eating vegan. But that means those farm animals would not be born and hence would not exist.

Consider, at time 1, a bunch of atoms and molecules floating around. This group of molecules has an empty mental state P1=0. Then we have a choice to eat vegan (option A) or eat meat (option B). Option A means those atoms keep floating around, again having an empty mental state P2A=0. Only in option B do those atoms rearrange themselves to create a mental state P2B in an animal brain. P2B has unwanted negative experiences.

If we choose option A, are we helping the animal? Which animal? The animal does not exist in option A: the mental state P2A is empty. P1 is also an empty mental state, which means there is no identification with either P2A or P2B. And it is very unlikely that animal P2B can identify him/herself with the non-existing animals (i.e. the bunch of molecules) P1 and P2A. Hence, P1, P2A and P2B cannot be considered the same individual. So are we really helping an animal when we choose a situation in which the animal does not exist?

Is saving the future altruistic?

Next, we can consider existential risks: situations that lead to the extinction of intelligent or sentient life. At time 1, future generations are not born yet, and hence they can be represented by a bunch of atoms having empty mental states P1=0. Then we can choose between two options: either we do not prevent the existential catastrophe (option A), which means those atoms will have a future empty state P2A=0, or we prevent the extinction (option B), which means those atoms will rearrange themselves and future people will be born, having mental states P2B.

If we choose option B, are we helping those future people? Yes, because those people will exist in world B. But if we choose option A, are we harming those people? No, because those people will never exist in world A.

Positive versus negative altruism

It is time to consider two kinds of altruism. Positive altruism means: choosing what someone else wants. Negative altruism, on the other hand, means: not choosing what someone else does not want. This is somewhat analogous to the two versions of the golden rule: "Treat others in ways that you want to be treated", versus "Do not treat others in ways that you do not want to be treated."

By choosing the anesthesia, we are altruistic in both the positive and the negative sense. We choose what person P1 wants (the anesthesia), and we do not choose what person P2B does not want (the suffering). By choosing veganism, we are only being negatively altruistic: we do not choose what person P2B does not want. And by choosing to avoid the existential risk, we are only being positively altruistic: we choose what the people with mental states P2B want.

When we have to prioritize between different ways to do good, the question is whether double altruism (i.e. both positive and negative altruism) is more valuable than single altruism, and whether single positive altruism is more valuable than single negative altruism. How can we tell which is most important?

It can be argued that double altruism is twice as good as single altruism, in the sense that double altruism takes into account the preferences of two mental states, P1 and P2B, whereas single altruism only considers P2B. Hence, when choosing between double and single altruism, double altruism can be prioritized (all else being equal, i.e. assuming the preferences or wants are equally strong in the different situations).
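To make this bookkeeping concrete, here is a minimal sketch (in Python, purely illustrative; the case encodings are my own assumptions, not part of the argument above) that classifies each of the three examples as positively and/or negatively altruistic and counts how many preferences the chosen option respects:

```python
from dataclasses import dataclass

@dataclass
class Case:
    name: str
    wants_chosen_option: list      # mental states that want the option we choose (positive altruism)
    unwanted_states_avoided: list  # unwanted mental states that the choice prevents (negative altruism)

cases = [
    Case("anesthesia (choose A)",        ["P1"],  ["P2B"]),  # patient wants it; agony avoided
    Case("veganism (choose A)",          [],      ["P2B"]),  # no existing mind wants it; suffering avoided
    Case("prevent extinction (choose B)", ["P2B"], []),      # future people want it; nothing unwanted avoided
]

for c in cases:
    positive = bool(c.wants_chosen_option)      # choosing what someone else wants
    negative = bool(c.unwanted_states_avoided)  # not choosing what someone else does not want
    respected = len(c.wants_chosen_option) + len(c.unwanted_states_avoided)
    print(f"{c.name}: positive={positive}, negative={negative}, preferences respected={respected}")
```

On this toy count, the anesthesia choice respects two preferences (P1's want and P2B's aversion), while veganism and preventing extinction each respect one, which is the sense in which double altruism counts twice, all else being equal.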

But suppose we have to choose between single positive and single negative altruism. For example: should we prioritize veganism or safeguarding the future (assuming that an equal number of animals and potential future beings are involved, with equally strong preferences held by the respective mental states P2B)? We see a lot of asymmetries in ethics (e.g. killing someone is worse than not saving someone, and causing the existence of someone who constantly suffers is always bad, whereas causing the existence of someone who is always happy is not always good). Some asymmetries can be defended (see e.g. here), and I tend to believe that negative altruism is more valuable than positive altruism. If negative altruism is considered very important, then veganism becomes more important.

In theory, we can solve this issue by being altruistic: let the others decide. In particular: ask the farm animals and the future generations whether they prioritize negative altruism above positive altruism. But that is of course infeasible. How to weigh positive versus negative altruism is a question I will leave for further investigation.