The most serious moral illusion: arbitrary group selection


Moral illusions are spontaneous, intuitive moral judgments that are very persistent but violate our deepest moral values. They distract us from a rational, authentic ethic. Probably the most problematic moral illusion is arbitrary group selection. This moral illusion lies at the heart of discrimination, and it makes us less effective at doing good. In this article I first explain the two worst examples of discrimination: how two kinds of arbitrary group selection cause us to harm others. Next, I present two other examples of how arbitrary group selection causes us to be less effective in helping others.

What is arbitrariness?

Arbitrariness means selecting an element or subset of a set without following a rule. In general, there are two kinds of arbitrariness: vertical and horizontal. To explain this, let's start with a set containing two elements: the numbers 0 and 1. Next, we can construct the power set of that set, i.e. the set of all its subsets. This power set contains the empty subset with no elements (written as {}), two subsets with one element ({0} and {1}) and one subset with two elements ({0,1}).

Now I can select a subset with either zero, one or two elements. This is a choice between three cardinalities: 'zero', 'one' or 'two'. The cardinality of a subset measures the number of elements in that subset. Suppose I arbitrarily pick cardinality 'one', i.e. the subset should have one element. This choice to select a subset with one element instead of a subset with zero or two elements is arbitrary, because I did not follow a rule: I cannot explain why the subset should have one element rather than zero or two. This arbitrary selection of a cardinality is vertical arbitrariness.

After arbitrarily selecting the cardinality, I can select a specific subset. If the cardinality is 'one', I can select either {0} or {1}. Suppose I pick {1}, without following a rule. This arbitrary selection of a specific subset within a cardinality is horizontal arbitrariness. The reason why it is horizontal becomes clear when we write the subsets in a diamond shape: on top we have the subset {0,1}, the second level contains the two subsets {0} and {1}, and at the bottom we have the empty subset {}. Vertical arbitrariness means we arbitrarily select the level (e.g. the second level). Horizontal arbitrariness means we arbitrarily select a subset at that level.

Now suppose that in selecting the cardinality I do follow a rule, such as "select the highest cardinality". This rule is special, because it picks a cardinality that avoids horizontal arbitrariness and is not trivial. There are two cardinalities that logically avoid horizontal arbitrariness: the highest ('two' in the above example) and the lowest ('zero'). The lowest cardinality contains only the empty subset, so this choice is in a sense trivial. The highest cardinality is the only non-trivial cardinality that avoids horizontal arbitrariness: when we select the cardinality 'two' in the above example, we have no choice but to select the subset {0,1}.
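
To make this concrete, here is a minimal Python sketch that enumerates the power set of {0, 1}, groups the subsets by cardinality (the levels of the diamond), and checks which levels leave no horizontal choice.

```python
# Enumerate the power set of {0, 1} and group subsets by cardinality.
from itertools import combinations

base_set = (0, 1)

# Level k of the diamond holds all subsets with exactly k elements.
levels = {k: [set(c) for c in combinations(base_set, k)]
          for k in range(len(base_set) + 1)}

for k, subsets in levels.items():
    print(f"cardinality {k}: {subsets}")
# cardinality 0: [set()]
# cardinality 1: [{0}, {1}]
# cardinality 2: [{0, 1}]

# Vertical arbitrariness: picking a level k without a rule.
# Horizontal arbitrariness: picking one subset among several at level k.
# Only the lowest (trivial) and highest levels leave a single option:
unique_levels = [k for k, subsets in levels.items() if len(subsets) == 1]
print(unique_levels)  # [0, 2] -> the highest cardinality is the only non-trivial one
```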

Now that we have a clear idea of the notion of arbitrariness and of the unique rule that avoids horizontal arbitrariness, we can move to concrete examples of unwanted arbitrariness in ethics and how to avoid them.

Harming others because of discrimination

If we look at the two worst examples of harm done to others, they are the result of two kinds of discrimination: speciesism and nationalism. Discrimination is a difference in treatment between individuals or groups A and B whereby three conditions are met:

  1. A is treated better than B

  2. you would not tolerate swapping positions (treating A like B and vice versa)

  3. the difference in treatment is based on arbitrary criteria such as arbitrary group membership.

The latter condition means that discrimination relates to unwanted arbitrariness.

Arbitrary biological group selection: speciesism

Speciesism is the spontaneous moral judgment that all members of a particular biological species are more important (e.g. deserve more or stronger rights) than members of other species. Respecting human rights while rejecting or violating the rights of non-human animals, or considering eating chickens permissible and eating dogs impermissible, are two examples of speciesism.

Speciesism involves both horizontal and vertical arbitrariness. First consider vertical arbitrariness. The biological classification can be considered as a cabinet with several drawers. Each drawer corresponds to a way of dividing individuals into biological groups. I can open the bottom drawer of ethnic groups (races) and say that I belong to the ethnic group of white people. Or I can open the second drawer from the bottom, containing all subspecies, and point at the subspecies Homo sapiens sapiens as my favored group. But we also belong to the species of humans (Homo sapiens) in the third drawer. Or, moving higher in the cabinet: the family of great apes, the infraorder of simians, the order of primates, the infraclass of placentals, the class of mammals, the phylum of vertebrates, the kingdom of animals. The highest drawer contains only one group: the group of all entities in the universe.

We are simians just as much as we are humans and mammals. So why would we open the third drawer from the bottom, point at the species of humans and declare that only those individuals get basic rights? Why not point at other species, or at other categories such as the class of mammals or the infraorder of simians? None of the many definitions of biological species (e.g. referring to the possibility of interbreeding and producing fertile offspring) and none of the many descriptions of biological categories (e.g. referring to genealogy and common ancestors) contain any information about who should get the right to live or the right not to be abused. Why should basic rights depend on fertility or ancestry? One could argue that having a rational, moral self-consciousness is the morally relevant property for granting someone rights, and that only humans have such a high level of consciousness. Yet some humans, such as babies or mentally disabled humans, have mental capacities no higher than those of some non-human animals such as pigs. Then one could object that most members of the species of humans do have that high level of consciousness. But the same goes for the infraorder of simians: most simians alive today have a rational, moral self-consciousness. So why not pick this infraorder as the criterion for membership of the moral community? One could reply that the species of humans is the smallest biological group whose majority of members have a high level of consciousness, but then we can ask why we should pick the smallest and not the largest biological group. A rule to pick the smallest biological group whose majority of members have a high level of consciousness becomes very farfetched and always remains arbitrary. And why pick a biological group at all, rather than simply the group of individuals who have a rational, moral self-consciousness, excluding mentally disabled humans? In the end it remains arbitrary, because what is the relation between a biological classification and the notion of rights?

Next to vertical arbitrariness, speciesism involves horizontal arbitrariness. After selecting the level of species in the biological hierarchy (the third drawer from the bottom), you have to select a specific species, such as the species of humans. This kind of speciesism, in which humans are considered central, is called anthropocentrism. This selection of the human species is arbitrary, because there are many other species and there is no special property that all and only humans have. That means there is no rule that selects the human species as the relevant species.

As explained above, there is one drawer that is unique in the sense that we can follow a rule to select it: take the drawer that contains only one group that is not empty. This is the top drawer, which contains the group of all entities. So we can avoid arbitrariness by selecting this top drawer (i.e. the highest cardinality), and that means that all entities in the universe equally deserve basic rights. Now the question becomes: what are those basic rights that can be granted to all entities without arbitrary exclusion? One such basic right is the right to bodily autonomy: your body should not be used against your will as a means to someone else's ends. Of course, if an entity has no consciousness, it has no sense of its body. Consider a computer: does its body increase when we plug in some extra hardware? Where does its body end? The same can be said of plants: what is the body of a plant? Consider a clonal colony of aboveground trees that are connected by underground roots, such as an aspen colony. If two aboveground trees are connected by one root, we can consider it one living being, but if we cut the root, are there now two living beings with two bodies? Also, if a plant does not have an organ such as a brain that creates a will, it does not have a will and hence cannot be used against its will. This means that for insentient objects such as computers and plants, the basic right is always automatically respected. The basic right is only non-trivial for sentient beings, because they have a sense of their bodies and they have a will. Similarly, the basic right to have your subjective preferences or well-being fully taken into account in moral considerations is only non-trivial for sentient beings, who have subjective preferences and a well-being.

If we avoid arbitrariness, we end up with some basic rights that should be granted to all entities. These basic rights are only non-trivial for sentient beings. Hence, we derived, instead of merely assumed, why sentience is important. And now we see that in our world those basic rights are violated. The two biggest violations of those rights occur in food production (livestock farming and fishing) and in nature (wild animal suffering). Every year about 70 billion vertebrate land animals and a trillion fish are used against their will as means (food for humans). Similarly, the well-being of wild animals in nature is not fully taken into account in our moral considerations. This results in a lot of harm done to non-human animals.

Some organizations that fight against speciesism are Animal Ethics and Sentience Institute. Wild Animal Initiative wants to improve the well-being of wild animals. The Animal Welfare Fund supports organizations that work on improving the well-being and avoiding the suffering of non-human animals, especially farmed animals.

Arbitrary geographical area selection: nationalism

When we consider the harm done to humans, probably the biggest harm is caused by nationalism. Nationalism results in a policy of migration restrictions and closed borders. This is harmful in many ways. First, every year more than 1,000 refugees and migrants die as a result of the strict immigration policy of the EU ('Fortress Europe'). Second, migration restriction results in the biggest wage gap among workers: for equal work, workers in low- and middle-income countries earn three to ten times less than equally capable workers in high-income countries. Because of the number of people involved and the size of this global income gap, this is probably the biggest kind of economic injustice worldwide. Third, the global labor market is not in an effective economic market equilibrium. This results in a huge loss of productivity, worth trillions of dollars. Global GDP (world income) could almost double by opening borders. That means that open borders are probably the most effective means of poverty eradication and human development. Natives in the host countries, migrants and those remaining in the countries of origin can all benefit from migration (the latter from the remittances sent by migrants to their remaining families). Closing borders to immigrants is a kind of harm comparable to stopping job applicants and workers at the gates of companies, or stopping customers at the doors of shops. This restriction of freedom is not only harmful to the job applicant, the worker or the customer, but also to the employer and the shopkeeper.

The policy of closed national borders involves unwanted arbitrariness. There is a hierarchy of administrative or geographical areas: the whole planet or the United Nations at the top, continents at the next level, followed by unions of countries (e.g. the EU), countries, states or provinces, and finally municipalities, counties or towns at the bottom. Between areas at the same level there are borders, but at most levels these borders are open. For example, in the US there are open borders between states and municipalities. In the EU, there are open borders between countries. So why should borders be closed at some levels but not at others? Selecting a level in this hierarchy of areas and stating that borders between areas at this level should be closed is arbitrary.

Next to this vertical arbitrariness, there is horizontal arbitrariness, because the locations of the borders are arbitrary. Why is the border between countries A and B here and not there? Why is the border between the US and Mexico not 100 meters further north? The historical reasons for these border locations are arbitrary.

There is in fact a third kind of arbitrariness, which I call internal arbitrariness. The US border is not fully closed: it is very open for goods, capital and tourists, but very closed for labor migrants and refugees. This distinction is arbitrary: if borders are closed out of fear of terrorists among immigrants, then they should be closed for tourists as well, because there can be terrorists among tourists. If they are closed because some US workers are economically harmed by the immigration of workers, borders should be closed for goods as well, because imports of goods can also harm US workers.

Organizations and platforms that support open borders and fight against nationalism are Open Borders, Free Migration Project and UNITED for Intercultural Action.

Not helping others because of ineffectiveness

Next to harming others, arbitrariness also distorts our choices about how to help others. When helping others, we choose less effective means, which means that we do not help some other individuals as much as we could with our scarce resources.

Arbitrary problem selection

Cause prioritization is an important research area in effective altruism. The problem is that we often choose ineffective means to help others, based on the way we think about problems or cause areas and divide those problems into subproblems and sub-subproblems.

Suppose you have a friend who died of skin cancer, so you want to help patients who have skin cancer by donating money to the Skin Cancer Foundation. Skin cancer is your cause area: the problem that you want to solve. Of course, when your friend died of skin cancer, s/he also died of cancer, so why would you not donate to the National Cancer Institute? Or donate to the Chronic Disease Fund, because skin cancer is a chronic disease? You could argue that the National Cancer Institute focuses more on lung cancer, and the Chronic Disease Fund focuses more on cardiovascular diseases, and you want to focus on skin cancer. However, suppose you find out that your friend died of a specific type of skin cancer, namely melanoma, and suppose that the Skin Cancer Foundation focuses more on other types of skin cancer. Would you now shift your donations towards the Melanoma Foundation? What if there are several types of melanoma? So here we have vertical arbitrariness: from melanoma at the bottom, to skin cancer, cancer, chronic diseases, diseases and finally all suffering at the top. This is a whole hierarchy of problems. And if you focus on skin cancer, there is horizontal arbitrariness, because there are other types of cancer as well.

An effective altruist asks the question: what is the real reason to donate to a charity such as the Skin Cancer Foundation? Is it because a friend died of skin cancer? In that case, the badness is in the dying, so you want to avoid premature deaths of other people. Your friend cannot be saved by donating to the Skin Cancer Foundation, and if your friend had died of lung cancer, you would be equally concerned about that deadly disease. So if you want to prevent premature deaths or save lives, and if you can save more lives by preventing malaria than by preventing skin cancer, focusing on malaria is more effective and should be chosen.

Another example: suppose you saw the documentary Blackfish about animal cruelty in the dolphinarium SeaWorld, so you decide to support an animal rights campaign against dolphinaria. However, animal suffering in dolphinaria is part of a bigger problem: animal cruelty for entertainment, which also includes cruelty in animal circuses. And this is part of an even bigger problem: animal cruelty for pleasure, which also includes cruelty in factory farms, where animals are bred for our taste pleasure. In turn, this is part of an even bigger problem: animal suffering in general. Why is the campaign against dolphinaria at the right level of the problem? Why would you not focus on a bigger problem? You could also go a level lower, by focusing only on SeaWorld, because that is what the documentary Blackfish was about.

Again, an effective altruist asks what the real reason is to fight against dolphinaria. Is it to reduce the suffering of animals kept in captivity? In that case, a campaign that decreases meat consumption by only 0.1% results in a stronger reduction than closing down SeaWorld.
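
For a rough sense of scale, here is a back-of-the-envelope sketch of that comparison. It reuses the article's own figure of roughly 70 billion farmed land vertebrates per year; the number of animals kept at SeaWorld is a hypothetical placeholder, not a sourced figure.

```python
# Rough, illustrative comparison only; the SeaWorld count is an assumption.
farmed_land_animals_per_year = 70e9      # figure cited earlier in this article
meat_reduction_fraction = 0.001          # a 0.1% drop in meat consumption
animals_spared_per_year = farmed_land_animals_per_year * meat_reduction_fraction

seaworld_animals = 1_000                 # hypothetical order of magnitude, not sourced
print(f"0.1% meat reduction: ~{animals_spared_per_year:,.0f} animals per year")  # ~70,000,000
print(f"closing the parks:   ~{seaworld_animals:,} animals (one-off)")
```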

This problem of arbitrary problem selection relates to many cognitive biases. First, there is the zero-risk bias, where you prefer to completely eliminate one specific risk or problem even though reducing another, bigger risk by a small fraction would result in a greater reduction of overall risk. Suppose deadly disease A affects 1% of people, and vaccine A reduces disease A by 100% (a complete elimination, from 1% to 0%). Deadly disease B, on the other hand, affects 20% of people, and vaccine B reduces disease B by 10% (from 20% to 18%). You have to choose either vaccine A or vaccine B. Most people prefer vaccine A, because that implies we no longer have to worry about disease A: problem A is completely solved. Vaccine B appears more futile, because you will hardly notice a reduction from 20% to 18%. However, the total reduction of deadly disease with vaccine B is 2 percentage points (from 21% to 19%), which is twice as high as the total reduction with vaccine A. The choice for vaccine A is irrational: suppose I had not mentioned the difference between diseases A and B, and you believed they were both the same disease Z, affecting 21% of the population. Then you would prefer vaccine B. Or suppose we find out that disease B has two types, B1 and B2, and that vaccine B completely eliminates disease B1, which affected 2% of the population. Again you would now prefer vaccine B.
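
The arithmetic behind this example, spelled out as a small sketch:

```python
# Zero-risk bias: compare total disease burden after each vaccine.
prevalence = {"A": 0.01, "B": 0.20}          # fraction of people affected

reduction_A = prevalence["A"] * 1.00          # vaccine A eliminates disease A: 1 percentage point
reduction_B = prevalence["B"] * 0.10          # vaccine B cuts disease B by 10%: 2 percentage points

total_before = prevalence["A"] + prevalence["B"]                              # 21%
print(f"vaccine A: {total_before:.0%} -> {total_before - reduction_A:.0%}")   # 21% -> 20%
print(f"vaccine B: {total_before:.0%} -> {total_before - reduction_B:.0%}")   # 21% -> 19%
# Vaccine B halves twice as much of the overall risk, even though it eliminates nothing completely.
```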

A related cognitive bias is futility thinking (explained by Peter Unger in Living High and Letting Die, which also presents some experimental evidence). Suppose intervention A helps 1,000 of 3,000 people in need, which means 33% of the affected population are saved. Intervention B helps 2,000 of another 100,000 people, so 2% of this other affected population are saved. In absolute numbers, intervention B is twice as effective, but a 2% reduction of problem B seems more futile than a 33% reduction of problem A. Here again we have a hierarchy of affected populations. We can consider the total population of affected people, i.e. the 103,000 people together. Or we can consider a subpopulation affected by problem B, namely the 2,000 people who are saved: intervention B helps 100% of this subpopulation, and compared to that framing of intervention B, intervention A seems more futile.
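
The same numbers as a quick sketch, contrasting relative shares with absolute counts:

```python
# Futility thinking: relative shares depend on the chosen denominator,
# but the absolute number of people helped does not.
saved_A, affected_A = 1_000, 3_000
saved_B, affected_B = 2_000, 100_000

print(f"A: {saved_A / affected_A:.0%} of its affected population, {saved_A:,} people helped")  # 33%, 1,000
print(f"B: {saved_B / affected_B:.0%} of its affected population, {saved_B:,} people helped")  # 2%, 2,000
# In absolute terms B helps twice as many people, even though its share of
# "its" problem looks far more futile.
```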

Next we have the certainty effect, which is a version of the Allais paradox. Suppose there are two policies: with policy A everyone receives 1000€, a certain benefit. Policy B gives 3000€ arbitrarily to 50% of the population, and the other half receives nothing, so everyone has a 50% probability of receiving 3000€. Although the total expected benefit is higher for policy B (3000 times 50% is higher than 1000 times 100%), this seems less fair and more risky than policy A, so a lot of people prefer policy A. However, suppose the population is a subpopulation of a country: there are in fact ten regions in that country, and only one of those regions is arbitrarily chosen for the policy. So now only 10% of people receive 1000€ with policy A, whereas policy B distributes 3000€ to 5% of the population. Now, for many people, the preference for policy A becomes less clear.
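
A small sketch of the expected benefit per person under both framings:

```python
# Certainty effect: expected benefit per person, in euros.
policy_A = 1.00 * 1000   # everyone gets 1000€                -> 1000€ expected
policy_B = 0.50 * 3000   # 50% chance of 3000€                -> 1500€ expected
print(policy_A, policy_B)

# Re-framed at the country level, where only one region in ten is covered:
policy_A_country = 0.10 * 1000   # 100€ expected per person
policy_B_country = 0.05 * 3000   # 150€ expected per person
print(policy_A_country, policy_B_country)
# The ratio between A and B is unchanged; only the sense of certainty about A is lost.
```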

A strategy within effective altruism to avoid these cognitive biases, and the arbitrary problem selection that makes us less effective, is to start by considering the whole problem first. The whole problem can be suffering or loss of well-being. Next we can focus on human suffering or animal suffering. Within human suffering, we can look for the most effective ways to alleviate extreme poverty or prevent serious diseases.

Arbitrary problem selection also relates to another group of cognitive biases, those that involve time. Time inconsistency is a cognitive bias where preferences change over time in inconsistent ways. Do you prefer to save one person today or two people next year? If saving a person is something like receiving money, most people discount the future and prefer to receive one dollar today, or save a person today, instead of receiving two dollars, or saving two people, next year. The inconsistency arises because for most people this is not the same dilemma as the choice between saving one person ten years from now and saving two persons eleven years from now. In this second choice, people prefer to save the two persons.
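
A minimal sketch of this preference reversal, assuming a simple hyperbolic-style discount function purely for illustration (the article does not commit to any particular discounting model):

```python
# Illustrative only: hyperbolic-style discounting produces exactly this reversal.
def discounted_value(lives, delay_years, k=1.5):
    """Subjective value of saving `lives` after `delay_years`, with discount rate k."""
    return lives / (1 + k * delay_years)

# Today vs next year: the immediate option wins for this discounter.
print(discounted_value(1, 0), discounted_value(2, 1))     # 1.00 vs 0.80
# Ten years vs eleven years: the same one-year trade-off now reverses.
print(discounted_value(1, 10), discounted_value(2, 11))   # ~0.06 vs ~0.11
```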

Similarly, presentism is a moral theory that says it is better to help people who are alive today than to help people in the far future. We can see the arbitrariness by looking at time intervals. We can divide time into intervals spanning, for example, one day, or 100 years, or a million years. If you are a presentist, you have to ask the question: do you help people who are alive today, or alive this year, or alive this century? Choosing a specific time interval always involves arbitrariness. Next to this vertical arbitrariness, there is horizontal arbitrariness: suppose you prefer to help people who are alive this century. Why this century and not the next, or the 28th century? There are so many centuries to choose from.

The only way to avoid this time inconsistency and time arbitrariness when it comes to helping others is to take the long-term perspective, i.e. consider the whole future. The whole future contains only one time interval, so there is no horizontal arbitrariness. Because of this time impartiality, there is a big focus within the effective altruism community on improving long-term outcomes.

Arbitrary project selection

After selecting a problem that we want to solve, we have to find effective ways to solve it. The problem is that arbitrariness can also sneak into our choices of projects or interventions. A project consists of subprojects and sub-subprojects. This relates to the cognitive bias of narrow bracketing, explored by Rabin and Weizsäcker, where people evaluate decisions separately. This results in inconsistent preferences and a choice of less effective means.

Consider two dilemmas. Dilemma 1 gives you a choice between option A, saving 4 lives, and option B, a 50% probability of saving 10 lives and a 50% probability of saving no one. When it comes to saving lives, many people are risk averse, which means they prefer the first option: the certainty of saving 4 lives instead of a risky bet to save 10 lives.

Next, dilemma 2 gives you a choice between option C, losing 4 lives, and option D, a 50% probability of losing 10 lives and a 50% probability of losing no one. According to prospect theory, this framing in terms of lives lost or people dying results in a risk-seeking attitude: people prefer the risky bet that gives a possibility of losing no one.

When we consider the two dilemmas separately, there is no conflict between risk aversion in the first dilemma and risk seeking in the second. But suppose those two dilemmas are in fact two parts of one quadrilemma: a choice between four options. Let's look at the combination of the two dilemmas. Option AC means saving 0 lives. Option AD means losing 6 lives with 50% probability and saving 4 lives with 50% probability. Option BC gives a 50% chance of losing 4 lives and a 50% chance of saving 6 lives. Option BD gives a 50% chance of saving 0 lives, a 25% chance of losing 10 lives and a 25% chance of saving 10 lives. Most people prefer A over B, and D over C, so they should prefer AD over BC. However, option BC is clearly better than option AD.
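
Here is a short sketch that combines the two dilemmas into the quadrilemma and compares options AD and BC:

```python
# Narrow bracketing: combine the two dilemmas and compare the joint options.
from itertools import product

# Each option is a list of (probability, lives saved) pairs; losses are negative.
A = [(1.0, +4)]
B = [(0.5, +10), (0.5, 0)]
C = [(1.0, -4)]
D = [(0.5, -10), (0.5, 0)]

def combine(first, second):
    """Joint distribution of facing both lotteries."""
    return [(p1 * p2, x1 + x2) for (p1, x1), (p2, x2) in product(first, second)]

print(combine(A, D))   # [(0.5, -6), (0.5, 4)]
print(combine(B, C))   # [(0.5, 6), (0.5, -4)]
# BC's best case (+6) beats AD's best case (+4) and its worst case (-4) beats
# AD's worst case (-6), yet choosing A in dilemma 1 and D in dilemma 2 is choosing AD.
```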

Every project involves some risky outcomes. And to solve a problem such as people dying, several projects can be combined into one big project, or be split into several smaller projects. This creates a vertical hierarchy of projects and subprojects, or decisions and subdecisions. To avoid arbitrariness, we should look at the top level: the total project, or the sum of all our decisions.

For an effective altruist, his or her total project is what he or she does over the course of his or her life. That includes all the decisions. This means an effective altruist should not set time-specific targets such as helping at least one person every year (or donating at least 1000€ to a charity every year), because if that is easier, it might be better to help no one in the first year and three people in the second year. A yearly target is arbitrary, because one could equally set another target, such as helping ten people every decade. The bigger the time interval, the more flexibly you can choose the best opportunities to help the most people. It might be better to spend a few years doing nothing but looking for the most important problems and the most effective means to solve them. This seems like a waste of time, because you do not help anyone during those years. However, after those years, thanks to this research, you can be much more effective in helping others. That is why effective altruists spend a lot of time on research and cause prioritization.

Similarly, for the effective altruism community, the total project consists of all the decisions made by all effective altruists over the whole future. Suppose each of ten effective altruists has to make a decision about a project or intervention. They can follow two strategies. First, they can all choose the same project that has a certain but small altruistic return on investment: with this project, each of the ten effective altruists saves one life for sure. A second strategy is to become more risk neutral: they can each choose a project that has a 10% probability of success and saves 100 people if it succeeds. On average, nine out of those ten effective altruists will have chosen a project that saves no one, but one of them will win the jackpot: that project saves 100 lives. Looking at the community of those ten effective altruists together: with the first strategy they save 10 people, with the second they save (in expectation) 100 people.
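
The expected numbers behind the two strategies, as a quick sketch:

```python
# Expected lives saved by the ten-person community under each strategy.
n_altruists = 10

# Strategy 1: each project saves 1 life with certainty.
certain_total = n_altruists * 1 * 1.0

# Strategy 2: each project saves 100 lives with 10% probability of success.
risky_total = n_altruists * 100 * 0.10

print(f"{certain_total:.0f} vs {risky_total:.0f} lives saved in expectation")  # 10 vs 100
# The risk-neutral strategy saves ten times as many lives in expectation,
# even though most individual projects save no one.
```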

For an effective altruist, it does not matter who is the lucky winner who chose the effective high-impact project. All that matters is how many lives are saved by the community. This means that an effective altruist should become more risk neutral instead of risk averse. With a risk-neutral attitude, an effective altruist is willing to take more high-risk, high-impact decisions.
