Hedging against deep and moral uncertainty

Summary: As with quantified risk, we can sometimes hedge against deep uncertainty and moral uncertainty: we can sometimes choose a portfolio of interventions which looks good in expectation to all (or more) worldviews—empirical and ethical beliefs—we find plausible, even if each component intervention is plausibly harmful or not particularly good in expectation according to some plausible worldview. We can sometimes do better than nothing in expectation when this wasn't possible by choosing a single intervention, and we can often improve the minimum expected value. I think doing so can therefore sometimes reduce complex cluelessness.

My recommendations are the following:

  1. We should, when possible, avoid portfolios (and interventions) which are robustly dominated by any other in expectation, i.e. those worse in expectation than another under all plausible worldviews (or those ruled out by the maximality rule; EA Forum post on the paper). I think this is rationally required under consequentialism, assuming standard rationality axioms under uncertainty. (I sketch this check in code just after this list.)

  2. I further endorse choosing portfolios among those that are robustly positive in expectation—better in expectation than doing nothing under all worldviews we find plausible—if any are available. However, this is more a personal preference than a (conditional) requirement like 1, although I think it's something that's often implicitly assumed in EA. I think this would lead us to allocate more to work for nonhuman animals and s-risks.

  3. EAs should account for interactions between causes, and for conflicts between judgements about the sign of the expected value of different interventions according to different worldviews. I think this is being somewhat neglected, as many EA organizations, or divisions within EA organizations, are cause-specific.

  4. Approaches for moral uncertainty and deep uncertainty are better applied to portfolios of interventions than to each intervention in isolation, since portfolios can promote win-wins.

  5. We should not commit to priors arbitrarily. If you don't feel justified in choosing one prior over all others (see the reference class problem), this is what sensitivity analysis and other approaches to decision making under deep uncertainty are for, and sometimes hedging can help, as I hope to illustrate in this post.
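Here is a minimal sketch of the check in recommendation 1 in Python, under the simplifying assumption that we can list a finite set of plausible worldviews and an expected value for each option under each of them (the option names and numbers are made up for illustration):

```python
# A minimal sketch of recommendation 1, with made-up options and numbers: each
# option is summarized by its expected value under each of three plausible
# worldviews, and an option is ruled out if some other option is at least as
# good under every worldview and strictly better under at least one.
options = {
    "do_nothing":     [0.0, 0.0, 0.0],
    "intervention_X": [5.0, -1.0, 2.0],
    "intervention_Y": [4.0, -2.0, 1.0],  # worse than intervention_X under every worldview
}

def robustly_dominates(a, b):
    """a is at least as good as b under every worldview and better under some."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

permissible = {
    name for name, values in options.items()
    if not any(robustly_dominates(other, values)
               for other_name, other in options.items() if other_name != name)
}
print(permissible)  # {'do_nothing', 'intervention_X'}; intervention_Y is ruled out
```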

Introduction

EAs have often argued against diversification and for funding only the most cost-effective intervention, at least for individual donors where the marginal returns on donations are roughly constant. However, this assumes away a lot of uncertainty we could have; we might not believe any specific intervention is the most cost-effective. Due to deep uncertainty, we might not be willing to commit to a specific single joint probability distribution for the effects of our interventions, since we can't justify any choice over all others. Due to moral uncertainty, we might not be confident in how to ethically value different outcomes or actions. This can result in complex cluelessness, according to which we just don't know whether we should believe a given intervention is better or worse than another in expectation; it could go either way.

Sometimes, using a portfolio of interventions can be robustly better in expectation than doing nothing, while none of the best individual interventions according to some worldview are, since they're each plausibly harmful in expectation (whether or not we're committed to the claim that they definitely are harmful in expectation, since we may have deep or moral uncertainty about that). For example, cost-effective work in one cause might plausibly harm another cause more in expectation, and we don't know how to trade off between the two causes.

We might expect to find such robustly positive portfolios in practice even where the individual interventions are not robust, because the interventions that are most robustly cost-effective in one domain, effect or worldview will not systematically be the most harmful in others, as long as they aren't so harmful that they can't be cost-effectively compensated for by interventions optimized for cost-effectiveness in those other domains, effects or worldviews. The aim of this post is to give a more formal and EA-relevant illustration of the following reason for hedging:

We can sometimes choose portfolios which look good to all (or more) worldviews we find plausible, even if each component intervention is plausibly harmful or not particularly good in expectation according to some plausible worldview.

Diversification is of course not new in the EA community; it's an approach taken by Open Phil, and this post builds upon their "Strong uncertainty" factor, although most organizations tend not to consider the effects of interventions on non-target causes/worldviews, where hedging becomes useful.

An illustrative example

I will assume, for simplicity, constant marginal cost-effectiveness across each domain/effect/worldview, and that the effects of the different interventions are independent of one another. Decreasing marginal cost-effectiveness is also a separate reason for diversification, so by assuming a constant rate (which I expect is also approximately true for small donors), we can consider the uncertainty argument independently. (Thanks to Michael_Wiebe for pointing this out.)

Suppose you have deep or moral uncertainty about the effects of a given global health and poverty intervention on nonhuman animals, farmed or wild: enough uncertainty that your expected value for the intervention ranges across positive and negative values, where the negative comes from effects on nonhuman animals, due to moral uncertainty about how to weigh the experiences of nonhuman animals and welfare in the wild, the meat eater problem (the intervention may increase animal product consumption) and deep uncertainty about the effects on wild animals.

You could represent the expected cost-effectiveness across these components as a vector of ranges. Assuming independent effects on each component, you might write this as the following set, a box in 3 dimensions:
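(The notation and numbers here and below are purely hypothetical, chosen only to match the qualitative description rather than taken from any actual cost-effectiveness estimate; the ranges are per unit of funding.)

A_1 = ([6, 10], [-4, 1], [-2, 1])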

Here, the first component is the range of expected cost-effectiveness for the humans (living in poverty), the second for farmed animals, and the third for wild animals. These aren't necessarily in the same comparable utility units across these three components. The point is that two of the components are plausibly negative in expectation, while the first is only positive in expectation, and it's plausible that the intervention does more harm than good in expectation or more good than harm in expectation. (Depending on your kind of uncertainty, you might be able to just add the components together into a single range instead, but I will continue to illustrate with separate components, since that's more general and can capture deeper uncertainty and worse moral uncertainty.)

You might also have an intervention targeting farmed animals and deep or moral uncertainty about its effects on wild animals. Suppose you represent the expected effectiveness as follows, with the effects on humans first, then on farmed animals and then on wild animals, as before:
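(Again, hypothetical numbers; here I also assume a small, possibly negative effect on humans, e.g. through food prices.)

A_2 = ([-0.5, 0.5], [2, 5], [-1, 1])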

You represent the expected effectiveness for a wild animal intervention as follows, with the effects on humans first, then on farmed animals and then on wild animals:
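(Hypothetical numbers again, likewise with a small, possibly negative effect on humans.)

A_3 = ([-0.5, 0.5], [0, 0.5], [2, 6])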

And finally, you have a default "do nothing" or "business as usual" option, e.g. spending selfishly:
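(Same illustrative notation as above.)

A_0 = (0, 0, 0)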

I model this option as all 0s, since I'm considering the differences in value compared with doing nothing, not the expected value in the universe.

Now, based on this example, we aren't confident that any of these interventions is better in expectation than doing nothing, and generally, none of them definitely beats any other in expectation, so on this basis, we might say all of them are permissible according to the maximality rule. However, there are portfolios of these interventions that are better than doing nothing. Assuming a budget of 10 units, one such portfolio (better than doing nothing) is the following:
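(Using the hypothetical numbers above, and adding the weighted intervals componentwise.)

1*A_1 + 4*A_2 + 5*A_3 = ([1.5, 14.5], [4, 23.5], [4, 35])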

That is, for every unit spent on one of the interventions, you spend 4 units on a second and 5 units on a third. We can divide by 10 to normalize. Notice that each component is strictly positive, so this portfolio is good in expectation (better than doing nothing) for humans, farmed animals and wild animals simultaneously.

According to recommendation 1, doing nothing is now ruled out, impermissible. This does not depend on the fact that I took differences with doing nothing, since we can shift them all the same way.

According to recommendation 2, which I only weakly endorse, each individual intervention is now ruled out, since each was plausibly negative (compared to doing nothing) in expectation, and we must choose among the portfolios that are robustly positive in expectation. Similarly, portfolios with only two of the interventions are also ruled out, since at least one of their components will have negative values in its range.
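To make the bookkeeping explicit, here is a small Python sketch using the same hypothetical intervals as above, together with the constant-marginal-cost-effectiveness and independence assumptions; it computes a portfolio's componentwise bounds and checks whether every lower bound is strictly positive, i.e. whether the portfolio robustly beats doing nothing:

```python
import numpy as np

# Hypothetical expected cost-effectiveness intervals per unit of funding,
# ordered (humans, farmed animals, wild animals); made-up numbers from above.
INTERVALS = {
    "A1_global_health":  [(6.0, 10.0), (-4.0, 1.0), (-2.0, 1.0)],
    "A2_farmed_animals": [(-0.5, 0.5), (2.0, 5.0),  (-1.0, 1.0)],
    "A3_wild_animals":   [(-0.5, 0.5), (0.0, 0.5),  (2.0, 6.0)],
}

def portfolio_box(allocation):
    """Componentwise [lower, upper] expected value of a funding allocation,
    assuming constant marginal cost-effectiveness and independent components."""
    return sum(amount * np.array(INTERVALS[name]) for name, amount in allocation.items())

def robustly_positive(allocation):
    """True iff every component's lower bound is strictly positive, i.e. the
    portfolio beats doing nothing under every worldview in the box."""
    return bool((portfolio_box(allocation)[:, 0] > 0).all())

mixed = {"A1_global_health": 1, "A2_farmed_animals": 4, "A3_wild_animals": 5}
print(portfolio_box(mixed)[:, 0])  # lower bounds [1.5, 4.0, 4.0]: all positive
print(robustly_positive(mixed))    # True: robustly better than doing nothing

# With these numbers, no single intervention and no two-intervention split is
# robustly positive, matching recommendation 2's verdict above.
print(robustly_positive({"A1_global_health": 10}))                        # False
print(robustly_positive({"A2_farmed_animals": 5, "A3_wild_animals": 5}))  # False
```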

I constructed this example by thinking about how I could offset each intervention's harms with another's. The objection that offsetting is suboptimal doesn't apply, since, by construction, I can't decide which of the interventions is best in expectation, although I know it's not doing nothing.

Note also that the cost-effectiveness values do not depend on when the effects occur. Similarly, we can hedge over time: the plausible negative effects of one intervention can be made up for with positive effects from another that occur far earlier or later in time.

Dependence

Now, we as­sumed the in­ter­ven­tions’ com­po­nents were in­de­pen­dent of one an­other and of the other in­ter­ven­tions’ com­po­nents. With de­pen­dence, all the port­fo­lios that were ro­bustly at least as good as do­ing noth­ing will still be ro­bustly as good as do­ing noth­ing, since the lower bounds un­der the in­de­pen­dent case are lower bounds for the de­pen­dent case, but we could have more such port­fo­lios. On the other hand, differ­ent port­fo­lios could be­come dom­i­nated by oth­ers when mod­el­ling de­pen­dence that weren’t un­der the as­sump­tion of in­de­pen­dence.

Lexicality and deontological constraints

Under some deontological ethical theories, rule violations (that you commit) can't be compensated for, no matter how small the violation. You could represent rule violations as −∞, or multiples of it (without multiplying through), or use vectors for individual components to capture lexicality. Portfolios that include interventions that violate some rule will generally also violate that rule. However, we should be careful not to force cardinalization on theories that are only meant to be ordinal and do not order risky lotteries according to standard rationality axioms; see some quotes from MacAskill's thesis here on this, and this section from MichaelA's post.
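If a theory allows being represented this way at all, one simple sketch is to compare options lexicographically, so that no welfare gain compensates for an extra rule violation (the representation below is only an illustration of mine, and it builds in more structure than a purely ordinal theory would accept):

```python
# Sketch: an option's value is (rule_violations, welfare), compared so that
# fewer violations always wins, and welfare only breaks ties among options
# with equally many violations.
def better(option_a, option_b):
    violations_a, welfare_a = option_a
    violations_b, welfare_b = option_b
    return (-violations_a, welfare_a) > (-violations_b, welfare_b)

print(better((0, 1.0), (1, 1_000_000.0)))  # True: huge welfare doesn't offset a violation
```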

Other potential examples

  1. Pairing human life-saving interventions with family planning interventions can potentially minimize externalities due to human population sizes, which we may have deep uncertainty about (although this requires taking a close look at the population effects of each, and it may not work out). These interventions could even target different regions based on particular characteristics, e.g. average quality of life or meat consumption. Counterfactually reducing populations where average welfare is worse (or meat consumption is higher) and increasing them by the same amount where it's better (or meat consumption is lower) increases average and total human welfare (or total farmed animal welfare, assuming net negative lives) without affecting human population size. Of course, this is a careful balancing act, especially under deep uncertainty. Furthermore, there may remain other important externalities.

  2. We might find it plausible that incremental animal welfare reform contributes to complacency and moral licensing, and have deep uncertainty about whether this is actually the case in expectation, but we might find more direct advocacy interventions that can compensate for this potential harm so that their combination is robustly positive.

  3. Extinction risk interacts with animal welfare in many ways: extinction would end factory farming, could wipe out all wild animals if complete, and could prevent us from addressing wild animal suffering if only humans go extinct, while if we don't go extinct, we could spread animal suffering to other planets. There are other interactions and correlations with s-risks, too, since things that risk extinction could also lead to far worse outcomes (e.g. AI risk, conflict), or could prevent s-risks.

  4. Animal advocacy seems good for s-risks due to moral circle expansion, but there are also plausible effects going in the opposite direction, including correlations with environmentalism or "wrong" population ethics, near-misses and strategic threats.

  5. In the wild animal welfare space, I've been told about pairing interventions that reduce painful causes of death with population control methods to get around uncertainty about the net welfare in the wild. In principle, with a portfolio approach, it may not be necessary to pair these interventions on the same population to ensure a positive outcome in expectation, although applying them to the same population may prevent ecological risks and reduce uncertainty further.

  6. Substitution effects between animal products. We might have moral uncertainty about the sign of the expected value of an intervention raising the price of fish, in case it leads consumers to eat more chicken, and similarly for an intervention raising the price of chicken, in case it leads consumers to eat more fish. Combining both interventions can reduce both chicken and fish consumption. As before, these interventions do not even have to target the same region, as long as the increase in fish consumption in the one region is smaller than the decrease in the other (assuming similar welfare, amount of product per animal, etc., or taking these into account), and the same for chicken consumption. (A toy numbers-check follows this list.)
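A toy numbers-check for example 6 (the figures are made up): a fish-price intervention in one region and a chicken-price intervention in another can reduce total consumption of both products, even though each intervention alone plausibly increases consumption of the other product through substitution.

```python
# Made-up changes in consumption from each intervention, in its own region.
fish_price_campaign    = {"fish": -100, "chicken": +40}  # region 1: less fish, some switching to chicken
chicken_price_campaign = {"fish": +30,  "chicken": -90}  # region 2: less chicken, some switching to fish

combined = {product: fish_price_campaign[product] + chicken_price_campaign[product]
            for product in fish_price_campaign}
print(combined)  # {'fish': -70, 'chicken': -50}: both fall overall
```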

Questions and possible implications

  • I think recommendation 2 would push us partially away from global health and poverty work and extinction risk work and towards work for nonhuman animals and s-risks, due to the interactions I discuss above.

  • Should we choose portfolios as individuals or as a community? If as a community, and we endorse recommendation 2 for the community, i.e. the community should do robustly better in expectation than doing nothing, then individuals may be required to focus on plausible domains/worldviews/effects according to which the community is plausibly doing more harm than good in expectation, if any exists. This could mean many more EAs should focus on work for nonhuman animals and s-risks, since global health and poverty work and extinction risk work, some of the largest parts of the EA portfolio, are plausibly net negative due to interactions with these.

    • I personally doubt that we have fundamental reasons to decide as a community (coordination and cooperation are instrumental reasons). Either our (moral) reasons are agent-relative or agent-neutral/universal; they are not relative to some specific and fairly arbitrarily defined group like the EA community.

  • Should we model the difference compared to doing nothing and use doing nothing as a benchmark, as I endorse in recommendation 2, or just model the overall outcomes under each intervention (or, more tractably, all pairwise differences, allowing us to ignore what's unaffected)? What I endorse seems similar to risk aversion with respect to the difference you make, which centers the agent and which Snowden claims is incompatible with impartiality. In this case, rather than risk aversion, it's closer to uncertainty/ambiguity aversion. It also seems non-consequentialist, since it treats one option differently from the rest, and consequentialism usually assumes no fundamental difference between acts and omissions (and the concept of omission itself may be shaky).

  • What other plausible EA-relevant examples are there where hedging can help by compensating for plausible expected harms?

  • Can we justify stronger rules if we assume more structure to our uncertainty, short of specifying full distributions? What if I think one worldview is more likely than another, but I can't commit to actual probabilities? What if I'm willing to say something about the difference or ratio of probabilities?