What consequences?

This is the first in a series of posts exploring consequentialist cluelessness and its implications for effective altruism:

  • This post describes cluelessness & its relevance to EA, arguing that for many popular EA interventions we don’t have a clue about the intervention’s overall net impact.

  • The second post considers a potential reply to concerns about cluelessness.

  • The third post examines how tractable cluelessness is – to what extent can we grow more clueful about an intervention through intentional effort?

  • The fourth post discusses how we might do good while being clueless to an important extent.

My prior is that cluelessness presents a profound challenge to effective altruism in its current instantiation, and that we need to radically revise our beliefs about doing good such that we prioritize activities that are robust to moral & empirical uncertainty.

My goal in writing this piece is to elucidate this position, or to discover why it’s mistaken. I’m posting in serial form to allow more opportunity for forum readers to change my mind about cluelessness and its implications.


By “cluelessness”, I mean the possibility that we don’t have a clue about the overall net impact of our actions.[1] Another way of framing this concern: when we think about the consequences of our actions, how do we determine what consequences we should consider?

First, some definitions. The consequences of an action can be divided into three categories:

  • Proximate consequences – the effects that occur soon afterward to the intended object(s) of an action. Relatively easy to observe and measure.

  • Indirect consequences – the effects that occur soon afterward to unintended object(s) of an action. These could also be termed “cross-stream” effects. Relatively difficult to observe and measure.

  • Long-run consequences – the effects of an action that occur much later, including effects on both intended and unintended objects. These could also be termed “downstream” effects. Impossible to observe and measure; most long-run consequences can only be estimated.[2]


Effective altruist approaches towards consequences

EA-style reasoning addresses consequentialist cluelessness in one of two ways:

1. The brute-good approach – collapsing the consequences of an action into a proximate “brute-good” unit, then comparing the aggregate “brute-good” consequences of multiple interventions to determine the intervention with the best (brute good) consequences.

    • For example, GiveWell uses “deaths averted” as a brute-good unit, then converts other impacts of the intervention being considered into “deaths-averted equivalents”, then compares interventions to each other using this common unit. (A minimal numeric sketch of this common-unit bookkeeping appears just after this list.)

    • This approach is common among the cause areas of animal welfare, global development, and EA coalition-building.

2. The x-risk reduction approach – simplifying “do the actions with the best consequences” into “do the actions that yield the most existential-risk reduction.” Proximate & indirect consequences are only considered insofar as they bear on x-risk; the main focus is on the long run: whether or not humanity will survive into the far future.

    • Nick Bostrom makes this explicit in his essay, Astronomical Waste: “The utilitarian imperative ‘Maximize expected aggregate utility!’ can be simplified to the maxim ‘Minimize existential risk!’”

    • This approach is common in the x-risk reduction cause area.
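
To make the brute-good bookkeeping concrete, here is a minimal sketch of the common-unit comparison described in approach 1. It is illustrative only: the conversion weights, outcome figures, and costs are invented for this example (they are not GiveWell’s actual moral weights), and the calculation silently ignores indirect & long-run effects – which is exactly the move questioned below.

```python
# Minimal sketch of the brute-good approach: collapse each intervention's
# measured (proximate) outcomes into a single common unit, then rank
# interventions by cost per unit. All weights and figures are invented.

# Hypothetical conversion weights: how many "deaths-averted equivalents"
# one unit of each proximate outcome is judged to be worth.
WEIGHTS = {
    "deaths_averted": 1.0,
    "income_doublings": 0.01,     # purely illustrative trade-off
    "school_years_added": 0.002,
}

def deaths_averted_equivalents(outcomes: dict) -> float:
    """Collapse a dict of proximate outcomes into one brute-good number."""
    return sum(WEIGHTS.get(k, 0.0) * v for k, v in outcomes.items())

# Hypothetical interventions with hypothetical measured outcomes and costs.
interventions = {
    "bed_nets":       {"cost": 5_000_000,
                       "outcomes": {"deaths_averted": 1_200, "income_doublings": 10_000}},
    "cash_transfers": {"cost": 5_000_000,
                       "outcomes": {"deaths_averted": 40, "income_doublings": 90_000}},
}

for name, data in interventions.items():
    equiv = deaths_averted_equivalents(data["outcomes"])
    print(f"{name}: {equiv:,.0f} deaths-averted equivalents; "
          f"${data['cost'] / equiv:,.0f} per equivalent")
```

Everything the sketch counts is proximate; whatever the interventions do indirectly or in the long run simply never enters the ledger.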

EA focus can be imagined as a bimodal distribution – EA either considers only the proximate effects of an intervention, ignoring its indirect & long-run consequences; or considers only the very long-run effects of an intervention (i.e. to what extent the intervention reduces x-risk), considering all proximate & indirect effects only insofar as they bear on x-risk reduction.[3]

Consequences that fall between these two peaks of attention are not included in EA’s moral calculus, nor are they explicitly determined to be of negligible importance. Instead, they are mentioned in passing, or ignored entirely.

This is problematic. It’s likely that for most interventions, these consequences compose a substantial portion of the intervention’s overall impact.


Cluelessness and the brute-good approach

The cluelessness problem for the brute-good approach can be stated as follows:

Due to the difficulty of observing and measuring indirect & long-run consequences of interventions, we do not know the bulk of the consequences of any intervention, and so cannot confidently compare the consequences of one intervention to another. Comparing only the proximate effects of interventions assumes that proximate effects compose the majority of interventions’ impact, whereas in reality the bulk of an intervention’s impact is composed of indirect & long-run effects which are difficult to observe and difficult to estimate.[4]

The brute-good approach often implicitly assumes symmetry of non-proximate consequences (i.e. for every indirect & long-run consequence, there is an equal and opposite consequence such that indirect & long-run consequences cancel out and only proximate consequences matter). This assumption seems poorly supported.[5]
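
A toy expected-value calculation makes clear how much work this cancellation assumption is doing. The numbers below are invented purely for illustration: the point is that a hard-to-estimate, low-probability, large-magnitude long-run effect can swamp a confidently measured proximate benefit, so assuming such effects cancel (or ignoring them) can change the sign of the estimate.

```python
# Toy illustration (all numbers invented): the proximate effect is clearly
# positive, but a single poorly-known long-run effect dominates the expected
# value, so "the non-proximate effects cancel out" is a substantive assumption.

proximate_benefit = 1_000        # brute-good units gained with near-certainty

# One hypothetical long-run effect: a small, poorly known probability of a
# very large harm, expressed in the same units as the proximate benefit.
p_long_run_harm = 0.02           # unknown in reality; 2% assumed here
long_run_harm = -100_000

ev_proximate_only = proximate_benefit
ev_with_long_run = proximate_benefit + p_long_run_harm * long_run_harm

print(f"EV, proximate effects only:        {ev_proximate_only:+,}")
print(f"EV, including one long-run effect: {ev_with_long_run:+,.0f}")
# The sign flips. Symmetry would require an equal and opposite long-run
# effect to cancel this term -- an assumption, not an observation.
```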

It might be thought that indirect & long-run consequences can be surfaced as part of the decision-making process, then included in the decision-maker’s calculus. This seems very difficult to do in a believable way (i.e. a way in which we feel confident that we’ve uncovered all crucial considerations). I will consider this issue further in the next post of this series.

Some examples follow, to make the cluelessness problem for the brute-good approach salient.

Example: baby Hitler

Consider the position of an Austrian physician in the 1890s who was called to tend to a sick infant, Adolf Hitler.

Considering only proximate effects, the physician should clearly have treated baby Hitler and made efforts to ensure his survival. But the picture is clouded when indirect & long-run consequences are added to the calculus. Perhaps letting baby Hitler die (or even committing infanticide) would have been better in the long run. Or perhaps the German zeitgeist of the 1920s and 30s was such that the terrors of Nazism would have been unleashed even absent Hitler’s leadership. Regardless, the decision to minister to Hitler as a sick infant is not straightforward when indirect & long-run consequences are considered.

A potential objection here is that the Austrian physician could in no way have foreseen that the infant they were called to tend would later become a terrible dictator, so the physician should have done what seemed best given the information they could uncover. But this objection only highlights the difficulty presented by cluelessness. In a very literal sense, a physician in this position is clueless about which action would be best. Assessing only proximate consequences would provide some guidance about what action to take, but this guidance would not necessarily point to the action with the best consequences in the long run.

Example: bed net distributions in unstable regions

The Against Malaria Foundation (AMF) funds bed net distributions in developing countries, with the goal of reducing malaria incidence. In 2017, AMF funded its largest distribution to date, over 12 million nets in Uganda.

Uganda has had a chronic problem with terror groups, notably the Lord’s Resistance Army operating in the north and Al-Shabab carrying out attacks in the capital. Though the country is believed to be relatively stable at present, there remain non-negligible risks of civil war or government overthrow.

Considering only the proximate consequences, distributing bed nets in Uganda is probably a highly cost-effective method of reducing malaria incidence and saving lives. But this assessment is muddied when indirect and long-run effects are also considered.

Perhaps saving the lives of young children increases the supply of child-soldier recruits for rebel groups, leading to greater regional instability.

Perhaps importing & distributing millions of foreign-made bed nets disrupts local supply chains and breeds Ugandan resentment toward foreign aid.

Perhaps stabilizing the child mortality rate during a period of fundamentalist-Christian revival increases the probability of a fundamentalist-Christian value system becoming locked in, which could prove problematic further down the road.

I’m not claiming that any of the above are likely outcomes of large-scale bed net distributions. The claim is that the above are all possible effects of a large-scale bed net distribution (each with a non-negligible, unknown probability), and that because of the many possible effects like these, we are prospectively clueless about the overall impact of a large-scale bed net distribution.

Example: direct-action animal-welfare interventions

Some animal welfare activists advocate direct action, the practice of directly confronting problematic food-industry practices.

In 2013, animal-welfare activists organized a “die-in” at a San Francisco Chipotle. At the die-in, activists confronted Chipotle consumers with claims about the harm inflicted on farm animals by Chipotle’s supply chain.

The die-in likely had the proximate effect of raising awareness of animal welfare among the Chipotle consumers and employees who were present during the demonstration. Increasing social awareness of animal welfare is probably positive according to consequentialist perspectives that give moral consideration to animals.

However, when indirect and long-run consequences are also considered, the overall impact of direct-action demonstrations like the die-in is unclear. Highly confrontational demonstrations may result in the animal welfare movement being labeled “radical” or “dangerous” by the mainstream, thus limiting the movement’s influence.

Confrontational tactics may also be controversial within the animal welfare movement, causing divisiveness and potentially leading to a schism, which could harm the movement’s efficacy.

Again, I’m not claiming that the above are likely effects of direct-action animal-welfare interventions. The claim is that indirect & long-run effects like these each have a non-negligible, unknown probability, such that we are prospectively clueless regarding the overall impact of the intervention.


Cluelessness and the existential risk reduction approach

Unlike the brute-good approach, which tends to overweight the impact of proximate effects and underweight that of indirect & long-run effects, the x-risk reduction approach focuses almost exclusively on the long-run consequences of actions (i.e. how they affect the probability that humanity survives into the far future). Interventions can be compared according to a common criterion: the amount by which they are expected to reduce existential risk.

While I think cluelessness poses less difficulty for the x-risk reduction approach, it remains problematic. The cluelessness problem for the x-risk reduction approach can be stated as follows:

Interventions aimed at reducing existential risk have a clear criterion by which to make comparisons: “which intervention yields a larger reduction in existential risk?” However, because the indirect & long-run consequences of any specific x-risk intervention are difficult to observe, measure, and estimate, arriving at a believable estimate of the amount of x-risk reduction yielded by an intervention is difficult. Because it is difficult to arrive at believable estimates of the amount of x-risk reduction yielded by interventions, we are somewhat clueless when trying to compare the impact of one x-risk intervention to another.
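
Here is a toy sketch of what that comparison problem looks like quantitatively. The intervals below are invented: the point is only that when each intervention’s effect on existential risk is known no more precisely than a wide range (one of which includes the possibility of increasing risk), the comparison between interventions is close to a coin flip.

```python
# Toy Monte Carlo comparison of two hypothetical x-risk interventions.
# Each intervention's effect on existential risk is known only as a wide
# interval (in percentage points of risk reduction), and one interval
# includes negative values, i.e. the intervention might backfire.
# All intervals are invented for illustration.
import random

random.seed(0)

# Intervention A: modest but probably positive reduction.
# Intervention B: possibly larger reduction, but might increase risk.
A_RANGE = (0.00, 0.10)
B_RANGE = (-0.20, 0.30)

trials = 100_000
a_better = 0
b_backfires = 0
for _ in range(trials):
    a = random.uniform(*A_RANGE)
    b = random.uniform(*B_RANGE)
    a_better += a > b
    b_backfires += b < 0

print(f"P(A beats B)        ~ {a_better / trials:.2f}")
print(f"P(B increases risk) ~ {b_backfires / trials:.2f}")
# With estimates this wide, neither the ranking nor the sign is settled --
# a quantitative face of the cluelessness worry for x-risk comparisons.
```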

An example follows to make this salient.

Example: stratospheric aerosol injection to blunt impacts of climate change

Injecting sulfate aerosols into the stratosphere has been put forward as an intervention that could reduce the impact of climate change (by reflecting sunlight away from the earth, thus cooling the planet).

However, it’s possible that stratospheric aerosol injection could have unintended consequences, such as cooling the planet so much that the surface is rendered uninhabitable (incidentally, this is the background story of the film Snowpiercer). Because aerosol injection is relatively cheap to do (on the order of tens of billions USD), there is concern that small nation-states, especially those disproportionately affected by climate change, might deploy aerosol injection programs without the consent or foreknowledge of other countries.

Given this strategic landscape, the effects of calling attention to stratospheric aerosol injection as a cause are unclear. It’s possible that further public-facing work on the intervention results in international agreements governing the use of the technology. This would most likely reduce existential risk along this vector.

However, it’s also possible that further public-facing work on aerosol injection makes the technology more discoverable, revealing the technology to decision-makers who were previously ignorant of its promise. Some of these decision-makers might be inclined to pursue research programs aimed at developing a stratospheric aerosol injection capability, which would most likely increase existential risk along this vector.

It is difficult to arrive at believable estimates of the probability that further work on aerosol injection yields an x-risk reduction, and of the probability that further work yields an x-risk increase (though more granular mapping of the game-theoretic and strategic landscape here would increase the believability of our estimates).

Taken together, then, it’s unclear whether public-facing work on aerosol injection yields an x-risk reduction on net. (Note too that keeping work on the intervention secret may not straightforwardly reduce x-risk either, as no secret research program can guarantee 100% leak prevention, and leaked knowledge may have a more negative effect than the same knowledge made freely available.)

We are, to some extent, clueless regarding the net impact of further work on the intervention.


Where to, from here?

It might be claimed that, although we start out being clueless about the consequences of our actions, we can grow more clueful by way of intentional effort & investigation. Unknown unknowns can be uncovered and incorporated into expected-value estimates. Plans can be adjusted in light of new information. Organizations can pivot as their approaches run into unexpected hurdles.

Cluelessness, in other words, might be very tractable.

This is the claim I will consider in the next post. My prior is that cluelessness is quite intractable, and that despite best efforts we will remain clueless to an important extent.

The topic definitely deserves careful examination.

Thanks to members of the Mather essay discussion group for thoughtful feedback on drafts of this post. Views expressed above are my own. Cross-posted to my personal blog.


Footnotes

[1]: The term “cluelessness” is not my coinage; I am borrowing it from academic philosophy. See in particular Greaves 2016.

[2]: Indirect & long-run consequences are sometimes referred to as “flow-through effects,” a term which, as far as I can tell, does not make a clean distinction between temporally near effects (“indirect consequences”) and temporally distant effects (“long-run consequences”). This distinction seems interesting, so I will use “indirect” & “long-run” rather than “flow-through effects.”

[3]: Thanks to Daniel Berman for making this point.

[4]: More precisely, the brute-good approach assumes that indirect & long-run consequences will do one of the following:

  • Be negligible

  • Cancel each other out via symmetry (see footnote 5)

  • On net, point in the same direction as the proximate consequences (see Cotton-Barratt 2014: “The upshot of this is that it is likely interventions in human welfare, as well as being immediately effective to relieve suffering and improve lives, also tend to have a significant long-term impact. This is often more difficult to measure, but the short-term impact can generally be used as a reasonable proxy.”)

[5]: See Greaves 2016 for discussion of the symmetry argument, and in particular p. 9 for discussion of why it’s insufficient for cases of “complex cluelessness.”