Causal Networks Model I: Introduction & User Guide

This is the user guide for the Causal Networks Model, created by CEA summer research fellows Alex Barry and Denise Melchin. Owen Cotton-Barratt provided the original idea, which was further developed by Max Dalton. Both, along with Stefan Schubert, provided comments and feedback throughout the process.

This is the beginning of a multi-part series of posts explaining what the model is, how it works and some of the findings it leads to. The structure is as follows (links will be added as new posts go up):

  1. Introduction & user guide (this post)

  2. Technical guide (optional reading, a description of the technical details of how the model works)

  3. Findings (a write-up of all our findings)

  4. Climate catastrophe (one particularly major finding)

The structure of this post is as follows:

  1. Summary

  2. Introduction

  3. The qualitative model

  4. The quantitative model

  5. Limitations of the quantitative model

  6. How to use the quantitative model

  7. Conclusion

1. Summary

The indirect effects of actions seem very important when assessing them, but even for common EA activities like donations to global poverty charities, very little is generally known about these effects. To try to get a handle on this, we created a quantitative model which attempts to capture the indirect effects of donating to common EA causes. The results are very speculative and should mostly be taken as a call for further research, but we have used the model to get some tentative results, which will be explored in Part III.

Our model depends to a high degree on extremely uncertain information (such as the probability of existential risks) about which there is likely to be significant disagreement. We therefore created a user tool for our model which allows the user to easily alter the most contentious values and see how the outcome is affected.

2. Introduction

When trying to do the most good, the indirect effects of one's actions are important, and there have been many debates in the EA community about indirect effects (Do human interventions have more positive indirect effects than animal interventions? Do the indirect effects of donations to GiveWell charities dwarf their direct effects?). However, our current knowledge of the indirect effects of common EA actions and how to handle them is still very limited.

For example, consider the impact of GiveDirectly (which allows money to be transferred to some of the world's poorest people) on population levels. While donating via GiveDirectly doesn't affect population levels directly, it is likely to do so indirectly by increasing GDP in the least developed countries, which tends to lead to fewer births.

In an effort to better understand the significance of indirect effects, we have created a quantitative model to calculate the rough orders of magnitude of the likely indirect effects of funding common EA causes. We also look at how the results are affected by different starting assumptions.

Our model is very simplified due to time constraints, and it does not account for a number of effects which are likely to be important. However, we still think its results are useful for pointing to areas for further research.

3. The qualitative model

We began by creating a flowchart ('the qualitative model') showing the paths of cause and effect between different parameters. Each node represents a parameter, and nodes are connected to one another via arrows if one affects the other.

For example, the node 'AMF Funding by the EA Community' is connected to the node 'QALYs saved' to show that changing AMF funding will change the total number of QALYs saved by the EA community.
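To make this structure concrete, here is a minimal sketch (in Python, purely for illustration; the actual model is implemented in a spreadsheet) of how such a flowchart can be represented as a directed graph. The node names and connections are just the examples mentioned in this post, not the full model:

```python
# Each key is a node (parameter); its list holds the nodes it directly
# affects, i.e. the arrows leaving it in the flowchart.
causal_graph = {
    "AMF Funding by the EA Community": ["QALYs saved"],
    "GiveDirectly funding": ["GDP per capita in least developed countries"],
    "GDP per capita in least developed countries": ["Human population levels"],
}

def downstream(graph, node):
    """All nodes reachable from `node` by following the arrows."""
    seen, stack = set(), list(graph.get(node, []))
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(graph.get(n, []))
    return seen

print(downstream(causal_graph, "GiveDirectly funding"))
# -> {'GDP per capita in least developed countries', 'Human population levels'}
```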

The whole qualitative version of the model, which consists of all the effects we have considered and forms the basis of the quantitative model, is available here. We advise you to take a look at it for a better understanding of the model and the following sections.

The nodes in green (e.g. charities recommended by GiveWell) are the ones we, the EA community, can directly affect, such as through funding. The transparent nodes act as 'buckets' (e.g. 'Global Poverty funding') for those more specific funding pursuits. The grey nodes are the outputs people typically directly care about (e.g. 'QALYs saved'). Finally, the red nodes are intermediate nodes (e.g. 'GDP per capita in least developed countries') which come between the funding we can affect and the outputs we directly care about.

As you can see, our model is focused on global poverty (through the traditional GiveWell-recommended charities), animal suffering (which results from the consumption of farmed land animals) and global catastrophic and existential risks before 2050. It tries to measure the effects of interventions in terms of QALYs, reported (human) well-being, human population levels, and land animal welfare and population.

4. The quantitative model

We then developed a quantitative model based on the qualitative model. Using this we can begin to answer questions like: how many more QALYs will be saved by increasing AMF funding by one million dollars? We built on the qualitative model by estimating how much a change to one node will affect the downstream nodes.

In the above example of the impact of AMF funding on QALYs, we could use numbers estimated by GiveWell. In other cases, estimating impact was much less straightforward. In certain cases we realised that the numbers given were likely to be contentious, and this led us to develop the customisable version of the quantitative model (see section 6 below).

We tried to quantify the effect of each node on all the nodes it is connected to. We did this in two different ways: using a differential and using an elasticity. By differential, we mean the effect of increasing the parameter of a node by one unit. By elasticity, we mean the effect of increasing the parameter of a node by 1%. In our model we therefore decided whether each node would be 'differential' or 'elastic'.

Going back to our previous example, for AMF funding we used a differential. This means that we considered how increasing the funding of AMF by $1 million would impact the number of QALYs saved. We could instead have modelled the impact via an elasticity: what happens if we increase AMF funding by 1%?
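As a rough illustration of the difference (a sketch only, not the model's actual implementation, and with placeholder numbers rather than GiveWell's estimates), a single edge could propagate a change like this:

```python
def propagate(edge_type, coefficient, upstream_change,
              upstream_baseline=None, downstream_baseline=None):
    """Change in a downstream node caused by a change in an upstream node.

    'differential': coefficient is downstream units per one upstream unit
                    (e.g. QALYs saved per extra $1m of funding).
    'elasticity':   coefficient is the % downstream change per 1% upstream change.
    """
    if edge_type == "differential":
        return coefficient * upstream_change
    if edge_type == "elasticity":
        fractional_change = upstream_change / upstream_baseline
        return coefficient * fractional_change * downstream_baseline
    raise ValueError(f"unknown edge type: {edge_type}")

# Differential: a placeholder 300 QALYs per $1m, for $2m of extra funding.
print(propagate("differential", 300, 2))  # -> 600

# Elasticity: a 1% rise in funding (baseline 100 -> +1) with elasticity 0.5,
# applied to a downstream baseline of 10,000 QALYs.
print(propagate("elasticity", 0.5, 1,
                upstream_baseline=100, downstream_baseline=10_000))  # -> 50.0
```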

  • A more detailed explanation of this process can be found in the technical guide, which will be Part II of this series.

  • A list of the differentials and elasticities used, along with the reasoning behind them, can be found in this chart, which answers questions like 'How much does increasing GDP in the least developed countries impact population levels?' and 'How much does raising global egg consumption increase farmed chicken population levels?'

  • A list of the static inputs used, along with the reasoning behind them, can be found in this chart, which answers questions like 'Which countries do we consider the least developed?' and 'How much money has ACE moved in the past year?'

We have written up our own findings in Part III of the series and considered how much they differ given different reasonable inputs for the contentious variables.

5. Limitations of the quantitative model

Many of the results of our model depend to a large extent on particular variables about whose values we have very little information (two particularly uncertain sets of values are those related to existential risk and to the chance of creating cost-effective cultured ('clean') meat). Because of this, and because of the general limitations of the model, any findings should be taken as invitations to further research, rather than as concrete pronouncements of effectiveness. (In the past we have found a fair number of mistakes involving numbers being wrong by a few orders of magnitude!)

The model does not explicitly model the passing of time. Instead it takes as inputs increases in funding for different EA cause areas in 2017, calculates various intermediary effects (including simply modelled feedback loops) and then outputs effects in 2050. We chose 2050 as the end point because of the difficulty of extrapolating estimates much further into the future. The model is therefore not very useful for considering most long-term future effects, although it does output the probability of global catastrophic risks and existential risks occurring before 2050.

The ethical theories considered are also constrained: the outputs are only sufficient to make crude short-term total utilitarian calculations (which are explained in section 6), to estimate QALYs saved as an approximation of value according to some forms of person-affecting views, and to set out existential and global catastrophic risk in the time frame considered (2017-2050).

There are also more general arguments to be made against taking cost-effectiveness estimates too literally, as laid out in this classic GiveWell post, which you might want to keep in mind.

6. How to use the quantitative model

As previously stated, along with the model we have created a user tool that lets you increase various areas of EA funding and customise the contentious variables. You can then see what difference it would make for outputs like population levels or QALYs saved.

To use the tool, make a copy of it (click 'File' in the upper left corner and then 'Create Copy'). Make sure you are on the 'User interface' page at the bottom of the screen.

The Important Variables section (in yellow) consists of inputs for which (i) different values have disproportionate effects on the model's output and (ii) different people could have strongly divergent opinions. They are currently filled in with default values given by us, although these do not necessarily represent our opinion. (You can find the reasoning for our default 'important variables' here.)

The Changes In Funding section (in green) contains the model's inputs, allowing you to compare the effects of funding different causes. Note that the final model is linear with respect to these inputs, so increasing funding by $10 million will not produce any interesting effects not already seen when increasing it by $1 million. In the real world these relationships have diminishing returns: increasing funding by 100% will often not change outputs by 100 times as much as increasing funding by 1%. Keep this in mind if you use large numbers as inputs.
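As a worked illustration of this linearity (with a placeholder figure, not a model default): if an extra $1 million of funding for some cause yields 300 QALYs in the model, an extra $10 million will yield exactly 3,000 QALYs, ten times as much, no matter how large the input.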

The Output section (in blue) contains the raw outputs of the model (i.e. the grey nodes in the qualitative model) which people are likely to care about most.

The Morality Inputs section (in pink) lets you define a custom moral weighting, for example at what point you think a human life is not worth living according to the Cantril ladder (a tool to evaluate life satisfaction on a scale from 0 to 10). These weightings affect the Morality Outputs.

The Morality Outputs section (in purple) then shows the effects of changes in funding on human and animal welfare levels. The Morality Outputs starting with 'Total []' are calculated by multiplying average welfare (normalised to a −1 to 1 scale) by population, then multiplying by a sentience modifier (if applicable) and summing over the different species included (if looking at non-human animals). A value of 1 is meant to represent a counterfactual year of human life at 10/10 satisfaction.

Total human wellbeing is represented by the following formula (again taking the difference between the new and the counterfactual funding):

(Actual global average wellbeing measured by the Cantril scale − the minimum step on the Cantril scale for a life to be worth living) / 5 × population level.
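As a worked example with illustrative numbers (not the model's defaults): if average wellbeing on the Cantril scale is 5.5, the minimum step for a life worth living is set at 2, and the population is 7.5 billion, the term above comes to (5.5 − 2) / 5 × 7.5 billion = 5.25 billion; the output is then the difference between this term under the new funding and under the counterfactual.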

So if you think some humans have lives not worth living, you can set the number accordingly and have their lives traded off against human lives which are worth living. There's plenty of data on the Cantril scale from Gallup, which you are advised to look over before using the tool.

The different animal weightings are 'Brain' (which values animals by the number of neurons they have relative to a human), 'Brian weights' (which are based on the weights given by Brian Tomasik) and a custom weighting which you can specify in the 'Morality Inputs' section.

For non-human animal lives the normalised average welfare works a bit differently. We've assumed a 0-10 scale, with 5 being neutral. You can change where on this scale the quality of life of different animals stands in the 'Important Variables' section.
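Putting the above together, here is a minimal sketch of how a 'Total []' animal output could be computed as described in this section. The species, welfare scores, populations and sentience modifiers are all placeholders, not the model's defaults:

```python
# Placeholder data: welfare on the 0-10 scale (5 = neutral), population,
# and a sentience modifier. Illustrative values only.
species = {
    "farmed chickens": {"welfare": 3.0, "population": 2.3e10, "sentience": 0.01},
    "farmed cattle":   {"welfare": 4.5, "population": 1.5e9,  "sentience": 0.05},
}

def total_animal_welfare(species):
    total = 0.0
    for name, s in species.items():
        normalised = (s["welfare"] - 5) / 5   # map 0-10 (neutral at 5) to -1..1
        total += normalised * s["population"] * s["sentience"]
    return total

# A negative total means net suffering under these placeholder assumptions.
print(total_animal_welfare(species))
```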

You can use the 'Reset values' button to change the numbers back to the default values.

You might disagree with some connections or inputs which aren't customisable in the user tool. If so, you can change or remove connections and nodes using our Manual Input sheet (which is somewhat less user-friendly). A detailed explanation of how to use this sheet can be found in our technical guide, which is Part II of the series.

(You can also see most of the above explanation in the 'Guide' tab in the Google Doc.)

7. Conclusion

Have fun, but stay safe: don't interpret the model's results literally. Take a look at our reasoning for our static inputs and our elasticities and differentials, and let us know if you catch any errors.

Our next post is the technical guide to the model, which constitutes Part II of the series. You can find it here.

Feel free to ask questions in the comment section or email us (denisemelchin@gmail.com or alexbarry40@gmail.com).