How to get a new cause into EA

Effective altruism has had three main broad direct causes (global poverty, animal rights, and the far future) for quite some time. I have often heard people worry that it's too hard for a new cause to be accepted by the effective altruism movement. At the same time, I do not really see people presenting new cause areas in the way I think would be most likely to get many EAs to consider them and take them seriously. I wanted to make a quick reference post as a better way for people to propose new cause/intervention areas they might see as promising. Much of this advice could also be used to present a new specific charity within a known high-impact cause area.

1) Be ready to put in some real time

2) Have a specific intervention within the broad cause to compare

3) Compare it to the most relevantly comparable top cause

4) Compare it numerically

5) Focus on one change at a time

6) Use equal rigour

7) Have a summary at the top

1) Be ready to put in some real time

Comparing different cause areas is hard and takes a good amount of research time. In the effective altruism movement there are many people and organizations who work full time comparing interventions within a single cause, and it is generally much harder to compare interventions across cause areas. It is going to take some time to effectively articulate a new cause area, particularly if the EA movement has not spent much collective time considering it. It is not expected, or even possible, that one person does all the research required for a whole cause area, but if you think a cause area is competitive, you will likely have to be the first one to do some of the initial research and start to build a case for why others should consider it. To give EAs enough reason to really consider a cause, it has to stand out among the hundreds of other causes that could be high impact to work on.

2) Have a specific intervention within the broad cause to compare

As mentioned above, comparing whole cause areas is hard. In many ways it is also not the point. If cause area A is more effective than global poverty on average, but none of the specific interventions in it can compete with the best global poverty charities (e.g., AMF), it will still not be a great target to put resources towards. Additionally, it is much harder to get into the details and comparisons of a whole cause area, which will often contain numerous different interventions. The best way around these concerns I have seen is to drill down on an example of a highly promising intervention. For example, if you are making the case that mental health in the third world is a high-impact cause area, look deeply into a specific example, like CBT cell phone applications. With a more specific intervention it will be much easier to fact-check the claims and numerically compare them to the other top interventions EAs currently support.

3) Compare it to the most relevantly comparable top cause

A huge number of causes that are brought up are not directly compared to the most relevant comparable cause area. If someone is making a case for positive psychology and mental health, the natural comparison is to the GiveWell top charities. If it is about wild animal suffering, it needs to be compared to farm animal interventions, and if it is about bio-risk, it could be compared to AI. If someone is sold on the far future and is pitching a new cause area within it, making generalized arguments about the far future being better than AMF is not going to do much to convince people. Most EAs will have already heard the AMF vs. AI comparison, and those sorts of arguments will not be new or persuasive to AMF supporters, while doing nothing to compare the cause to its real competition, AI. Some cause areas might be amenable to multiple comparisons (bio-risk could be made as a far future case compared to AI, or as a direct DALYs-averted case compared to AMF), but in any case, try to compare it to the cause that contains the sorts of people who are most likely to find your new proposed cause high impact.

4) Compare it numerically

Effective altruists are a quantitative bunch, and numerical comparisons are basically necessary for seriously comparing the good done in different cause areas and interventions. There are a lot of different ways to do this, but a safe bet would be a cost-effectiveness analysis in a spreadsheet or Guesstimate model. As mentioned above, depending on the most relevant cause you are comparing to, you will generally want to model things in that context. That would generally mean DALYs or cost per life saved for global poverty, animal DALYs for animals, or percentage chance of affecting long-term society for the far future. Cross-comparing metrics is a useful blog post in and of itself, but it is not going to be best presented while simultaneously presenting a new cause area. This leads well into my next point, but first, below is a minimal sketch of the kind of cost-per-DALY comparison I mean.
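Every number in this sketch is a placeholder I made up, not a real estimate for any charity or program; the point is only to show a hypothetical new intervention and a hypothetical GiveWell-style benchmark in the same units, the kind of calculation you could equally do in a spreadsheet or Guesstimate:

```python
# Minimal cost-effectiveness sketch with placeholder numbers, comparing a
# hypothetical new intervention against a GiveWell-style benchmark on
# cost per DALY averted.

def cost_per_daly(total_cost, people_reached, dalys_averted_per_person):
    """Cost per DALY averted = total cost / total DALYs averted."""
    total_dalys = people_reached * dalys_averted_per_person
    return total_cost / total_dalys

# Hypothetical new intervention: a CBT phone app program (numbers made up).
new_intervention = cost_per_daly(
    total_cost=500_000,            # program budget
    people_reached=20_000,         # users completing the program
    dalys_averted_per_person=0.2,  # assumed effect size
)

# Benchmark: a GiveWell-style top charity (numbers made up, not GiveWell's).
benchmark = cost_per_daly(
    total_cost=1_000_000,
    people_reached=300_000,
    dalys_averted_per_person=0.1,
)

print(f"New intervention: ${new_intervention:.0f} per DALY averted")
print(f"Benchmark:        ${benchmark:.0f} per DALY averted")
```

The value of a model like this is not the point estimates themselves, but that both options end up in the same units, so readers can see exactly which assumptions drive the comparison and argue with them directly.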

5) Focus on one change at a time

Often when people present new cause areas they come with a lot of other proposed changes. These could be ethical (e.g., we should hold X view on population ethics), epistemic (e.g., we should value historical evidence more), or logistical (e.g., we should use this CEA software even though it is harder to read for beginners). As mentioned above, all of these might be worthwhile changes for the EA movement to make, but if they are conflated with a suggested cause, I have generally seen people dismiss the cause because of the other claims associated with it. For example: "Only negative-leaning utilitarians think cause X is important, and I am not negative-leaning." This often happens even with causes that have a very strong case under fairly traditional EA standards of evidence, ethics, etc. If the cause area as a whole relies on an ethical or other assumption to be competitive, I would generally recommend writing about that assumption specifically before pitching a cause or intervention that relies on it.

6) Use equal rigour

Not only does a new cause need to be compared; it ideally needs to be compared with equal rigour, at least as much as is possible. It is easy to point out flaws in one charity or cause area and only highlight the benefits of another, but without comparing them with the same level of rigour, the numbers will be useless next to each other. To use a clear example of this: I have seen bus ads that claim to save a life for $1, and yet I still donate to GiveWell charities that claim to save a life for $3,000. This is mainly because the calculations were done in completely different ways, even if both were put into a dollars-per-life-saved metric at the end. I expect that if the $1 charity were evaluated using GiveWell's methodology, its cost-effectiveness would rapidly decrease. Likewise, if a cause area is presented with very optimistic estimates, it is hard to take the endline conclusion seriously, much like the bus ad.

This is an easy one to say but very hard to do in practice. The best way I have found is to try to think, "How would GiveWell (or ACE, etc.) model this?", and to follow those principles. Another great way is to get an EA or two whom you respect and who are not sold on your cause area to look over your numbers and suggest changes. People will suggest changes on almost any model, but if it is too far off a realistic number, many people just will not bother commenting on all the things that need changing. Lastly, another thing to keep in mind is that logistical costs are easy to forget. Product X may only cost $1,000 and save a life's worth of DALYs, but what about shipping costs, staff overhead, government permissions, etc.? Underestimating these often significant costs is a common reason why CEAs get worse as people investigate deeper. A toy example of this effect is sketched below.
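As a purely hypothetical illustration (every figure here is a placeholder, not an estimate for any real charity), here is how those easy-to-forget costs can move a headline number:

```python
# Toy example (placeholder numbers) of how easily-forgotten logistical costs
# change a naive cost-per-life-saved estimate.

product_cost = 1_000          # headline cost of the product per life saved
shipping = 300                # delivery and distribution (assumed)
staff_overhead = 450          # staff time, management, fundraising (assumed)
permissions_and_admin = 250   # government permissions, compliance (assumed)

naive_estimate = product_cost
fuller_estimate = product_cost + shipping + staff_overhead + permissions_and_admin

print(f"Naive estimate:  ${naive_estimate:,} per life saved")
print(f"Fuller estimate: ${fuller_estimate:,} per life saved")  # double the headline figure
```

Even in this generous toy case the fuller estimate is double the headline figure, which is the direction most CEAs move as they get investigated more deeply.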

7) Have a summary at the top of a more in-depth review

Particularly for long posts, having a summary at the top with the strongest arguments and endline conclusions will make it a lot easier for people to know whether they should commit to reading the whole post, as well as allowing engagement from people who do not have time to dig into all the details of the full post.

Why bother pitching a new cause within EA?

Following all these steps is a lot of work, and that energy and time could instead be put into furthering the cause directly, or into earning money and donating to it. Despite this, I think in almost all cases it is worth presenting a new cause area to EA if it could possibly be competitive. The EA community directs large amounts of money, both directly through earning to give and indirectly by influencing large foundations. Historically, very underfunded causes like AI x-risk and farm animal rights have both massively benefited from EA financial support. In addition, the EA movement directs talent towards high-impact cause areas: new charities are founded, Ivy League graduates apply for jobs, and volunteer research is done in areas that are seen as high impact. Even if a well-written cause report takes 20 hours or more to put together, the benefits can be much larger if even a small percentage of the EA community is convinced the cause area is worthwhile.