EA Survey 2019 Series: Cause Prioritization

Summary

  • Global Poverty remains the most popular single cause in our sample as a whole.

  • When pressed to choose only one of the traditional broad cause areas of EA (Global Poverty, Animal Welfare, Meta, Long Term Future, Other), the Long Term Future/Catastrophic and Existential Risk Reduction is the most popular (41%).

  • 42% of EAs have changed their cause area focus since they joined the movement.

  • A majority (57%) of those who changed cause moved away from Global Poverty, and a majority (54%) moved towards the Long Term Future/Catastrophic and Existential Risk Reduction.

  • There are many differences in cause prioritization across groups. On the whole, more involved groups appear to prioritise Global Poverty and Climate Change less and longtermist causes more.

  • One should not infer too strong a relationship between levels of involvement in EA and cause prioritization, both because we lack perfect instruments for measuring cause prioritization and because some influences on cause prioritization may not be captured in the survey data.

This post is the second in Rethink Charity’s series on the EA Survey 2019, which provides an annual snapshot of the EA community. In this report, we explore how EAs in the survey prioritize cause areas, how they have changed cause preferences since they joined EA, and differences according to demographics and group membership. In the first post we explored the characteristics and tendencies of EAs captured in the survey. In future posts, we will explore how these EAs first heard of and got involved in effective altruism, their donation patterns, geographic distribution and their career paths, among other topics. We are open to requests from the community for analyses they would like to see done, albeit with no guarantee that we can do them.

Introduction

Peter Singer recently cautioned against EA having too narrow a focus, specifically highlighting that making existential risk the public face of EA could threaten a broad-based movement. How many and which causes to prioritize is a continual topic of discussion in the EA community. On the EA Forum there have been many discussions about the relative cost-effectiveness of global poverty compared to animal welfare and climate change, the relative emphasis on meta versus direct work, longtermism, and the search for Cause X, among others. In this year’s survey data we can explore how much priority EAs think a given cause should receive and how this differs by levels of involvement in EA, if at all. This analysis draws mainly on three key questions in the survey: fine-grained cause areas, broad cause areas, and cause switching. These questions are included in the appendix.

Top causes

Fine-grained cause areas

The simplest (though crudest) way to analyse these responses is to look at the share of “top priority” responses for each of the fine-grained cause areas. People could rate more than one area as the “top priority”. This means the total number of responses may exceed the number of respondents, but conceptually it is reasonable to think EAs can hold multiple top priorities and engage in worldview diversification. In the figure below we show what percent of all “top priority” responses went to a given cause area. 1,687 respondents chose at least 1 top priority cause and 278 (16.5%) of these selected more than one top priority cause.
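For readers who want to reproduce this kind of breakdown from raw data, below is a minimal sketch of how a cause's share of all “top priority” responses could be computed when respondents may flag several causes. The file name and column layout are hypothetical, not the survey team's actual pipeline.

```python
# Minimal sketch (hypothetical file and column layout): each cause's share of
# all "top priority" responses when respondents may mark more than one cause.
import pandas as pd

# One row per respondent, one column per fine-grained cause; cell values are
# the scale labels listed in the appendix.
ratings = pd.read_csv("ea_survey_2019_cause_ratings.csv")

TOP = "This cause should be the top priority (Please choose one)"
is_top = ratings.eq(TOP)                      # respondent x cause boolean matrix

top_counts = is_top.sum()                     # "top priority" responses per cause
share = top_counts / top_counts.sum()         # denominator is responses, not respondents

n_any = int((is_top.sum(axis=1) >= 1).sum())  # respondents with at least one top priority
n_multi = int((is_top.sum(axis=1) > 1).sum()) # respondents with more than one

print(share.sort_values(ascending=False).round(3))
print(n_any, "respondents chose at least one top priority;", n_multi, "chose more than one")
```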


As in previous years (1,2), Global Poverty was the largest single cause out of the options presented. Climate Change received the second largest number of top priority responses, followed by AI Risk. This is a reversal from 2018’s results, but the difference between Climate Change and AI Risk prioritization does not seem very substantial. Without longitudinal panel data it is unclear if this is due to under/oversampling, veteran EAs changing their preferences, or a change in the composition of preferences among EAs due to factors like recruiting more EAs who prioritize Climate Change. Mental Health and Nuclear Security attracted the fewest responses indicating that they should be the top priority, as in 2018’s survey.

Despite some limitations we noted last year, we again combine AI Risk, Biosecurity, Nuclear Security and Existential Risk (other) together into one Long Term Future category. When doing so, the gap between Global Poverty and Long Term Future type causes is substantially reduced, such that Long Term Future (combined) is the second most popular top priority cause.[1]

Broad cause areas

Following through on our stated intention last year to get better insight into the issues arising from offering more fine-grained cause options, we asked about Long Term Future cause prioritization as a whole, alongside the other traditional broad cause area options (Global Poverty, Animal Welfare, Meta, Other). When pressed to choose only one top cause, a plurality (40.8%) chose the Long Term Future/Catastrophic and Existential Risk Reduction (LTF).[2]

We can compare the split between the fine-grained and forced choice questions. We look at how those who selected a “top priority” cause when able to choose multiple fine-grained causes changed their selection when they were forced to choose just one of the broad cause areas.

As might be expected, the majority (87%) of those who selected “top priority” for Global Poverty then selected Global Poverty in the forced choice question, 80% of those preferring Animal Welfare chose Animal Welfare, and 60% of Meta top priority supporters chose Meta. Those who gave top priority to Cause Prioritization broke almost equally towards LTF (35%) and Meta (36%), and those prioritising Mental Health favoured Global Poverty the most (37%). Pluralities of the other causes broke towards LTF: AI Risk 85%, Other X-risk 77%, Nuclear Security 60%, Biosecurity 55%, Climate Change 46%, Other Cause 38%, Improving Rationality/Decision Making/Science 33%.
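The comparison above is essentially a row-normalised cross-tabulation. A minimal sketch of that computation is below; for simplicity it assumes one fine-grained top-priority cause per respondent, and the column names are hypothetical.

```python
# Sketch of the fine-grained -> forced-choice comparison: for each fine-grained
# "top priority" cause, what share of those respondents picked each broad cause
# when forced to choose only one. Column names are hypothetical.
import pandas as pd

df = pd.read_csv("ea_survey_2019.csv")
flows = pd.crosstab(df["top_priority_cause"],    # fine-grained top priority
                    df["forced_choice_cause"],   # one of the five broad areas
                    normalize="index")           # shares within each fine-grained cause
print((flows * 100).round(1))
```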

This is more clearly seen in the ribbon plot where only the largest number of responses moving from each category is highlighted in orange.[3] The majority of those who gave Global Poverty “top priority” (on the left) also then picked Global Poverty when forced to choose only one cause (on the right). One can also clearly see the flow from many of the other cause areas towards Long Term Future.

One might have expected that those who selected one of the fine-grained top priority causes not specifically represented by one of the broad cause area labels (such as Mental Health) would have then selected “Other”. However, no plurality of any cause area opted for “Other” when forced to choose only one. This could imply latent second choice preferences, or simply that respondents slotted their preferred specific cause into one of these broad categories (Climate Change into Long Term Future or Mental Health into Global Poverty). This may also be helpful in understanding whom EAs perceive the primary beneficiaries of a cause to be: those alive today or those in the far future. For example, it is noteworthy that a plurality (nearing a majority at 46%) of those who chose Climate Change then opted for Long Term Future/Catastrophic and Existential Risk Reduction.

Full range of responses

Looking at only “top priority” cause selections misses out on a large chunk of the data and opinion within the EA survey. The full range of responses suggests a more even distribution of interest in causes across EAs.

In the Likert graph below, causes are listed in descending order based on the average of their full scale rating.[4] Most listed causes received substantial support, with no cause having fewer than 59% of respondents indicate that it should receive at least “significant resources”. The top five causes, as ranked by EAs in the survey, are Global Poverty, Cause Prioritization, AI Risk, Climate Change, and Biosecurity. These all received 44% or more of responses indicating that they are near-top or top priorities. It appears then that there is considerable overlap between the top cause areas prioritized by EAs in the survey and those areas highlighted by a recent survey of some EA organization leaders [5] and 80,000 Hours’ problem profiles page.[6]

Most causes were judged by ~34-49% of respondents as being the “top priority” or “near the top priority”. The exceptions were Nuclear Security and Mental Health, which only ~22-29% of respondents ranked that highly, and Global Poverty, which 62% of respondents rated as a top or near-top priority.

Write-in cause areas

The fine-grained cause rating question gave respondents the option to write in an “other cause”. ~30% of these were reiterations of cause areas already listed (the most common responses falling under improving rationality/decision making). Some of these sought to make clearer distinctions within these broader categories. For example, 10% demarcated Global Health from Global Poverty and 4% wanted to specify Wild Animal Suffering as distinct from Animal Welfare/Rights. The other most popular causes mentioned were S-Risks (7.8%), international relations or great power conflict (6.5%), and EA movement building (5.5%). Much smaller shares mentioned economic growth, anti-aging, environmental degradation, and inequality (racial or gender).

Cause preference change

This year we asked respondents whether they prioritize a different cause now than when they first became involved with EA. Of the 1,944 who answered this question, 42% had changed their cause area focus.

Those who changed cause had been in EA on average 3.6 years, compared to 3.2 years for those who didn’t change. Those who changed cause are, on average, 4 years younger than those who didn’t change, or were 4 years younger when they joined EA (see table below).[7] Most EAs did not change cause area irrespective of the year they joined.[8]
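Below is a minimal sketch of the kind of logistic regression described in footnote [7], regressing whether a respondent changed cause on years in EA (age, or age when joining, could be substituted). The file and column names are hypothetical, and this is not the authors' exact specification.

```python
# Sketch: logistic regression of cause change on years in EA (hypothetical columns).
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("ea_survey_2019.csv").dropna(subset=["changed_cause", "years_in_ea"])

y = df["changed_cause"].astype(int)          # 1 = changed cause, 0 = did not
X = sm.add_constant(df[["years_in_ea"]].astype(float))

model = sm.Logit(y, X).fit(disp=False)
print(model.summary())
# exp(coef) gives the odds ratio of having changed cause per additional year in EA.
print("Odds ratio per year in EA:", np.exp(model.params["years_in_ea"]).round(2))
```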

784 respondents detailed which cause they originally prioritised that differs from the one they now prioritize. The most common cause that a respondent reported changing from was Global Poverty (57% of hand-coded responses), followed by Climate Change (11%). The question did not ask respondents to specify which cause they prioritise today (though some did write this). In our data, there are two potential ways to see how respondents’ cause preferences have changed: comparing to “top priority” causes or comparing to the forced choice cause selection. Analyzing this data and respondents’ current cause preferences appears to confirm a previously identified trend of EAs moving away from Global Poverty and towards AI.

In the first approach, we compare reported past focus area to the cause that was given “top priority” in the scaled question (excluding the 16.5% who gave more than one top priority). 572 (71%) of the 810 who reported changing their cause also gave only one top priority cause and reported in detail their previous top cause. The picture is mainly one of those entering EA prioritizing Global Poverty and then spreading out to one of the other cause areas in a rather diffuse manner. Reducing risks from AI is the most common top priority today, but by no means the majority, among those who prioritized Global Poverty or Animal Welfare when they joined EA. Due to the low number of respondents who selected many of the other fine-grained cause options as their previous focus area, many are bunched together in the ribbon plot below as “all other causes”.

In the second approach, we look at the forced choice questions. Note this does not necessarily reflect what a given respondent does prioritize now, but only what they would if forced to pick one from the given list. 84% of those who chose Global Poverty in the forced choice question had not changed cause area at all. A majority of those who chose Long Term Future (55%) or Meta (65%) for the forced choice question have changed cause since they joined. 53% of those who changed cause chose LTF in the forced choice question, and among the causes EAs prioritised when they joined, pluralities chose LTF when forced. Due to the low number of respondents who selected many of the fine-grained cause options, many are bunched together in the ribbon plot below as “all other causes”.

The survey included a question asking EAs to report their level of engagement in EA on a scale of No Engagement to High Engagement.[9] Those who claim to be highly engaged in EA appear more likely to have switched causes than those less engaged, and members of the EA groups we asked about tend to report higher rates of cause changing than non-members.[10]
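Footnote [10] describes checking whether group membership and cause switching are independent with chi-squared tests. A minimal sketch of that kind of check is below; the membership and cause-change column names are hypothetical.

```python
# Sketch: chi-squared test of independence between membership in one group
# (here, hypothetically, the EA Forum) and having changed cause.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("ea_survey_2019.csv")
table = pd.crosstab(df["ea_forum_member"], df["changed_cause"])  # 2x2 contingency table

chi2, p, dof, expected = chi2_contingency(table)
print(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```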

Group membership

We examined differences in cause prioritization across various groups within our sample. Overall, a greater preference for typical longtermist causes and a weaker preference for Global Poverty and Climate Change appears among respondents with indicators of being more involved in EA. We attempt to control for confounders in the regressions in the Predictors section below, which are often more sensitive than simple bivariate tests (lumping the data into groups and testing the difference in means).

Given that the EA Survey appears to receive responses from a fairly large number of people who are fairly new to the movement and/or not involved in many core EA groups or activities, we again think it is interesting to examine the responses of a narrower population who were plausibly more actively engaged and informed in core EA discussions. It is important to note that the EA population likely covers a much wider range of people than these highly engaged EAs.

We converted each of these options into a numerical point on a five-point scale (ranging from (1) ‘I do not think any resources should be devoted to this cause’, to (5) ‘This cause should be the top priority’).[11] Based on our findings last year, we expected members of LessWrong and the EA Forum to rank AI Risk highly and Global Poverty and Climate Change low, and to do so more than non-members. Indeed, in the table below we do see this trend, with the cause receiving the highest rating from each group highlighted in dark green, and the cause receiving the lowest rating highlighted in dark red. EAs who are not members of any of the groups (LessWrong, EA Forum, EA Facebook, GWWC, Local EA group) give a higher rating to Climate Change and Global Poverty than group members. We again see a similar pattern between EA Forum members and the broader population of the EA Facebook group, the latter of which comprised ~50% of the sample and is less associated with in-depth, advanced content than the EA Forum.
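A minimal sketch of this conversion and the group-mean table is below. The scale labels follow the appendix wording; the file, cause columns, and membership column are hypothetical.

```python
# Sketch: map the five scale labels to 1-5, then average ratings by group membership.
import pandas as pd

scale = {
    "I do not think any resources should be devoted to this cause": 1,
    "I do not think this is a priority, but should receive some resources": 2,
    "This cause deserves significant resources but less than the top priorities": 3,
    "This cause should be a near-top priority": 4,
    "This cause should be the top priority (Please choose one)": 5,
}

df = pd.read_csv("ea_survey_2019.csv")
cause_cols = ["global_poverty", "ai_risk", "climate_change"]        # hypothetical subset
numeric = df[cause_cols].apply(lambda col: col.map(scale))          # "Not considered/Not sure" -> NaN

# Mean rating per cause, split by (for example) EA Forum membership
print(numeric.groupby(df["ea_forum_member"]).mean().round(2))
```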

We can also see that members of the groups highlighted above favour LTF more than the EA survey population as a whole, and more than those who are not members of any of those groups. We return to the significance of group membership in the associations section below.

Mirroring this, we see that those who rated themselves as highly engaged give higher mean ratings to causes typically considered longtermist/X-risk, and those with self-reported lower engagement give these causes lower priority. It is interesting to note that “No engagement” EAs appear to be more favourable towards LTF than mildly or moderately engaged EAs, but report among the lowest mean rankings of fine-grained causes typically associated with LTF, such as AI Risk.

Those claiming to be veg*n (vegetarian or vegan) appear to prioritize Animal Welfare more than meat eaters and other groups.[12] Men appear to lean more towards Other X-risks and AI Risk than women, and women tend to lean more towards Global Poverty, Mental Health, Animal Welfare, and Climate Change.[13] The figure below shows the difference between male and female respondents in mean cause ratings. Men also appear to skew more towards LTF when forced to choose, while a plurality of women chose Global Poverty.[14]

Predictors

Controlling for the possible confounders we have highlighted independently in the descriptives above is an important way to reduce the chance of drawing too strong an inference from any one finding. Like last year, we performed regressions to tease out any associations suggested by the descriptive analysis. We did not have reasons to employ a very different model to the one used in 2018, so we mostly replicate it,[15] despite the low amount of variance that it explains (2% to 15%) and problems of multicollinearity between many hypothesized predictors. More details on the models and analysis we use can be found here. We think these stand mostly as an important reminder that cause prioritization is a multifaceted concept that cannot be boiled down simply to membership of a certain group.

There appears to be a general trend in the data of a greater preference for AI Risk/LTF causes and a lesser preference for Global Poverty and Climate Change across LessWrong members, EA Forum members, and those with 80,000 Hours coaching. However, even in many cases of statistically surprising associations, the confidence intervals around these effects span a range of both plausibly substantial and practically insignificant values. It may be tempting to interpret the data below as representing a general trend for EAs to shift in the direction of AI Risk/LTF, and away from Global Poverty and Climate Change the more involved in EA discussions (online) they become. However, this interpretation should, of course, be heavily caveated given its post hoc nature and the weakness of the model overall.

Associations with fine-grained cause areas

Running ordinal regressions on the five-point scales for each cause area points towards many of the same associations as in 2018. These models explained only 2-11% of variance. It seemed very likely that at least some variables in the models would violate the proportional odds assumption, meaning the strength of any associations would vary across the cause rating scale. It was not possible to identify which variables violated this assumption or run an auto-fitted model which relaxed this assumption appropriately. We therefore can only report the results of a constrained model which likely overestimates the consistency of any associational strength. This is in addition to issues of multicollinearity that tend to inflate estimates.
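For illustration, below is a minimal sketch of a proportional-odds ordinal regression of one cause's 1-5 rating on group-membership predictors, using statsmodels' OrderedModel. It is not the authors' exact specification, the column names are hypothetical, and, as noted above, the proportional-odds assumption may not hold for every predictor.

```python
# Sketch: proportional-odds (ordinal logit) model of a cause rating on predictors.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("ea_survey_2019.csv").dropna(
    subset=["ai_risk_rating", "lesswrong_member", "ea_forum_member"])

# Ordered outcome: the 1-5 rating for AI Risk (hypothetical column)
endog = pd.Categorical(df["ai_risk_rating"], categories=[1, 2, 3, 4, 5], ordered=True)
exog = df[["lesswrong_member", "ea_forum_member"]].astype(float)

res = OrderedModel(endog, exog, distr="logit").fit(method="bfgs", disp=False)
print(res.summary())

# Under proportional odds, exp(coef) is the odds ratio of being in a higher
# rating category for members versus non-members.
print(np.exp(res.params[exog.columns]).round(2))
```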

How substantial are these (likely overestimated) associations? We highlight here only the largest detected effects in our data (odds ratio close to or above 2 times greater) that would be surprising to see, if there were no associations in reality and we accepted being wrong 5% of the time in the long run.

For a member of LessWrong, the odds of giving AI Risk top priority versus the combined lower priority categories are 2.5 times greater (95% confidence interval: 1.8 − 3.4) than for non-members, given that all of the other variables in the model are held constant. Members of LessWrong are also 2 times (1.5 − 2.8) more likely to put Global Poverty in a lower category than non-members. EA Forum members are 2 times (1.5 − 2.7) less likely to prioritise Global Poverty than non-members. Those getting into EA via TLYCS are 2.3 (1.6 − 3.4) times more likely to put Global Poverty in a higher category and 2.2 (1.5 − 3.2) times more likely to rank Animal Welfare highly, than those not getting into EA this way. Those who got into EA via 80,000 Hours are 1.9 times (1.5 − 2.4) more likely to prioritize AI. Those not on the Left or Centre Left are 2.8 times (2.1 − 3.8) less likely to prioritize Climate Change, and males are 1.9 times (1.5 − 2.4) less likely to prioritize Climate Change. Veg*ns are 7 times (5.5 − 9) more likely to prioritize Animal Welfare.

Associations with forced choice causes

We also sought to see if these same associations appeared in the forced choice cause data. The multinomial regression of the data points to some associations that would be surprising, if we assumed there was no relationship in reality and accepted being wrong only 5% of the time in the long run. However, the model explains only ~15% of the variation in cause selection. Note that the substantive association of most of these factors with the probability of choosing a certain cause category is quite small. We here highlight just the largest average marginal effects.

A LessWrong member’s probability of choosing the Long Term Future is 19.8 percentage points higher (95% confidence interval: 13-27 percentage points) than a non-member’s. Having done 80,000 Hours coaching is associated with a 15 percentage point (7-23) higher probability of choosing LTF. Being a member of the EA Forum or LessWrong is associated with a 17 percentage point (9-26) decrease in the probability of choosing Global Poverty. Becoming involved in EA via GWWC or TLYCS is positively associated with choosing Global Poverty (10 percentage points (5-15) and 16 percentage points (7-25), respectively). TLYCS is also associated with a 13 percentage point (3-23) lower probability of choosing Meta. Finally, being veg*n is associated with a 32 percentage point (24-41) increase in the probability of choosing Animal Welfare/Rights, and a 17 percentage point (12-22) lower probability of choosing Global Poverty. Most other factors are associated with effects that are either substantially or statistically insignificant, or both.
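Below is a minimal sketch of a multinomial logit for the forced-choice cause with average marginal effects, in the spirit of the results quoted above. It is not the authors' exact model, and the column names are hypothetical.

```python
# Sketch: multinomial logit of the forced-choice broad cause on predictors,
# reported as average marginal effects (percentage-point changes / 100).
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("ea_survey_2019.csv").dropna(
    subset=["forced_choice_cause", "lesswrong_member", "vegn", "male"])

cause = df["forced_choice_cause"].astype("category")
exog = sm.add_constant(df[["lesswrong_member", "vegn", "male"]].astype(float))

res = sm.MNLogit(cause.cat.codes, exog).fit(disp=False)
print(list(cause.cat.categories))   # which integer code corresponds to which broad cause

# Average marginal effects: change in predicted probability of each broad cause
# associated with each predictor, averaged over the sample.
print(res.get_margeff().summary())
```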

Multiple Correspondence Analysis

We used multiple correspondence analysis to look for patterns in cause prioritization.[16] In general, activity, organization membership and involvement were most strongly associated with which causes individuals prioritized. We started with a feature set that included variables related to being involved, membership, field of study, activity, self-identified engagement and identifiers such as gender, education level, politics and whether an individual was a student. We removed the “Other” category of cause prioritization because of the low response rate, respondents who gave the “not considered/not sure” option for any cause, and respondents with NAs in any of the predictor variables. In addition, we removed the “Other” category of membership because a small number of respondents (the Dank_memes group) had an overwhelming effect on the analysis. Our final sample size was 1,307 respondents. We created a combined cause ranking of three levels (top, medium, and low) for each cause.
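For readers unfamiliar with the technique, below is a minimal sketch of running a multiple correspondence analysis on categorical survey features using the third-party `prince` library. This is one possible tool, not necessarily the one used for this section; the column names are hypothetical, and prince's attributes for explained inertia vary between versions.

```python
# Sketch: MCA on categorical survey features (hypothetical columns) with prince.
import pandas as pd
import prince

features = ["engagement", "ea_forum_member", "lesswrong_member",
            "diet", "ai_risk_rank3", "climate_change_rank3"]   # hypothetical 3-level rankings
df = pd.read_csv("ea_survey_2019.csv")[features].dropna().astype("category")

mca = prince.MCA(n_components=2).fit(df)
coords = mca.transform(df)   # respondent coordinates on the first two dimensions
print(coords.head())
```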

The resulting analysis explained about 14% of the variance in the first 2 dimensions, which is a typical amount for this type of multivariate analysis. An examination of the most influential factors indicated that those related to activity were predominant.

For AI Risk, we see that a top ranking of this cause is related to high levels of activity in the EA community (where variation along the first axis is most important). We can contrast this pattern with that of Climate Change, where individuals less active in the EA community are more likely to rank this as a top cause.

We attempted to gain some greater insight by using the self-reported EA engagement scale as a proxy for activity. With the activity variables removed, the analysis explained about 11% of the variation. Important explanatory variables were related to membership, involvement and diet.

However, the basic pattern of degree of involvement in the EA community as a predictor of ranking of AI Risk vs Climate Change did not alter.

The ranking of Global Poverty was similar to Climate Change in its correlation with less involvement in the EA community (not shown), while that of Cause Prioritization, Meta, and other existential risks (not shown) were similar to AI Risk (below).

In addition, Animal Welfare was ranked higher by active members of the EA community than by less active members, and was further related to dietary choices.

Predictors of all other causes were quite poorly resolved by this analysis.

Credits

The annual EA Survey is a project of Rethink Charity with analysis and commentary from researchers at Rethink Priorities.

This essay was written by Neil Dullaghan with contributions from Kim Cuddington. Thanks to David Moss and Peter Hurford for comments.

If you like our work, please consider subscribing to our newsletter. You can see all our work to date here.

Other articles in the EA Survey 2019 Series can be found here.

Appendix

Respondents were asked to indicate how far they prioritised a range of fine-grained cause areas from the following options:

  • This cause should be the top priority (Please choose one)

  • This cause should be a near-top priority

  • This cause deserves significant resources but less than the top priorities

  • I do not think this is a priority, but should receive some resources

  • I do not think any resources should be devoted to this cause

  • Not considered / Not sure

2,033 out of 2,513 (81%) self-identified EAs in our sample gave a response regarding at least one cause area. This question was a multi-select question which allowed respondents to select multiple top causes (though it included a prompt to limit this somewhat). In our analyses below we exclude “Not considered/Not sure” responses.

In this year’s survey we added a question that asked respondents to pick only one top cause from among the traditional broader division between EA cause areas:

  • Global Poverty

  • Animal Welfare

  • Meta

  • Long Term Future / Catastrophic and Existential Risk Reduction

  • Other

2,024 out of 2,513 (81%) respondents gave an answer to this forced choice question, including 99% of those who answered the previous question.

We also asked: “When you first became involved with EA, was the cause area you most prioritized different from the one you do now? If so, which was it?” 1,944 (77%) respondents selected one of the two choices:

  • I most prioritized the same cause that I do now

  • I most prioritized a different cause (please specify which)



  1. The percentages in the graph differ very slightly from the previous one because any response for one of the long term future causes was treated as a single response, so the total number of “top priority” responses is lower. ↩︎

  2. 490 respondents did not offer a response to the forced choice question, but they also mostly did not offer a top priority in the multiple cause question. ↩︎

  3. It should be kept in mind that EAs could select multiple top priority causes in the first instance, so there is some double-counting at work here. For example, an individual could have rated both AI Risk and Nuclear Security as “top priorities” and, when forced to pick only one, opted for LTF, meaning there are two datapoints leading to LTF. However, this only affects the aesthetic of the ribbon plot and not the overall trends. ↩︎

  4. We omitted those who responded “Not Sure/Not Considered”. ↩︎

  5. Note that there are some concerns expressed in the comments section of that post that this survey did not contain a representative sample of EA leaders. Also, it asked “What (rough) percentage of resources should the EA community devote to the following areas over the next five years?” ↩︎

  6. Their cause area of “Navigating emerging technologies” highlights advanced artificial intelligence and synthetic biology, and their “Research and capacity building” section highlights “improving institutional decision making and building the ‘effective altruism’ community”. Meanwhile, their latest “Our list of urgent global problems” page highlights AI Risk and Biosecurity at the top, but also Nuclear Security and Climate Change. ↩︎

  7. Logistic regressions of each age or years-in-EA factor against changing causes suggest a 13% increase in the probability of having a different cause for each year in EA, and roughly a 10% increase for being 10 years older, or 10 years older when joining EA. The nominal (“uncorrected”) p-values were 0.004, 0.0001, and 0.0001 respectively, but these have not been “corrected” for multiple-hypothesis testing. ↩︎

  8. Except those who joined in 2011 and 2014 (52% and 50% changed, respectively); however, the differences are quite small and there were very few EAs from these years. ↩︎

  9. (1) No engagement: I’ve heard of effective altruism, but do not engage with effective altruism content or ideas at all
    (2) Mild engagement: I’ve engaged with a few articles, videos, podcasts, discussions, events on effective altruism (e.g. reading Doing Good Better or spending ~5 hours on the website of 80,000 Hours)
    (3) Moderate engagement: I’ve engaged with multiple articles, videos, podcasts, discussions, or events on effective altruism (e.g. subscribing to the 80,000 Hours podcast or attending regular events at a local group). I sometimes consider the principles of effective altruism when I make decisions about my career or charitable donations.
    (4) Considerable engagement: I’ve engaged extensively with effective altruism content (e.g. attending an EA Global conference, applying for career coaching, or organizing an EA meetup). I often consider the principles of effective altruism when I make decisions about my career or charitable donations.
    (5) High engagement: I am heavily involved in the effective altruism community, perhaps helping to lead an EA group or working at an EA-aligned organization. I make heavy use of the principles of effective altruism when I make decisions about my career or charitable donations. ↩︎

  10. A simple logistic regression suggests those in higher engagement levels are 1.9 times (95% CIs: 1.8 − 2.1) more likely to have changed cause than those in the lower levels of engagement combined (p<0.0001). Chi-squared tests for each group suggest that we can act as if group membership and cause preference changing are not independent and not be wrong more than 5% of the time in the long run, assuming a null hypothesis that there is no association. Chi-squared is an omnibus test and only tells us something is surprising in the data, but not what is driving this surprise. ↩︎

  11. We recognise that the mean of a Likert scale as a measure of central tendency has limited meaning in interpretation. Though imperfect, it’s unclear that reporting the means is a worse solution than other options the team discussed. ↩︎

  12. Both Welch t-tests and Kruskal–Wallis tests found a surprising result (p-value < 0.0001). ↩︎

  13. Welch t-tests of gender against these scaled cause ratings have p-values of 0.003 or lower, so we can act as if the null hypothesis of no difference between genders is false, and we would not be wrong more than 5% of the time in the long run. It should be noted that this is not an ideal significance test for ordinal data; Kruskal–Wallis tests suggest significant associations for these same causes, with p-values of 0.002 or lower. ↩︎

  14. A chi-squared test suggests a surprising association: Pearson chi2(4) = 14.6545, Pr = 0.005. ↩︎

  15. We include the data from this year’s question on self-reported engagement in EA, and omit the subject studied data due to concerns that any selections would be due to cherry-picking or some other bias. ↩︎

  16. Kim Cuddington performed the analysis and writing for this section. ↩︎