EA Survey 2018 Series: Cause Selections


  • EAs rate a wide variety of different causes as requiring “significant resources”.

  • Global Poverty remains the most popular single cause in our sample as a whole.

  • There are substantial differences in cause prioritisation across groups. On the whole, more involved groups appear to prioritise Global Poverty and Climate Change less and AI and Long-Term Future causes more.


Respondents were asked to indicate how far they prioritised different causes from the following options:

  • This cause should be the top priority (Please choose one)

  • This cause should be a near-top priority

  • This cause deserves significant resources but less than the top priorities

  • I do not think this is a priority, but should receive some resources

  • I do not think any resources should be devoted to this cause

  • Not considered / Not sure

2455 out of 2601 (94%) self-identified EAs in our sample gave a response regarding at least one cause.

Top Causes

The simplest (though crudest) way to analyse these responses is to look at the number of ‘top priority’ responses for each cause.

As in previous years, Global Poverty was the largest single cause out of the options presented.

See the graph from 2017 (with slightly different categories) below for a rough comparison.

Note: images can be viewed in full size if opened in a new tab.

These analyses do not, however, correspond precisely to the traditional broader division between EA cause areas (Global Poverty, Existential Risk/Long-Term Future etc.), due to the inclusion of multiple, more fine-grained categories within Long-Term Future, which in essence ‘split the vote.’

In principle, one could account for this vote-splitting simply by combining the responses for all ‘Existential Risk/Catastrophic Risk/Long-Term Future’ causes. Since a significant number of respondents (403, 16%) selected multiple causes as ‘top priority’, it would also be necessary to count each respondent selecting at least one cause in the general category as a single response for that category.
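The combination rule described here can be sketched in a few lines of Python; the respondent selections and the category membership below are hypothetical stand-ins, not survey data.

```python
# Hypothetical "top priority" selections; respondents may pick several causes.
selections = [
    {"AI"},
    {"Global Poverty"},
    {"AI", "Biosecurity"},                   # multi-selector within the category
    {"Climate Change"},
    {"Nuclear Security", "Global Poverty"},
]

# Fine-grained causes folded into one combined category.
LONG_TERM_FUTURE = {"AI", "Biosecurity", "Nuclear Security", "Existential Risk (other)"}

def top_priority_count(selections, category):
    """Count respondents choosing at least one cause in `category` as top
    priority; a respondent picking several causes in it still counts once."""
    return sum(1 for chosen in selections if chosen & category)

print(top_priority_count(selections, LONG_TERM_FUTURE))    # 3
print(top_priority_count(selections, {"Global Poverty"}))  # 2
```

Counting respondents rather than raw responses is what prevents the multi-selectors from being double-counted when categories are merged.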

A further complication, however, is that the boundaries of the relevant broader clusters, and which responses should fit within them, are unclear. AI, Biosecurity, Nuclear Security and Existential Risk (other) seem fairly uncontroversially to fit into a rough, combined “Long-Term Future/Existential (or Catastrophic) Risk” cluster, though plausibly other “top priority” responses (e.g. improving decision-making, cause prioritisation, climate change or animal welfare) might also be motivated by Long-Term Future (if not existential risk) considerations. Similarly, at least some cause prioritisations which are nominally motivated by concern for the long-term future would perhaps not intuitively fit with this category as usually understood by EAs (for example, prioritisations of moral circle expansion or wild-animal suffering), suggesting the need for more fine-grained cause options. For future iterations of the EA Survey, to get better insight into these issues, we are considering adding a separate question asking about Long-Term Future cause prioritisation as a whole, alongside more specific cause options.

In the graph below, we combine AI, Biosecurity, Nuclear Security and Existential Risk (other) together into one Long-Term Future category. When doing so, the gap between Global Poverty and (combined) Long-Term Future is substantially reduced, though still significant.

Full Range of Responses

Looking at the full range of responses (rather than just single “top cause” selection) suggests a more even distribution of interest in causes across EAs.

In the Likert graph, causes are listed in order of the percentage of respondents rating that cause a near-top or top priority. Most listed causes received substantial support, with no cause having fewer than 50% of respondents suggesting that it should receive at least “significant resources.” Most causes were judged by 31-48% of respondents to be the “top priority” or “near the top priority” (the exceptions being Nuclear Security and Mental Health, each rated that highly by only 22% of respondents, and Global Poverty, which 65% of respondents rated as a top or near-top priority).

If we were to convert each of these options into a numerical point on a five-point scale (ranging from (1) ‘I do not think any resources should be devoted to this cause’ to (5) ‘This cause should be the top priority’), the mean ratings for all but two of the causes (Mental Health and Nuclear Security) would be between 3 and 4. Notably, the median rating for every cause was 3, except for Global Poverty, which was rated 4.
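A minimal sketch of that conversion, using only the standard library and made-up responses for a single cause:

```python
from statistics import mean, median

# The five-point coding described above; "Not considered / Not sure" is dropped.
SCALE = {
    "I do not think any resources should be devoted to this cause": 1,
    "I do not think this is a priority, but should receive some resources": 2,
    "This cause deserves significant resources but less than the top priorities": 3,
    "This cause should be a near-top priority": 4,
    "This cause should be the top priority": 5,
}

def rating_summary(responses):
    """Return (mean, median) rating for one cause, ignoring unscored responses."""
    scores = [SCALE[r] for r in responses if r in SCALE]
    return mean(scores), median(scores)

# Hypothetical responses for a single cause:
example = [
    "This cause should be the top priority",
    "This cause should be a near-top priority",
    "This cause deserves significant resources but less than the top priorities",
    "Not considered / Not sure",
]
print(rating_summary(example))  # mean 4, median 4
```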

Predictors of Cause Preference

Identifying predictors of individual cause prioritisation was substantially more difficult than identifying influences on donation data. In this section we present the results of both multinomial and ordinal regressions, as well as Multiple Correspondence Analysis. None of our models managed to account for more than 17% of the variance in cause preference data. They do, however, all appear to point to a similar set of influential factors. Any interpretation of these findings should be highly tentative.

We also present simple descriptive analyses of the causes prioritised by different groups. Of course, these analyses especially cannot be read as suggesting a causal influence on cause selection, but they do show which causes different groups (such as EA Forum members vs non-members within our sample) prioritise.


We examined differences in cause prioritisation across various groups within our sample.

Given that the EA Survey appears to receive responses from a fairly large number of people who are fairly new to the movement and/or not involved in many core EA groups or activities, we thought it would be interesting to examine the responses of a narrower population who were plausibly more actively engaged in and informed about core EA discussions. We chose initially to look at members of the EA Forum, who, at 523 people, comprised about 20% of the total 2018 sample (we are informed that the number of active members on the EA Forum is in the ballpark of ~500 people, though there are considerably more inactive members).

There were substantial differences in “top priority” cause selections between Forum members and non-members. Most strikingly, Climate Change was the third most prioritised cause in the sample as a whole, but was significantly more popular among respondents who were not Forum members, while being among the least often selected causes among Forum members. Likewise, in line with our other analyses, AI, Cause Prioritisation and Meta Charity received a much higher proportion of people selecting them as top priority among Forum members, whereas Global Poverty was selected by a much lower proportion.

Interestingly, this pattern was evident even among the broader population of the EA Facebook group, which comprised more than 50% of the sample and is less associated with in-depth, advanced content than the EA Forum. EA Facebook members had lower support for Poverty and Climate Change and higher support for AI, Cause Prioritisation and Meta Charities than non-members.

We also examined differences between respondents who were LessWrong members and those who were not. As one would expect, given the historical focus of the site, there was substantially higher support for AI among LessWrong members than non-members.

Looking at mean preferences on the 5-point scale (rather than simply “top priority” selections) across causes for these sub-groups, in the graph below, Global Poverty (and Climate Change) and AI Risks appear to follow opposite patterns: Global Poverty is most endorsed by EAs who are members of neither the EA Forum nor LessWrong and least endorsed by those who are members of both, with the converse holding for AI Risks.

We also examined differences in cause prioritisation according to gender. According to the survey data, a plurality of both men and women chose Global Poverty as the top cause. However, men and women appear to run in opposite directions on Climate Change and AI Risks: 15.7% of women chose Climate Change as top priority (making it the second most preferred cause in this group) and 9.3% chose AI Risks, while for men the pattern is reversed (7.9% and 22.6%, respectively). Similarly, looking at the mean score across the scale, the trend is the same, with about a 1-point gender gap between women’s preference for Climate Change and men’s preference for AI Risks.

The following graph shows the extent of the gender difference across each cause. The Y-axis shows the difference in mean cause preference score between women and men.

Of course, to reiterate what we said above, such descriptive differences across groups in our sample should not be taken as suggesting that gender causes differences in cause prioritisation, e.g. because of possible confounders.

As one might expect, the identification of animal welfare as a top priority was strongly associated with dietary choice. Unlike in our analysis last year, when we only discussed vegetarians and vegans as a single category, this year we can reveal a divide between vegans and others on animal welfare prioritisation. Vegans make up a large plurality of those who rank animal welfare as a top or near-top cause, while the shares who are vegetarian or meat-eating do not differ substantially. As shown in the first graph below, 76.9% of vegans rank animal welfare as a top or near-top priority, compared to only 43.7% of vegetarians and 19% of those who eat meat. Similarly, as the second graph shows, 47% of those selecting Animal Welfare as the top or near-top priority were vegans, and 27% were vegetarians.

Regressions: Top Cause Selection

To investigate potential predictors of different cause selections, we ran multinomial logit regressions on Top Priority cause choice, in order to see the relative effects of potential predictors while controlling for possible confounders. As Global Poverty is the most popular top priority, it was used as the base category in the regression.

Drawing on trends in the descriptive statistics, we looked at which EA groups respondents are members of, diet, demographics, and how and when individuals became involved in EA. (link to regression table)

Below we present the average marginal effects from substantive (effect size ±10 percentage points) and statistically significant predictors for some selected causes. While the following regression results are suggestive, the models’ ability to explain variation in outcomes was quite low (Pseudo R^2: 0.17).

LessWrong membership was associated with an increased likelihood of selecting AI Risks as the top cause, as well as a lower likelihood of ranking Climate Change and Animal Welfare as a top priority. Likewise, being a member of the EA Forum was associated with a lower likelihood of selecting Global Poverty as the top cause and a higher chance of selecting AI. This effect was even larger for those who were members of both the EA Forum and LessWrong. Those who report becoming involved in EA via GWWC were more likely to select Global Poverty as the top cause and less likely to select AI. In contrast, those who became involved via personal contact or 80,000 Hours were more likely to select AI as the top cause and less likely to select Global Poverty.

There also appears to be a general trend of a greater preference for AI Risks and a lesser preference for Global Poverty and Climate Change across LessWrong members, EA Forum members and members of both (LessWrong-EA Forum), though, other than for the effects mentioned in the first paragraph, the confidence intervals (shown on the figure above) crossed zero. It may be tempting to interpret this as representing a general trend for EAs to shift in the direction of AI Risks, and away from Global Poverty and Climate Change, the more involved in EA (online) discussions they become, though this interpretation should, of course, be heavily caveated given its post hoc nature and the weakness of the model overall.

In addition, veganism was strongly associated with a greater likelihood of selecting Animal Welfare as the top cause. It also appeared that women were more likely to select Global Poverty and Climate Change as the top cause and less likely to select AI Risk.

Regression: Full Scale

Looking only at the Top Priority cause selection misses out a large chunk of the data and opinion within the EA community. Nevertheless, running ordinal regressions on the five-point scale points towards many of the same predictors overall. For reasons of space and simplicity, only the most prominent results will be discussed. [For ordered logit models and more average marginal effects click here]

As suggested by the Top Priority-only model, indicators of being highly engaged in EA discussions, such as membership of the EA Forum, the EA Facebook Page and LessWrong, and becoming more involved via personal contacts, are associated with placing AI Risks in a higher priority category and Global Poverty and/or Climate Change in lower categories. In contrast, getting into EA through The Life You Can Save (TLYCS) and Giving What We Can (GWWC) is associated with placing Global Poverty and Climate Change higher on the scale. This also seems fairly intuitive, given the historical focus of TLYCS and GWWC on Global Poverty.

Getting into EA via 80,000 Hours also seems to be associated with rating Global Poverty lower on the scale and prioritising AI, Cause Prioritisation and other long-term future causes, along with Mental Health and Improving Rationality, more highly. In addition, receiving 80,000 Hours coaching is associated with a lower rating of Poverty and Climate Change and higher ratings of AI, Cause Prioritisation and Biosecurity. This is important given the growing influence of 80,000 Hours as a source for new EAs. Having shifted one’s career because of EA is also associated with placing AI Risks higher on the scale and with placing Global Poverty and Climate Change lower on the scale.

In most cases, when a person first heard of EA did not prove to be a substantive or significant predictor when controlling for other factors, though there were small associations between hearing about EA more recently and increased support for Climate Change and Mental Health, as well as even smaller ones for Cause Prioritisation, Improving Rationality and Nuclear Security. At first glance, this runs counter to the difference found between “veterans” and “newcomers” in previous analyses, but is understandable given that newer members are significantly less likely to be members of LessWrong or the EA Forum (-0.2561, p<0.01), which we do find to be predictors.

An EA’s age also appears to play a role, with older EAs more likely to give Climate Change top priority and younger EAs more likely to give top priority to AI Risk.

Multiple Correspondence Analysis

We also used multiple correspondence analysis (MCA) to look for patterns in cause prioritisation in our categorical variables. We examined variables relating to getting more involved in EA in different ways and membership in EA groups, as well as gender, diet, politics and career coaching. About 13% of observed variance was explained in the first two axes (link to external document). EA Facebook membership, local group membership, EA Forum membership, and becoming more involved in EA through a personal contact or local group, in that order, were the top five contributing variables to the first axis, where being a member of or involved with any of these groups corresponds to the right side of the axis. The top five contributors to the second axis were LessWrong membership, getting more involved via SSC and TLYCS, left-leaning politics and female gender, where LessWrong membership or SSC involvement corresponds to the top of the second axis, and TLYCS membership, left-leaning politics and female gender correspond to the bottom of this axis.

Generally, less involvement in the EA community corresponds to assigning a higher ranking to Climate Change and Global Poverty. In this MCA biplot, the point cloud of individuals is colour-coded by cause prioritisation (ellipses give 95% confidence intervals) and shown against the positioning of the variables on the first two axes (to aid interpretation, only the top 20 contributing variables are shown).

This pattern contrasts with that observed for the rankings of AI and X-risks, where higher rankings correspond to greater EA community engagement.


EA Survey data suggests that EAs continue to judge that a wide array of causes warrant “significant resources”, and most listed causes are judged to be either the top priority or a “near-top priority” by more than a third of respondents.

We found substantial differences in cause prioritisation across different groups within our sample, with AI and other Long-Term Future causes receiving significantly more support (and Global Poverty less) among members of EA groups like the Forum and EA Facebook. Though neither our regressions nor our Multiple Correspondence Analysis could explain much of the variance in cause prioritisation, and so should be interpreted very tentatively, they seem to point in a similar direction, with various forms of group membership, suggestive of more involvement in EA, being associated with greater support for Long-Term Future causes.


This concludes the planned posts for the 2018 EA Survey Series!

We will, however, also be following up with some supplementary posts examining involvement in different EA groups and influences on involvement, the geography of EA, and looking in more depth at GWWC pledge retention and EA growth metrics.


This post was written, and the analysis conducted, by David Moss, Neil Dullaghan and Kim Cuddington.

Thanks to Peter Hurford, Tee Barnett, Luisa Rodriguez, Derek Foster, Jason Schukraft and others for review and editing.

The annual EA Survey is a project of Rethink Charity with analysis and commentary from researchers at Rethink Priorities.

Supporting Documents

Other Articles in the 2018 EA Survey Series

Future articles we write about the 2018 Survey will be added here.

I—Community Demographics & Characteristics

II—Distribution & Analysis Methodology

III—How do people get involved in EA?

IV—Subscribers and Identifiers

V—Donation Data

Prior EA Surveys conducted by Rethink Charity

The 2017 Survey of Effective Altruists

The 2015 Survey of Effective Altruists: Results and Analysis

The 2014 Survey of Effective Altruists: Results and Analysis