Correlations Between Cause Prioritization and the Big Five Personality Traits

Late Edit: This post received way more attention than I expected. For important context, please see David Moss’s first comment, especially his helpful visualization. “One thing worth bearing in mind is that these are very small proportions of the responses overall...” I am ultimately talking about small groups of people within the total number of survey respondents, and although I think my claims are true, I believe they are trivially so; I created this post largely for fun and practice, not for making important claims.

Note to EA Forum users: Please pardon the introductory content; this post is for sharing with my classmates and professors who are otherwise unaware of the EA movement.

Content warning: Frequentist statistics

The effective altruism community is a group of nerds, but instead of nerding out about train engines, Star Wars, or 18th-century sword-fighting, they nerd out about one question: Given limited resources and all of humanity’s accumulated knowledge about the social and physical sciences, what is the most cost-effective way to improve the world?

While the focus began on figuring out which charity is the best place to spend your marginal dollar, and much work still focuses on how to do that, the EA community has expanded to questions of how analytic, altruistic-minded people should best allocate their time and social capital, as well.

People in the community have settled on several possible answers to the question, “Of all the problems to work on, what should members of the EA community focus on the most?” Some examples of those answers include improving animal welfare, global poverty reduction, and improving biosecurity measures against engineered or accidental pandemics. (Notably, members of the community personally prepared for COVID weeks before their governments enacted emergency orders.)

For years, I’ve assumed that the differences in cause area selection are determined solely by people’s prior beliefs, i.e. if you believe animals are “moral patients” in the philosophy lingo, then you’re more likely to prioritize animal welfare; if you believe currently living people are moral patients and people who haven’t been born yet are not, then you’re more likely to prioritize global poverty reduction (over e.g. existential risk reduction).

However, with the fresh acquisition of some basic data science skills and some anonymized survey data, I thought of an interesting question: Do a person’s personality traits affect which cause area they’re likely to prioritize? And if so, how?

You see, in 2018, the EA-affiliated (but not me-affiliated!) organization Rethink Charity included optional questions at the end of their annual community survey which recorded people’s scores on the Big Five personality traits, so we have rough data on how nearly 1200 members of the EA community score on traits of openness, extroversion, conscientiousness, agreeableness, and “emotional stability” (in the survey data and in this analysis, the opposite of the trait usually labeled “neuroticism” in Big Five inventories).

If you’re already familiar with the EA community, then just for fun, you could try making some guesses about the relationships between personality traits and cause prioritization before you scroll down any further.

In the interest of transparent calibration, I’ll divulge the three conjectures I jotted down prior to running any of my statistical tests. I expected higher openness to correlate with AI safety prioritization, higher conscientiousness to correlate with animal welfare prioritization, and lower emotional stability to correlate with prioritizing mental health interventions. None of my predictions were borne out by my analysis.

This post won’t say anything about how survey respondents differed from the general public; it only compares groups within the EA community to each other. For every pair of groups I compared, the overlap in personality trait scores was always greater than the difference between the groups.

(Epistemic notes: Obviously, I was hoping for something interesting to write about, and tried two unproductive tacks before settling on the following statistical tests, but I was also prepared for a result consistent with the null hypothesis, i.e. that there is no correlation between personality traits and cause prioritization, and was furthermore prepared to write a blog post with that message. My significance threshold was the common Schelling point of p &lt; 0.05 for non-medical research. Out of 55 two-sample t-tests, we would expect about 3 (55 × 0.05 ≈ 2.75) to come out “statistically significant” due to random chance alone, but I found 10, so we can expect most of these to point to actually meaningful differences represented in the survey data.)
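The multiple-comparisons arithmetic above can be checked in a few lines. This sketch treats the 55 tests as independent, which isn’t strictly true here (they share respondents), so it’s only a back-of-the-envelope calculation:

```python
import math

# Under the null hypothesis, each test has an alpha = 5% chance
# of a false positive.
num_tests, alpha = 55, 0.05

# Expected number of spurious "significant" results by chance alone
expected = num_tests * alpha
print(round(expected, 2))  # 2.75: roughly 3 spurious results

# Probability of seeing 10 or more false positives (binomial tail)
# if all 55 null hypotheses were actually true
p_at_least_10 = 1 - sum(
    math.comb(num_tests, k) * alpha**k * (1 - alpha) ** (num_tests - k)
    for k in range(10)
)
print(f"{p_at_least_10:.4f}")  # well under 1%
```

The tiny tail probability is why finding 10 significant results out of 55 tests suggests most of them reflect real between-group differences rather than noise.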

Within the 2018 survey, respondents had to assign an importance level to each of 11 potential cause areas: animal welfare, climate change, mental health, global poverty, overall rationality, “meta” causes like community building, cause prioritization research (i.e. what other cause areas have we not thought of yet?), biosecurity, nuclear security, AI safety, and a final catch-all cause area for other, unspecified existential risks.

Respondents had to assign one of the following importance levels, in descending order: “This cause should be the top priority,” “This cause should be a near-top priority,” “This cause deserves significant resources but less than the top priorities,” “I do not think this is a priority, but should receive some resources,” and “I do not think any resources should be devoted to this cause.” Respondents could also select “Not sure” as a response.

There are many ways to approach this topic that I did not attempt. In this analysis, for each cause area, I pitted the group of people who picked the cause as their topmost priority against the group of people who said the cause should receive zero resources. And I found some interesting stuff.
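Each of these group-versus-group comparisons boils down to a two-sample t-test on the trait scores of the two groups. A minimal sketch with scipy follows; the scores below are invented for illustration and are not the survey data (the real analysis lives in the notebook linked at the end):

```python
from scipy import stats

# Hypothetical 1-7 emotional-stability scores for two response groups
# (made-up numbers, NOT the actual survey responses)
top_priority = [3.5, 4.0, 3.0, 4.5, 3.5, 2.5, 4.0, 3.0]
no_resources = [5.0, 4.5, 5.5, 4.0, 5.0, 4.5, 6.0, 5.5]

# Welch's two-sample t-test: does not assume equal group variances,
# which matters when the two groups differ in size, as they do here
t_stat, p_value = stats.ttest_ind(top_priority, no_resources, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A negative t-statistic here would mean the top-priority group scored lower on the trait, on average, than the no-resources group.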

First of all: people who rate animal welfare their top priority on average have lower emotional stability scores than people who say no resources should be devoted to animal welfare, with a p-value of 0.026.


Sidebar: It’s common to think of some of the Big Five personality traits as being straightforwardly desirable or undesirable, but I think it’s the case that the goodness and badness of e.g. conscientiousness is actually context-dependent. For further reading, I recommend this essay by Eloise Rosen. In that vein, I hope these findings are the topic of EA Global after-party chitchat rather than the basis for acrimonious accusations! This is just for fun, guys.

Moving on!

People who think nuclear security should be the top priority on average rank higher on openness than people who think the cause area of nuclear security should receive no resources (p=.046).


Remember my advance prediction about the correlation between emotional stability and rating mental health either a topmost priority or a non-priority? Well, that prediction wasn’t just wrong, it was hilariously wrong. Those two groups of people had statistically significant differences on extroversion (p=.028), conscientiousness (.018), and agreeableness (.023), but not emotional stability, the trait on which they were the most similar!



People who assign the highest priority to improving rationality in society and people who say no resources should be spent on such a herculean effort differ on extroversion (p=.026), emotional stability (.048), and openness (.017).


Finally, I’m going to leave you all with the most surprising-to-me graph, and also the result with the lowest p-value, 0.009. This graph again compares people who rated the cause their top priority to people who said it should receive no resources, this time for climate change. I expect this graph to generate some interesting speculation!

People who say climate change should be the top priority of the EA movement rank higher in conscientiousness than people who think no resources should be devoted to the cause.


Each graph shows smoothed probability densities for the given personality score among survey respondents who answered the prioritization question in the specified way.
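For the curious: a smoothed density curve of this sort is a kernel density estimate, which places a small Gaussian bump at each observed score and sums them. A self-contained sketch (again with made-up scores, not the survey data):

```python
import math

def gaussian_kde(samples, bandwidth):
    """Return a density function: a mixture of Gaussians, one per sample."""
    norm = len(samples) * bandwidth * math.sqrt(2 * math.pi)
    def density(x):
        return sum(
            math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
            for s in samples
        ) / norm
    return density

# Hypothetical 1-7 trait scores for one response group
scores = [3.0, 3.5, 4.0, 4.0, 4.5, 5.0, 5.5]
density = gaussian_kde(scores, bandwidth=0.5)
# Plotting density(x) over the score range yields a curve like those
# in the graphs above; the bandwidth controls how much smoothing occurs
```

In practice a plotting library handles this automatically (and picks the bandwidth for you); the point is just that the curves are smoothed histograms, so small bumps in the tails can reflect only a handful of respondents.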

Here are all ten of the graphs I created showing between-group differences.

If you’d like to check my work, my code notebook for this little project lives here.

Special thanks to Alexander Rapp for his helpful comments on an earlier draft of this post, and thank you to several members of the EA Corner Discord server for encouraging me to host this content here.