A naive analysis of whether EA is talent constrained

I am deeply grateful to Aaron Gertler, who reviewed this article. His comments were very thorough (he didn’t leave a single hyperlink unclicked), and he managed to question several of my claims; I have updated parts of this post based on them. I also want to thank Carrick Flynn, Peter Hurford, EA Applicant, Jon Behar, Ben West, and 80,000 Hours for helping me directly or indirectly. They provided valuable information in their posts, comments, and emails, which I have used in this article. That said, none of them should be viewed as endorsing anything in this post. All mistakes are mine. All views are mine.


I have been using 80,000 Hours (80k) since 2017 and have read almost all of their posts, spending week after week reading them to figure out what I should be doing with my life. They seem to have done a ton of research and put out many, many posts for us readers to benefit from.

In the process they have made a lot of claims which are hard and time-consuming to verify, as we don’t have the insights, contacts, or data that 80k is exposed to. For example, they estimate that “an additional person working on the most effective issues will have over 100 times as much impact as an additional person working on a typical issue”. To verify this with even one example, I would need estimates from, say, Open Phil on the impact of an employee. I tried, but they are unable to put effort into that at the moment.

Maybe 80k can be asked for clarification directly? Unfortunately, 80k doesn’t seem approachable other than through coaching[1] (which is only for the stellar). Comment sections seem to be deserted as a place to ask for help, and at the time, I didn’t know of any other sources doing this sort of research and coaching for people[2]. Based on reading 80k for years, I formed the same impression as a fellow EA, “EA Applicant”:

Hey you! You know, all these ideas that you had about making the world a better place, like working for Doctors Without Borders? They probably aren’t that great. The long-term future is what matters. And that is not funding constrained, so earning to give is kind of off the table as well. But the good news is, we really, really need people working on these things. We are so talent constrained... --- EA Applicant on the EA Forum

And looking at the 277 karma this post got (the highest of any post on the Forum at the time), it might appear that a lot of people share(d) the sentiment that EA orgs could be seriously Talent Constrained (TC).

A few weeks back I stumbled upon some articles on the EA Forum, and to my surprise it appeared that some EA orgs were suggesting that they were not TC. Until this point I don’t think it had occurred to me that 80k’s claims (“EA is TC”) could be wrong or lost in translation, or that I should test them. Nevertheless, having seen orgs say otherwise, it felt like a good idea to dig into it at least now.

The following article is my naive investigation into whether EA is TC. Before we start going deep into whether EA is TC or not, we must first state the definition clearly.


We are going to deal primarily with the term “Talent Constrained” (TC). 80k defines TC in “Why you should work on talent gaps” (Nov 2015) as:

For some causes, additional money can buy substantial progress. In others, the key bottleneck is finding people with a specific skill set. This second set of causes are more “talent constrained” than “funding constrained”; we say they have a “talent gap”.

So, a cause is TC if finding people with a specific skill set proves to be difficult. The difficulty, I assume, lies in the lack of those skilled people, and not in some process/management constraint[3]. “EA Concepts” clears this confusion up with a better-worded example:

Organization A: Has annual funding of $5m, so can fund more staff, and has been actively hiring for a year, but has been unable to find anyone suitable… Organization A is more talent constrained than funding constrained...

In this post, the discussion focuses on orgs that are TC, not causes that are TC. When I read that AI strategy is TC due to a lack of “Disentanglement Research” (DR), I don’t know what to do about it. But if I know that FHI and many other orgs are TC in DR, then I could potentially upskill in DR and close the talent gap. So for me, looking at causes is less helpful, less concrete, and not what I have set out to understand.

Why I think EA is TC

EA has been and is talent constrained, according to surveys based on input from several EA orgs[4]: the 2017 survey, 2018 survey, and 2019 survey, conducted by 80k and CEA. In all of these surveys, EA orgs on average claim to be more Talent Constrained than Funding Constrained. For example, in 2019 EA orgs reported feeling more Talent Constrained (3 out of 5 rating) and less Funding Constrained (1 out of 5 rating)[5].

80k doesn’t seem to have changed its position on this matter for a while. In 2015, 80k suggested that we should focus on providing talent to the community rather than on ETG, in “Why you should focus on talent gaps and not funding gaps”. One of the examples they give is AI Safety, where there are people ready to donate even more funds but who think there isn’t enough of a “talent pool”. More posts, such as “Working at EA orgs” (June 2017), “The world desperately needs AI strategists” (June 2017), “Why operations management is the biggest bottleneck in EA” (March 2018), and “High-Impact Careers” (Aug 2018), continue to make the case for EA orgs being TC. Even in their recent post “Key Ideas” (October 2019)---which is mostly recycled from the 2018 article on High-Impact Careers---they continue to say that the bottleneck for GPR, for example, is researchers and operations people[6].

In Nov 2018, they wrote a post to clear up misconceptions about the term TC: “Think twice before talking about talent gaps”. 80k informs us that EA orgs are not TC in general but are TC in specific skills. Some examples (according to them) being people capable of Disentanglement Research in strategy and policy (FHI, OpenAI, DeepMind), dedicated people in influential government positions, etc. This is great; the claim is becoming narrower: EA is TC in specifically X. So what is this X?

Where is EA specifically TC?

There is a list of posts from 80k from which we can gather where EA is specifically TC. They are:

The surveys from 2017 to 2019 that informed us that EA orgs are TC provide information on “what sort of talent the EA orgs and EA as a whole would need more of, in the next 5 years?”. This question sounds like a proxy for “Where is EA specifically TC?”. 80k seems to agree with this proxy approximation of the question, as evidenced here[7] and here[8]. The top 7 results (out of 20 or so) are below:

| # | 2017 | 2017 (EA) | 2018 | 2018 (EA) | 2019 | 2019 (EA) |
|---|------|-----------|------|-----------|------|-----------|
| 1 | GR | G&P*** | Oper. | G&P | GR | G&P |
| 2 | Good Calib. | Good Calib. | Mngment | Oper. | Oper. | Mngment |
| 3 | Mngment | Mngment | GR | ML/AI | Mngment | GPR |
| 4 | Off. mngers | ML/AI | ML/AI | Mngment | ML | Founding |
| 5 | Oper. | Movt. build | GPR | GPR | Econ/math | Soc. Skill |
| 6 | Math | GR | Founder | GR | HighEA* | ML/AI |
| 7 | ML/AI | Oper. | Soc. skill | Founding | GPR | Movt. Build |

\* High-level overview of EA
\*\*\* Government and Policy

For the talents that are unclear[9], I am unable to do anything with them at the moment. For the ones that I have clear examples for, I proceed further.

Another way to arrive at, or to supplement, this list is to look at the top problem profiles and check what the bottlenecks are. For example, in the profile on shaping AI (March 2017), we see that 80k calls for people to help in AI technical research, AI strategy and policy, complementary roles, and advocacy and capacity building. So basically EVERYTHING IN AI except ETG is TC (it appears). In the problem profile on GPR (July 2018), 80k suggests that they mainly need researchers trained in math, econ, philosophy, etc. Also needed are academic managers and operations staff. A very similar story for working at EA orgs as well.

Is it just me, or is EA TC in “general”? When researchers, operations people, and managers are in shortage at GPR orgs, AI orgs, and other EA orgs, then who else is left?

In the post on High-Impact Careers (August 2018), 80k suggests the following priority career paths and what they are constrained by:

In brief, we think our list of top problems (AI safety, biorisk, EA, GPR, nuclear security, institutional decision-making) are mainly constrained by research insights, either those that directly solve the issue or insights about policy solutions. --- High-Impact Careers

In the dedicated bottleneck posts for operations and AI strategy, the titles alone already inform us how TC the situation supposedly is.

In conclusion, the surveys say GRs, ML/AI people, GPR people, and movement building are TC (2019). The problem profiles seem to suggest that GPR and AI are completely TC except for ETG (2017, 2018). Whereas the High-Impact-Careers post says that research insights (good researchers) and policy solutions (good policy people) are the most constrained (2018). There is some discrepancy between the different articles---they don’t all say the same thing---but we move on with the key message that everything listed could potentially be TC. But are they really TC, though?

The Evidence


Researchers in GPR are claimed to be constrained. GRs also stand at the top of the survey lists shown before, for 2019. Yet Open Phil seems to paint a very different picture. In their recent hiring round (started in Feb 2018 and ended in December 2018) they wanted to hire 5 GRs. They report that more than 100 strong résumés, aligned with Open Phil’s mission, were received. 59 of the candidates were selected after remote work tests and went into an interview. Of these, 17 were offered a 3-month trial, and 5 were selected in the end. “Multiple people” they met in this round are claimed to have the potential to excel in roles at Open Phil in the future. Open Phil does not seem to feel that there is a lack of skilled people. It appears that they had plenty to choose from and that they found suitable candidates.

A similar case is observed with EAF. In EAF’s November 2018 hiring round they wanted to hire 1 GR (for grant evaluation) and 1 operations person. Within just 2 weeks, 66 people applied to this EA org, which was in a non-hub[10]. These 66 trickled down to 10 interviews after work tests, 4 were offered trials, and 2 were selected in the end. No TC in GR here either.
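To get a rough sense of how selective these rounds were, the reported funnel numbers can be turned into stage-to-stage pass rates. This is a back-of-the-envelope sketch: Open Phil’s “more than 100” résumés is approximated as exactly 100, so its rates are upper bounds.

```python
# Back-of-the-envelope sketch of the two hiring funnels reported above.
# Open Phil's "more than 100" resumes is approximated as 100.

def funnel_rates(stages):
    """Stage-to-stage pass rates for a hiring funnel."""
    return [round(later / earlier, 3) for earlier, later in zip(stages, stages[1:])]

# resumes/applications -> interviews -> trials -> hires
open_phil = [100, 59, 17, 5]
eaf = [66, 10, 4, 2]

print(funnel_rates(open_phil))  # [0.59, 0.288, 0.294]
print(funnel_rates(eaf))        # [0.152, 0.4, 0.5]

# Overall hire rates: 5/100 = 5% and 2/66 ~ 3%.
```

Roughly 3-5% of applicants were hired in each round, which fits the reading above that the constraint was not a shortage of applicants.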

Would Open Phil like to hire more GRs? For sure, but they say they don’t have the capability to deploy such a pool of available talent. They seem to be constrained by something else, something other than “talent”.

AI Strategy and Policy

Researchers in AI strategy and policy are also claimed to be constrained, and the surveys echo the same. But Carrick Flynn from FHI (Sep 2017) suggests that AI policy implementation and research work is essentially on hold until Disentanglement Research progresses, and that even “extremely talented people” will not be able to contribute directly until then. Similar to Open Phil, the institutional capacity to absorb and utilize more researchers in strategy is constrained, according to Carrick. It must be noted that this is just one person’s view on the matter; stronger evidence would be several AI orgs agreeing with Carrick’s view.

Except for the TC in Disentanglement Research (DR)---where there seems to be large demand, and if you meet the bar, you will get a job---there seems to be no sign of TC in strategy and policy at the moment.

Once DR progresses, there would be a need for “a lot of AI researchers”, Carrick expects. It’s been 2.5 years since Carrick’s post, and as late as Nov 2018, 80k continued to cite it. This seems to suggest that not much has changed. I have requested Carrick to write a reboot of his initial post, and hopefully he can further clarify the TC or lack thereof.

Researchers and management staff in other EA orgs

The co-founder and board member of Rethink Charity seems to suggest that both senior and junior staff for Rethink Charity and Charity Science were not hard to find, i.e., not TC.

I’ve certainly had no problem finding junior staff for Rethink Priorities, Rethink Charity, or Charity Science (Note: Rethink Priorities is part of Rethink Charity but both are entirely separate from Charity Science)… and so far we’ve been lucky enough to have enough strong senior staff applications that we’re still finding ourselves turning down really strong applicants we would otherwise really love to hire. --- Peter Hurford in the 2019 survey

The Life You Can Save’s Jon Behar agrees with Peter. He adds that it’s not a lack of talent but a lack of money to add new staff that is the bottleneck for TLYCS.

Charity Entrepreneurship’s incubation program has grown from 140 applications last year to ~2000 applications for 15-20 positions. It’s plausibly not TC this year, at least.


An org is TC in talent X if it is not able to find “skilled” people despite “hiring actively”. So far we have seen that Open Phil, EAF, Rethink Charity, Charity Science, TLYCS, and FHI are able to find the skilled people they need---except for one concrete example, Disentanglement Research at FHI (and possibly similar institutes). Contrary to the claims from 80k, it appears that several orgs are not TC.

I am really upset with 80k. First it was focusing too much on career capital (CC) and positions like management consulting, and now TC. Getting into the TC debate only opened a Pandora’s box of further issues. Recently, I discovered that their discussion of replaceability is plausibly wrong. They have gone back and forth[11] on it in the past and currently suggest that it depends. They ended up inflating the impact associated with people working in EA orgs and have now taken it back. They severely downplayed how competitive it is to get jobs in EA orgs[12]. And there are so many cases[13] of people who feel the same way, not without reason. I traveled with 80k on the CC hype and spent months identifying positions of “maximum CC”[14]. Then I did a 1-year Data Science course on Coursera. After that I jumped onto the work-at-an-EA-org-because-TC hype and was just about to upskill in statistics and apply for GR positions because they needed me.

So many crucial mistakes that cost people like me and others[13:1] a lot of time, and the world a “lot of” dollars. And when someone requests one of the members of 80k to not just serve the elite, and to perhaps invest in a small conversation with the non-elite EAs to save them years of wasted time, there is no reply.

Thus, I find it very hard to trust the claims made by 80k. There are so many of those claims in every post, and it’s just impractical to verify each one of them. Rather than relying on the interpretation of English[15] and advice generalized for everyone, I find the EA Forum a much easier place to get information from, challenge claims, and get responses (quickly). I found most of the evidence against TC, including the Pandora’s box of issues, there. A lot of the successful people from the EA world seem approachable there via chats, comments, and AMAs. Recently I was able to chat with Ben West, Aaron Gertler, Peter Hurford, Jon Behar, Jeff Kaufman, and Stefan Torges. A bigger celebrity list of people can be seen commenting on posts, such as Carrick Flynn and 80k’s very own Rob Wiblin.

Final message

Caution: Just because an org is not TC, it doesn’t mean that you should reject that org.

Why is this debate so important?

Whether an org is TC or not has implications for the impact you make. Your true impact when a job at an EA org is TC is (much) higher than when the job is not TC. A junior GR at GiveWell is expected to move $2.4m if the job is TC. The same GR is expected to move only $244k in the case that the hired GR is better than the next-best candidate by 10% (not TC), because then only the marginal improvement over the replacement counts. Such is the distinction between being TC and not.
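The arithmetic above can be sketched as a toy model (the numbers are illustrative, not 80k’s or GiveWell’s actual figures, and the helper name is mine):

```python
# Toy model of counterfactual impact, with illustrative numbers.

def counterfactual_money_moved(money_moved, edge_over_next_best, talent_constrained):
    """If the org is TC, the role would otherwise stay unfilled, so the
    hire gets credit for the full amount moved. If the org is not TC, a
    next-best candidate would have been hired instead, so only the
    marginal edge over that candidate counts."""
    if talent_constrained:
        return money_moved
    return money_moved * edge_over_next_best

base = 2_440_000  # ~$2.4m moved per year by a junior GR (illustrative)

# TC: the full amount is counterfactual.
print(counterfactual_money_moved(base, 0.10, talent_constrained=True))   # 2440000

# Not TC, 10% better than the next-best candidate: ~$244k.
print(counterfactual_money_moved(base, 0.10, talent_constrained=False))
```

The roughly 10x gap between the two cases is exactly why the TC question matters for career choice.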

The above example assumes no spillover effects. But is that correct? Why is there no spillover? Should I work in EA or not? How much value do people really get out of working at an EA org? What is the best path for my aspiring EA career?

Stay tuned...


  1. I approached them with questions on DS and they informed me that they don’t give advice over email. I applied for coaching and didn’t make the cut. What did I ask?

    • Do I gain sufficient skills to migrate to direct work (say, analyst at GiveWell) after having worked in management consulting (M.C) for 5 years?

    This is in case things don’t work out towards becoming a partner.

    • Are there examples of high-impact direct workers who came from M.C?

    I would like to scan their profiles to get a feel for what is possible. ↩︎

  2. EAF seems to offer career coaching here.

    EAF’s operations analyst is also doing coaching here. ↩︎

  3. If, on the other hand, 80k suggested that TC included everything that makes it hard to find people with specific skill sets (such as a hiring bottleneck), then TC is quite the misnomer. Joel from the EA Forum puts it well:

    I could be mistaken, but it would seem odd to say you’re “funding constrained” but can’t use more funding at the moment. Whereas we are saying orgs are “talent constrained” but can’t make use of available talent… I feel a “talent bottleneck” implies an insufficient supply of talent/applicants, which doesn’t seem to be the case. I guess it’s more that there’s insufficient talent actually working on the problems, but it’s not a matter of supply, so it’s more of a “hiring bottleneck” or an “organizational capacity bottleneck”. --- Joel, EA Forum

  4. The 2018 survey includes:

    80,000 Hours (3), AI Impacts (1), Animal Charity Evaluators (2), Center for Applied Rationality (2), Centre for Effective Altruism (2), Centre for the Study of Existential Risk (1), Berkeley Center for Human-Compatible AI (1), Charity Science: Health (1), DeepMind (1), Foundational Research Institute (2), Future of Humanity Institute (2), GiveWell (1), Global Priorities Institute (2), LessWrong (1), Machine Intelligence Research Institute (1), Open Philanthropy Project (4), OpenAI (1), Rethink Charity (2), Sentience Institute (1), SparkWave (1), and Other (5)

  5. Funding constrained:

    1 = how much things cost is never a practical limiting factor for you; 5 = you are considering shrinking to avoid running out of money

    Talent constrained:

    1 = you could hire many outstanding candidates who want to work at your org if you chose that approach, or had the capacity to absorb them, or had the money; 5 = you can’t get any of the people you need to grow, or you are losing the good people you have

  6. 80k about GPR: “To make this happen, perhaps the biggest need right now is to find more researchers able to make progress on the key questions of the field. There is already enough funding available to hire more people if they could demonstrate potential in the area (though there’s a greater need for funding than with AI safety)”

    “Another bottleneck to progress on global priorities research might be operations staff, as discussed earlier, so that’s another option to consider if you want to work on this issue.” ↩︎

  7. “These positions are both our own assessment and backed up by results of our surveys of community leaders about talent constraints, skill needs and key bottlenecks.”


  8. What skills are the organizations most short of?


  9. There are several talents listed in the surveys which I don’t understand; I don’t have any examples of what they could mean. For example, “Communications other than marketing and movement building”, “high level knowledge and enthusiasm about effective altruism”, and “broad general knowledge about many relevant topics”. Some of the other “talents” mentioned seem too generalized. When I think of “one-on-one social skills”, it could be referring to anything: policy people talking to politicians, career counselors convincing people to change their career path, or even people on the front line of fundraising. If the surveyors wanted to inform the community that frontline fundraisers with “good social skills” (whatever that means) are required, then saying exactly that in the survey would seem much more beneficial than what they have currently done. Contrast this with talents such as GR or operations. It is clear what these mean: for GR I can think of researchers at Open Phil or GiveWell; for operations I think of Tara from FHI. ↩︎

  10. … the following list for the SF area: 80,000 Hours (Oxford and SF), GiveWell (San Francisco), Open Philanthropy Project (San Francisco), OpenAI (SF), MIRI (Berkeley), Center for Applied Rationality (Berkeley), AI Impacts (Berkeley), Animal Charity Evaluators (Berkeley, without any office space), Animal Equality (US, UK).

    The following for the UK area: Centre for the Study of Existential Risk (Oxford), Future of Humanity Institute (Oxford, UK), Global Priorities Institute (Oxford), Sentience Institute (London), Giving What We Can (Oxford), Founders Pledge (London), Centre for Effective Altruism (Oxford), Against Malaria Foundation (St Albans, UK), Sightsavers (UK).

    The following for other areas: Evidence Action (Washington D.C.), Helen Keller International (Washington D.C.), GiveDirectly (NYC), Poverty Action Lab (Cambridge, MA, US), Good Food Institute (Washington D.C., US), Center for Global Development (Washington D.C., US).

    So for the EA community, it looks like the clustering does happen in the UK (Oxford, London) and the San Francisco area. Unsurprisingly, these are the only regions that host the annual EA conferences.


  11. 2012: While talking about doctors, aid workers, and campaigners, they seem to be suggesting: “That’s because careers that are normally thought to be ethical tend to be extremely competitive. That means that if you don’t take the job, someone else will take your place.”

    2014: They went on to suggest that replaceability might not be as important as you might think. In 2017 they seemed to continue to promote that idea in “Working at effective altruist organizations”.

    In the 2019 article on “how replaceable are top candidates in large hiring rounds”, they seem to suggest that it depends on the type of distribution of the candidates (log-normal or normal). ↩︎

  12. Claim: “If you get involved in the community, and prove your interest and general competence, there’s a decent chance you’ll be able to find a role regardless of your qualifications and experience.”

    Example: EA Applicant from the EA Forum.

    He applied to 20 jobs. He didn’t get a single one, and neither did his friends---with the characteristics described above---get jobs. His profile seems to match the one in the claim.

    Note: The claim does say “decent chance” and not “for sure”. I give them that. Although many people seem to interpret it differently. ↩︎

  13. Links to posts where people were completely misinformed about how competitive the EA world is (look in the comments as well):

    1. https://forum.effectivealtruism.org/posts/jmbP9rwXncfa32seH/after-one-year-of-applying-for-ea-jobs-it-is-really-really

    2. https://www.facebook.com/groups/473795076132698/permalink/1077231712455695/

    3. https://physticuffs.tumblr.com/post/183108805284/slatestarscratchpad-this-post-is-venting-it

    ↩︎ ↩︎
  14. I am not a big fan of these broad terminologies, as they don’t allow ME to act on them. For example: “Best ways to gain Career Capital (CC) are: work at a growing organisation that has a reputation for high performance; getting a graduate degree; working in the tech sector; taking a data science job; working in think tanks; making ‘good connections’; having runway, etc.” Literally everything under the sun.

    I am unable to act on it. I could in theory pursue everything. I don’t know how to compare which option has higher CC and which has lower. The definition says: “CC puts you in a better position to make a difference in the future, including skills, connections, credentials and runway.” When I work in data science at a FAANG job, do I have higher CC compared to when I work on a computer science degree? I don’t know.

    Economists routinely measure the impact of high-school dropout vs high-school diploma vs some years of college vs undergrad degree vs grad degree, in different fields, using the variable “median weekly earnings” or “lifetime earnings”. So when someone says, “you need a degree to get ahead in life”, I can imagine what they mean: a $470 weekly wage increase. Whereas when someone says, “a Computer Science PhD is good CC”, I am lost. Contrast that with saying “the best way to gauge CC is by looking at earnings”. Then I could look at median earnings for a data science FAANG job vs a PhD in computer science at, say, a top-20 university (based on my capability) and get ahead in life. ↩︎

  15. “If you get involved in the community, and prove your interest and general competence, there’s a decent chance you’ll be able to find a role regardless of your qualifications and experience.” --- 80k

    This seems to imply to me that people like EA Applicant should have gotten a job. But he didn’t. I think examples would make it much easier to understand what they mean. What does “decent chance” mean? ↩︎