A naive analysis of whether EA is talent constrained
I am deeply grateful to Aaron Gertler, who reviewed this article. His comments were very thorough (he didn’t leave a single hyperlink unclicked), and he managed to question several of my claims. I have updated parts of this post based on his comments. I also want to thank Carrick Flynn, Peter Hurford, EA Applicant, Jon Behar, Ben West and 80,000 Hours for helping me directly or indirectly; they provided valuable information in their posts, comments and emails, which I have used in this article. That said, again, none of them should be viewed as endorsing anything in this post. All mistakes are mine. All views are mine.
Introduction
I have been using 80,000 Hours (80k) since 2017 and have read almost all their posts, spending week after week on them to figure out what I should be doing in life. They seem to have done a ton of research and put out many, many posts for us readers to benefit from.
In the process they have made a lot of claims which are hard and time-consuming to verify, as we don’t have the insight, contacts or data that 80k is exposed to. For example, they estimate that “an additional person working on the most effective issues will have over 100 times as much impact as an additional person working on a typical issue”. To verify this with even one example, I would need estimates from, say, Open Phil on the impact of an employee. I tried, but they are unable to put effort into that at the moment.
Maybe 80k can be asked for clarification directly? Unfortunately, 80k doesn’t seem approachable other than through coaching[1] (which is only for the stellar). The comments sections seem deserted as a place to ask for help, and at the time I didn’t know of any other source doing this sort of research and coaching[2]. From reading 80k for years, I formed the same impression as a fellow EA applicant:
Hey you! You know, all these ideas that you had about making the world a better place, like working for Doctors without Borders? They probably aren’t that great. The long-term future is what matters. And that is not funding constrained, so earning to give is kind of off the table as well. But the good news is, we really, really need people working on these things. We are so talent constrained… --- EA Applicant, EA Forum
And looking at the 277 karma that post got (the highest of any post on the Forum at the time), it appears that a lot of people share(d) the sentiment that EA orgs could be seriously Talent Constrained (TC).
A few weeks back I stumbled upon some articles on the EA Forum, and to my surprise it appeared that some EA orgs were suggesting they were not TC. Until this point it hadn’t occurred to me that 80k’s claim (“EA is TC”) could be wrong or lost in translation, or that I should test it. Having seen orgs say otherwise, it felt like a good idea to finally dig into it.
The following article is my naive investigation into whether EA is TC. Before we go deep into the question, we must first state the definition clearly.
Definitions
We are going to deal primarily with the term “Talent Constrained” (TC). 80k defines TC in “Why you should focus on talent gaps and not funding gaps” (Nov 2015) as follows:
For some causes, additional money can buy substantial progress. In others, the key bottleneck is finding people with a specific skill set. This second set of causes are more “talent constrained” than “funding constrained”; we say they have a “talent gap”.
So, a cause is TC if finding people with a specific skill set proves to be difficult. The difficulty, I assume, lies in the lack of such skilled people, and not in some process/management constraint[3]. “EA Concepts” clears this confusion up with a better-worded example:
Organization A: Has annual funding of $5m, so can fund more staff, and has been actively hiring for a year, but has been unable to find anyone suitable… Organization A is more talent constrained than funding constrained...
In this post, discussions are focused on orgs that are TC, not causes that are TC. When I read that AI strategy is TC for lack of “Disentanglement Research” (DR), I don’t know what to do about it. But if I know that FHI and many other orgs are TC in DR, then I could potentially upskill in DR and close the talent gap. Looking at causes is, for me, less helpful, less concrete, and not what I set out to understand.
Why I think EA is TC
EA has been and is talent constrained, according to surveys based on input from several EA orgs[4]: the 2017 survey, 2018 survey and 2019 survey, conducted by 80k and CEA. In all the surveys, the orgs on average report being more Talent Constrained than Funding Constrained. For example, in 2019 EA orgs reported feeling more Talent Constrained (a 3 out of 5 rating) and less Funding Constrained (a 1 out of 5 rating)[5].
80k doesn’t seem to have changed its position on this matter for a while. In 2015, 80k suggested that we should focus on providing talent to the community rather than earning to give (ETG), in “Why you should focus on talent gaps and not funding gaps”. One of the examples they give is AI safety, where there are people ready to donate even more funds, but who think the “talent pool” isn’t big enough. More posts, such as “Working at EA orgs” (June 2017), “The world desperately needs AI strategists” (June 2017), “Why operations management is the biggest bottleneck in EA” (March 2018) and “High Impact Careers” (Aug 2018), continue to make the case that EA orgs are TC. Even in their recent post “Key Ideas” (October 2019), which is mostly recycled from the 2018 article on high-impact careers, they continue to say that the bottleneck to global priorities research (GPR), for example, is researchers and operations people[6].
In Nov 2018, they wrote a post to clarify misconceptions around the term TC: “Think twice before talking about talent gaps”. 80k informs us that EA orgs are not TC in general, but are TC in specific skills. Some examples (according to them): people capable of Disentanglement Research in strategy and policy (FHI, OpenAI, DeepMind), dedicated people in influential government positions, etc. This is great; the claim is becoming narrower: EA is TC in specifically X. So what is this X?
Where is EA specifically TC?
There is a list of 80k posts from which we can gather where EA is specifically TC:
- Bottlenecks in top problem profiles (shaping AI, working at EA orgs, GPR, etc.)
- Posts on priority career paths (“High Impact Careers”, “Key Ideas”)
- Focused bottleneck posts (on AI strategists and operations)
The surveys from 2017 to 2019 that told us EA orgs are TC also provide information on “what sort of talent the EA orgs and EA as a whole would need more of, in the next 5 years”. This question sounds like a proxy for “Where is EA specifically TC?”, and 80k seems to agree with this proxy, as evidenced here[7] and here[8]. The top 7 results (out of 20 or so) are below:
| Rank | 2017 | 2017 (EA) | 2018 | 2018 (EA) | 2019 | 2019 (EA) |
|---|---|---|---|---|---|---|
| 1 | GR | G&P | Oper. | G&P | GR | G&P |
| 2 | Good Calib. | Good Calib. | Mngment | Oper. | Oper. | Mngment |
| 3 | Mngment | Mngment | GR | ML/AI | Mngment | GPR |
| 4 | Off. mngers | ML/AI | ML/AI | Mngment | ML | Founding |
| 5 | Oper. | Movt. build | GPR | GPR | Econ/math | Soc. skill |
| 6 | Math | GR | Founder | GR | HighEA* | ML/AI |
| 7 | ML/AI | Oper. | Soc. skill | Founding | GPR | Movt. build |

\* HighEA = high-level overview of EA
\*\*\* G&P = Government and Policy
For the talents that are unclear[9], I am unable to do anything at the moment. For the ones I have clear examples of, I proceed further.
Another way to arrive at, or supplement, this list is to look at the top problem profiles and check what the bottlenecks are. For example, in the profile on shaping AI (March 2017), 80k calls for people to help in AI technical research, AI strategy and policy, complementary roles, and advocacy and capacity building. So basically EVERYTHING IN AI except ETG is TC (it appears). In the problem profile on GPR (July 2018), 80k suggests that they mainly need researchers trained in math, econ, phil, etc., plus academic managers and operations staff. A very similar story for working at EA orgs as well.
Is it just me, or is EA TC in “general”? When researchers, operations people and managers are in short supply at GPR orgs, AI orgs and other EA orgs, who else is left?
In the post on high-impact careers (August 2018), 80k lists the following priority career paths and what they are constrained by:
In brief, we think our list of top problems (AI safety, biorisk, EA, GPR, nuclear security, institutional decision-making) are mainly constrained by research insights, either those that directly solve the issue or insights about policy solutions. --- High Impact Careers
In the focused bottleneck posts on operations and AI strategy, the titles alone tell us how TC the situation supposedly is:
- “The world desperately needs AI strategists” (June 2017). Here, beyond the title, I didn’t really understand the “desperate need for AI strategists”. Miles expects that “many” jobs would open up in the “future”.
- “Why operations management is one of the biggest bottlenecks in effective altruism” (March 2018). 80k updated this post one year later, saying that it is “somewhat out of date” and that the job market had changed over the intervening year. That sounds plausible.
In conclusion: the surveys say GRs, ML/AI people, GPR people and movement builders are TC (2019). The problem profiles seem to suggest that GPR and AI are completely TC except for ETG (2017, 2018). And the high-impact careers post says that research insights (good researchers) and policy solutions (good policy people) are the most constrained (2018). There is some discrepancy between the articles (not every article says the same thing), but let us move on with the key message that everything listed could potentially be TC. But are they really TC, though?
The Evidence
GR in GPR
Generalist researchers (GRs) in GPR are claimed to be constrained, and GRs also stand at the top of the 2019 survey list shown above. Yet Open Phil seems to paint a very different picture. In their recent hiring round (started in February 2018 and ended in December 2018), they wanted to hire 5 GRs. They report receiving more than 100 strong resumes from people whose missions aligned with Open Phil’s. 59 candidates were selected after remote work tests and went on to interviews; of these, 17 were offered a 3-month trial and 5 were hired in the end. “Multiple people” they met in this round are said to have the potential to excel in roles at Open Phil in the future. Open Phil does not seem to feel there is a lack of skilled people. It appears they had plenty to choose from and found suitable candidates.
A similar case is observed with EAF. In EAF’s November 2018 hiring round, they wanted to hire 1 GR (for grant evaluation) and 1 operations person. Within just 2 weeks, 66 people applied to this EA org, which is located in a non-hub[10]. The 66 were whittled down to 10 interviews after work tests; 4 were offered trials, and 2 were hired in the end. No TC in GR here either.
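For a rough sense of how selective these funnels were, here is a minimal sketch computing the pass-through rates (the stage counts are the ones reported above; “100” stands in for Open Phil’s “more than 100” strong resumes):

```python
# Hiring-funnel pass-through: what fraction of strong applicants were hired.
# Stage counts as reported in the two hiring rounds described above.

funnels = {
    "Open Phil (GR, 2018)": [100, 59, 17, 5],  # resumes, interviews, trials, hires
    "EAF (GR + ops, 2018)": [66, 10, 4, 2],    # applicants, interviews, trials, hires
}

for org, stages in funnels.items():
    hire_rate = stages[-1] / stages[0]
    print(f"{org}: {stages} -> {hire_rate:.0%} of strong applicants hired")
# Open Phil: 5% hired; EAF: 3% hired.
# Hard to square with "unable to find anyone suitable".
```

Selecting roughly 1 in 20 or 1 in 30 strong applicants looks, on its face, like the opposite of the definition of TC given earlier.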
Would Open Phil like to hire more GRs? For sure. But they say they don’t have the capacity to deploy this pool of available talent. They seem to be constrained by something else, something other than “talent”.
AI Strategy and Policy
Researchers in AI strategy and policy are also claimed to be constrained, and the surveys echo the same. But Carrick Flynn from FHI (Sep 2017) suggests that AI policy implementation and research work are essentially on hold until Disentanglement Research progresses, and that even “extremely talented people” will not be able to contribute directly until then. Similar to Open Phil, the institutional capacity to absorb and utilize more strategy researchers is constrained, according to Carrick. It must be noted that this is just one person’s view; stronger evidence would be several AI orgs agreeing with Carrick.
Except for the TC in Disentanglement Research (DR)---where there seems to be large demand, and if you meet the bar you will get a job---there seems to be no sign of TC in strategy and policy at the moment.
Once DR progresses, there will be a need for “a lot of AI researchers”, Carrick expects. It has been 2.5 years since his post, and as late as Nov 2018 80k continued to cite it, which suggests that not much has changed. I have tried requesting Carrick to write a reboot of his initial post; hopefully he can further clarify the TC, or lack thereof.
Researchers and Management staff in other EA orgs
The co-founder and board member of Rethink Charity seems to suggest that neither senior nor junior staff were hard to find for Rethink Charity and Charity Science, i.e., not TC:
I’ve certainly had no problem finding junior staff for Rethink Priorities, Rethink Charity, or Charity Science (Note: Rethink Priorities is part of Rethink Charity but both are entirely separate from Charity Science)… and so far we’ve been lucky enough to have enough strong senior staff applications that we’re still finding ourselves turning down really strong applicants we would otherwise really love to hire. --- Peter Hurford, in the 2019 survey
The Life You Can Save’s Jon Behar agrees with Peter. He adds that it is not a lack of talent but a lack of money to add new staff that is the bottleneck for TLYCS.
Charity Entrepreneurship’s incubation program has grown from 140 applications to ~2000 applications for 15-20 positions since last year. It is plausibly not TC, this year at least.
Conclusion
An org is TC in talent X if it is not able to find “skilled” people despite “hiring actively”. So far we have seen that Open Phil, EAF, Rethink Charity, Charity Science, TLYCS and FHI are able to find the skilled people they need, except for one concrete example: Disentanglement Research at FHI (and possibly similar institutes). Contrary to the claims from 80k, it appears that several orgs are not TC.
I am really upset with 80k. First it was the excessive focus on career capital (CC) and positions like management consulting; now it is TC. Getting into the TC debate only opened a Pandora’s box of further issues. Recently, I discovered that their discussion of replaceability is plausibly wrong: they have gone back and forth[11] on it in the past and currently suggest that it depends. They ended up inflating the impact associated with working at EA orgs and have now taken it back. They severely downplayed how competitive it is to get jobs at EA orgs[12]. And there are so many cases[13] of people who feel the same way, not without reason. I traveled with 80k on the CC hype and spent months identifying positions of “maximum CC”[14]. Then I did a 1-year Data Science course on Coursera. After that I jumped onto the work-at-an-EA-org-because-TC hype and was just about to upskill in statistics and apply for GR positions, because they need me.
So many crucial mistakes, costing people like me and others[13:1] a lot of time, and the world a “lot of” dollars. And when someone asks a member of 80k not to serve only the elite, and perhaps to invest in a small conversation with non-elite EAs to save them years of wasted time, there is no reply.
Thus, I find it very hard to trust the claims 80k makes. There are so many of them in every post that it is impractical to verify each one. Rather than relying on interpreting their English[15] and on advice generalized for everyone, I find the EA Forum a much easier place to get information, challenge claims and get responses (quickly). I found most of the evidence against TC, including the Pandora’s box of issues, there. A lot of successful people from the EA world seem approachable there through chats, comments and AMAs. Recently I was able to chat with Ben West, Aaron Gertler, Peter Hurford, Jon Behar, Jeff Kaufman and Stefan Torges. Even bigger names can be seen commenting on posts, such as Carrick Flynn and 80k’s very own Rob Wiblin.
Final message
Caution: just because an org is not TC doesn’t mean you should reject that org.
Why is this debate so important?
Whether an org is TC or not has implications for the impact you make. Your true (counterfactual) impact in a job at an EA org is much higher when the job is TC than when it is not. A junior GR at GiveWell is expected to move $2.4m if the job is TC. The same GR is expected to move only ~$244k if the hired GR is merely 10% better than the next-best candidate (not TC). Such is the difference between being TC and not.
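To make the arithmetic explicit, here is a minimal sketch of how I reconstruct those two numbers (the ~$2.4m figure and the 10% margin come from the example above; treating “money moved” as the full measure of impact is my simplifying assumption):

```python
# Reconstruction of the TC vs. not-TC impact example (simplified model).
# If the job is TC, the role would otherwise go unfilled, so the hire's
# entire money moved counts as counterfactual impact. If the job is not TC,
# only the hire's edge over the next-best candidate counts.

money_moved = 2_440_000        # $ moved by a junior GR (assumed; ~$2.4m above)
edge_over_next_best = 0.10     # hired GR is 10% better than the runner-up

impact_if_tc = money_moved                            # whole amount is counterfactual
impact_if_not_tc = money_moved * edge_over_next_best  # only the 10% edge counts

print(f"impact if TC:     ${impact_if_tc:,.0f}")      # $2,440,000
print(f"impact if not TC: ${impact_if_not_tc:,.0f}")  # $244,000
```

On this simple model, the same hire is worth roughly 10x more when the role is genuinely TC, which is why the label matters so much for career decisions.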
The above example assumes no spillover effects. But is that correct? Why would there be no spillover? Should I work in EA or not? How much value do people really get out of working at an EA org? What is the best path for my aspiring EA career?
Stay tuned...
Footnotes
[1] I approached them with questions on DS, and they informed me that they don’t give advice over email. I applied for coaching and didn’t make the cut. What did I ask?
Do I gain sufficient skills to migrate to direct work (say, analyst at GiveWell) after having worked in management consulting (M.C.) for 5 years?
This is in case things don’t seem to work out towards becoming a partner.
Are there examples of high-impact direct workers who came from M.C.?
I would like to scan their profiles to get a feel for what is possible.
[2]
[3] If, on the other hand, 80k suggested that TC includes everything that makes it hard to find people with specific skill sets (such as a hiring bottleneck), then TC is quite the misnomer. Joel from the EA Forum puts it well:
I could be mistaken, but it would seem odd to say you’re “funding constrained” but can’t use more funding at the moment. Whereas we are saying orgs are “talent constrained” but can’t make use of available talent… I feel a “talent bottleneck” implies an insufficient supply of talent/applicants, which doesn’t seem to be the case. I guess it’s more that there’s insufficient talent actually working on the problems, but it’s not a matter of supply, so it’s more of a “hiring bottleneck” or an “organizational capacity bottleneck”.---Joel EA Forum
[4] The 2018 survey includes:
80,000 Hours (3), AI Impacts (1), Animal Charity Evaluators (2), Center for Applied Rationality (2), Centre for Effective Altruism (2), Centre for the Study of Existential Risk (1), Berkeley Center for Human-Compatible AI (1), Charity Science: Health (1), DeepMind (1), Foundational Research Institute (2), Future of Humanity Institute (2), GiveWell (1), Global Priorities Institute (2), LessWrong (1), Machine Intelligence Research Institute (1), Open Philanthropy Project (4), OpenAI (1), Rethink Charity (2), Sentience Institute (1), SparkWave (1), and Other (5)
[5] Funding constrained:
1 = how much things cost is never a practical limiting factor for you; 5 = you are considering shrinking to avoid running out of money
Talent constrained:
1 = you could hire many outstanding candidates who want to work at your org if you chose that approach, or had the capacity to absorb them, or had the money; 5 = you can’t get any of the people you need to grow, or you are losing the good people you have
[6] 80k about GPR: “To make this happen, perhaps the biggest need right now is to find more researchers able to make progress on the key questions of the field. There is already enough funding available to hire more people if they could demonstrate potential in the area (though there’s a greater need for funding than with AI safety)”
“Another bottleneck to progress on global priorities research might be operations staff, as discussed earlier, so that’s another option to consider if you want to work on this issue.”
[7] “These positions are both our own assessment and backed up by results of our surveys of community leaders about talent constraints, skill needs and key bottlenecks.”
[8] “What skills are the organizations most short of?”
[9] There are several talents listed in the surveys which I don’t understand and have no examples for. For instance: “communications other than marketing and movement building”, “high level knowledge and enthusiasm about effective altruism” and “broad general knowledge about many relevant topics”. Some of the other “talents” mentioned seem too general. When I think of “one-on-one social skills”, it could refer to anything: policy people talking to politicians, career counselors convincing people to change their career path, or even people on the front line of fundraising. If the surveyors wanted to inform the community that frontline fundraisers with “good social skills” (whatever that means) are required, then saying exactly that in the survey would be far more helpful than what they have currently done. Contrast this with talents such as GR or operations, where it is clear what is meant: for GR I can think of researchers at Open Phil or GiveWell; for operations I think of Tara from FHI.
[10] … the following list for the SF Bay Area: 80,000 Hours (SF and Oxford), GiveWell (San Francisco), Open Philanthropy Project (San Francisco), OpenAI (SF), MIRI (Berkeley), Center for Applied Rationality (Berkeley), AI Impacts (Berkeley), Animal Charity Evaluators (Berkeley, without any office space), Animal Equality (US, UK).
The following for the UK: Centre for the Study of Existential Risk (Cambridge), Future of Humanity Institute (Oxford), Global Priorities Institute (Oxford), Sentience Institute (London), Giving What We Can (Oxford), Founders Pledge (London), Centre for Effective Altruism (Oxford), Against Malaria Foundation (St Albans), Sightsavers (UK).
The following for other areas: Evidence Action (Washington, D.C.), Helen Keller International (Washington, D.C.), GiveDirectly (NYC), Poverty Action Lab (Cambridge, MA, US), Good Food Institute (Washington, D.C., US), Center for Global Development (Washington, D.C., US).
So for the EA community, the clustering does happen in the UK (Oxford, London) and the San Francisco Bay Area. Not surprisingly, these are the only regions that host the annual EA conferences.
[11] 2012: While talking about doctors, aid workers and campaigners, they suggest: “That’s because careers that are normally thought to be ethical tend to be extremely competitive. That means that if you don’t take the job, someone else will take your place.”
2014: They went on to suggest that replaceability might not be as important as you might think, and in 2017 they continued to promote that idea in “Working at effective altruist organizations”.
2019: In “How replaceable are top candidates in large hiring rounds”, they suggest that it depends on the distribution of candidate ability (log-normal or normal).
[12] Claim: “If you get involved in the community, and prove your interest and general competence, there’s a decent chance you’ll be able to find a role regardless of your qualifications and experience.”
Example: EA Applicant from the EA Forum.
He applied to 20 jobs and didn’t get a single one; neither did his friends with the characteristics described above. His profile seems to match the one in the claim.
Note: the claim says “decent chance”, not “for sure”. I give them that, although many people seem to interpret it differently.
[13] Links to posts where people were completely misinformed about how competitive the EA world is (look in the comments as well):
[14] I am not a big fan of these broad terminologies, as they don’t allow ME to act on them. For example: “The best ways to gain career capital (CC) are: work at a growing organisation that has a reputation for high performance; get a graduate degree; work in the tech sector; take a data science job; work in think tanks; make ‘good connections’; have runway; etc.” Literally everything under the sun.
I am unable to act on that. I could, in theory, pursue everything. I don’t know how to compare which option has higher or lower CC. The definition says: “CC puts you in a better position to make a difference in the future, including skills, connections, credentials and runway.” When I work in data science in a FAANG job, do I have higher CC than when I work on a computer science degree? I don’t know.
Economists routinely measure the impact of high-school dropout vs. high-school diploma vs. some years of college vs. undergrad degree vs. grad degree, in different fields, using variables like “median weekly earnings” or “lifetime earnings”. So when someone says “you need a degree to get ahead in life”, I can imagine what they mean: a $470 weekly wage increase. Whereas when someone says “a Computer Science PhD is good CC”, I am lost. Contrast that with saying “the best way to compare CC is by looking at earnings”. Then I could compare median earnings for a FAANG data science job vs. a PhD in computer science from, say, a top-20 university (based on my capability) and get ahead in life.
[15] “If you get involved in the community, and prove your interest and general competence, there’s a decent chance you’ll be able to find a role regardless of your qualifications and experience.” --- 80k
This seems to imply that people like EA Applicant should have gotten a job. But he didn’t. I think examples would make it much clearer what they mean. What does “decent chance” mean?
Comments
I’m really sad to hear how upset you are with 80,000 Hours and how you feel it has made it harder rather than easier to find a role in which you can have impact.
It’s a real challenge for us to decide whether to share our views or not publish them until we’re more certain and clear. We hope that by getting more information out there, we will let people make better decisions, but unfortunately we’re going to continue to be uncertain and unable to explain all our evidence, and our views will change over time. It’s useful to hear your feedback that we might be getting the tradeoff wrong. We’ve been trying to do a better job of communicating our uncertainty in the new key ideas series, for instance by releasing advice on how to read our advice.
Thank you for collecting together all this specific information about different organisations in EA. The question of whether the issues we focus on are ‘talent constrained’ or not (though I prefer not to use this term), is a complicated one. Unfortunately, I can’t give you a full response here, though I do hope to write about it more in the future.
I do just want to clarify that I do still believe that certain skill bottlenecks are very pressing in effective altruism. Here are a couple of additional points:
To be specific, I think it’s longtermist organisations that are most talent constrained. Global health and factory farming organisations are much more constrained by funding, relatively speaking (e.g. GiveWell’s top recommended charities could absorb ~$100m). I think this explains why organisations like TLYCS, Charity Science and Charity Entrepreneurship say they’re more funding constrained (and also, to some extent, Rethink Priorities, which does a significant fraction of its work in this area).
Even within longtermist and meta organisations, not *every* organisation is mainly skill-constrained, so you can find counterexamples, such as new organisations without much funding. This may also explain the difference between the average survey respondents and Rethink Priorities’ view.
It doesn’t seem to me that looking at whether lots of people applied to a job tells us much about how talent constrained an organisation is. Some successful applicants might have still been much better than others, or the organisations might have preferred to hire even more than they were able to.
Something else I think is relevant to the question of whether our top problem areas are talent constrained is that I think many community members should seek positions in government, academia and other existing institutions. These roles are all ‘talent constrained’, in the sense that hundreds of people could take these positions without the community needing to gain any additional funding. In particular, we think there is room for a significant number of people to take AI policy careers, as argued here.
There’s a lot more I’d like to say about all of these topics. I hope that gives at least a little more sense of how I’m thinking about this. Unfortunately, I’ve been focusing on responding to covid-19 so won’t be able to respond to questions. I want to reiterate though how sad it is to hear that someone has found our advice so unhelpful, not just because of the negative effect on you, but also on those you’re working to help. Thank you for taking the time to tell us, and I hope that we can continue to improve not only our advice, but also the clarity with which we express our degree of certainty in it and evidence for it.
Thank you for acknowledging this post. I very much appreciate your reply.
I really wish you would put more of your evidence out there, instead of sentences that merely summarize the evidence you have. “Another bottleneck to progress on GPR might be operations staff” (GPR, Key Ideas). Is it a bottleneck or is it not? I don’t know what to make of “might be”. If you presented the evidence behind such a conclusion, say in a footnote, I think it would be more useful. People could then draw conclusions for themselves.
I am glad you clarified that your TC position is focused on longtermist orgs. I only know of two cases where longtermist positions are TC: Disentanglement Research, as reported by Carrick Flynn in Sep 2017, and AI policy in the US (Jan 2019 article). It still stands that Open Phil seems not to be TC in GR. (“The pool of available talent is strong, … more than a hundred applicants had very strong resumes… but … (to) deploy this base of available talent is weak”)
I think what helps is to keep the TC debate focused on specific cases, and this can be done by providing evidence, as was done for AI policy in the US.
Claim: Average survey respondents feel more TC than RP because they have fewer funding needs than RP (and RP is “new”).
Example: Open Phil is an average survey respondent (I presume). Open Phil has funding. Yet Open Phil does not seem to feel TC in GR.
It looks like the example does not satisfy the claim, so now I don’t really know what you are talking about. I don’t have a single example of an org and a position that is skill-constrained in GPR research. I keep hearing you say that “research is the biggest need right now” (Key Ideas post), but when I look at Open Phil it doesn’t seem to be so: they are unable to absorb more researchers. So what exactly are you talking about?
You might wonder why I keep quoting the same Open Phil example like a parrot. That is because it is one of the few hiring rounds with public information. And asking orgs like FHI or Open Phil for more information on this, on dollars moved per researcher, or on replaceability does not seem to produce results, unfortunately.
The definition of TC is that an org is unable to find “skilled people” despite hiring actively. I agree that the number of people who applied is not a measure of TC. But the number of people remaining in the last round (after 4 other rounds) does seem to say something about whether orgs can find skilled people. Even setting that aside, when you look at what Open Phil says, I can’t imagine that they are TC in GR, given the number of people they thought had good resumes. In fact, it seems like a bad idea to push for GR research at Open Phil (GPR), considering replaceability at least. And the more I talk to people like Peter Hurford (about replaceability), the more I feel there is little point in being a GR.
About “successful applicants might have still been much better” (due to the potentially log-normal distribution of candidates’ ability): I would like one example of a case where this is true. I don’t think it is the case with Open Phil in GR, based on their hiring round.
Aaron raised this point as well. Yes, it is definitely possible that people would still be hired while the organization remains TC. That seems like a reasonable hypothesis, but it still needs evidence (at least one example) to support it, I think. Again, I don’t think it is the case with Open Phil in GR, based on their hiring round.
AI policy careers in the US seem to match the definition of TC: “…80,000 Hours has attended, speakers have lamented the government’s lack of expertise on AI, and noted the substantial demand for such expertise within government. For example, DoD’s new Joint AI Center alone is apparently looking to hire up to 200 people.” I didn’t know this before. It is so clear to me now that I have an example of what “a significant number of people” means. I wish the same were available for the other top problem areas.
Thanks for this.
Thank you very much for taking the time to respond. I very much appreciate it. I would really appreciate more evidence displayed for claims, and less generalization, in the 80k blog.
P.S.
If you already know that many opportunities are high-impact, I expect you have looked at the value contributed by several people, and factored in things like replaceability, before coming to that conclusion. Why not just publish it? Asking companies doesn’t seem practical, and no one seems to give out such information. One author even suggested that he could help only if I were writing an academic paper; otherwise he didn’t have time for it.
Thanks for the reminder of the EA Leaders Forum survey—I’d forgotten about that and was relying on the 2018 80k findings. A couple of minor comments/questions:
Isn’t TC in the movement just the aggregation of TC in relevant orgs and actors? There’s a tradeoff between specificity/concreteness and representativeness/generalisability, and for most purposes, the latter seems more useful to me?
Animal Advocacy Careers will be offering one-to-one advising soon. Before it is officially launched, people can sign up to express their interest here.
Hi Jamie,
Thank you for your comment.
Yes, it seems to be. All I wanted was to avoid a level of abstraction: “AI strategy is TC in DR” vs. “FHI is TC in DR”. I feel really confused thinking about the former. The latter is so concrete: I can test it; I can go in depth on that ONE EXAMPLE. The former is too broad. I find it easier to think in concrete examples.
Interesting! Would you be able to give me a real example that satisfies your claim? I claim that concreteness is useful to me: if I get an example, I hold on to it for dear life and test all claims against at least that one example.
Claim: Concreteness seems useful.
Example: Consider: “Many community members should seek positions in government, academia, and other existing institutions.”
I am lost. What is “MANY”? What does a “position in government” even look like? All this until I saw this beautiful example: “DoD’s new Joint AI Center alone is apparently looking to hire up to 200 people.” I finally understand what “many” and “position in government” mean.
That’s great. I have already subscribed. Thank you very much, Jamie.
<<Would you be able to give me a real example to satisfy your claim?>>
The difference here is probably whether an individual or an organisation (80k, AAC) is evaluating TC.
If, via some research, you have the ability to either 1) make claims about TC across a movement or a range of orgs, with moderate confidence, or 2) make claims about TC in one or two orgs, with higher confidence, an individual might opt for (2), as they can focus on the orgs they’re most interested in. But 80k/AAC would opt for (1), because the advice is useful to a larger number of people?
<<I am lost. What is “MANY”? What does a “position in government” even look like.>> Given that the ideal distribution of roles and applicants and how this compares to the current situation is only really one consideration among several important considerations that affect career decisions (i.e. it affects your comparative advantage), maybe a high level of precision isn’t that important?
TL;DR
I think we might be on the same page.
I think it is worth noting that in the abstract of your latest article you make a few claims such as “EAs are struggling to fill fundraising and operations roles”. But you also clearly think evidence is important: you have dedicated a whole article to a bunch of similar bottleneck claims, showing why you think the evidence is weak and explaining what that “weak evidence” is.
If you are saying you will make representative statements but also provide the evidence you have for them, then this discussion is moot (rendered unimportant by recent events). For me, evidence gives a way to understand just how much EAAs are “struggling”, and to quickly test it.
Claim: Representativeness for ~~most~~ certain purposes seems to be more useful than specificity/concreteness.
Example:
Discussion:
This doesn’t look like an example that satisfies the claim; at least, I am unable to see how it is “useful”. Plus, there is another claim inside the explanation: that this type of advice will be useful to a larger number of people. Instead, can you show me one actual “representative” statement that is “more useful” than its “concrete” alternative? In my previous reply, I believe I clarified with one example how “concreteness” overpowers “representativeness” in being “useful” when people read it.
And I don’t get what you mean by “the ideal distribution of roles compared to the current situation is an important consideration for career decisions”. Are you trying to say that looking at one example might not be useful, as it is somehow not precise, and that we should rather be happy with general statements? Do you have an example to show what you mean?
Thanks.
Slight correction: I spent somewhere between 60 and 90 minutes engaging with this post.
I corrected it. Thanks.
Are there claims that you find to be false? Please post evidence as well.
If you find formatting issues, please state them here:
Do you know of actual TC positions? Can you please cite your source?
I’m currently doing some research for Animal Advocacy Careers on specific skill types in animal advocacy, which will be posted in forthcoming “skills profiles”. An example from my draft report on fundraising roles is below. Feedback very welcome! (Obviously this is an unusual case, in that it’s a talent constraint directly relating to funding constraints.)
In our short initial survey and interviews with 12 CEOs and hiring professionals from 9 of the “top” or “standout” charities currently or formerly recommended by Animal Charity Evaluators, 5 respondents selected “fundraising experience” as one of up to 6 skills (out of 25 options) that their organisation most needed; this was the second most frequently selected option, after “management”.
2 out of 10 respondents to the same survey mentioned fundraising roles as being “the hardest to fill.”
In our “spot-check” [note, this is forthcoming research, which will likely be released within a week] of current roles and advertised roles at 27 animal advocacy nonprofits, fundraising was the skillset that was most notably overrepresented in animal advocacy job adverts (appearing to be important in 17% of identified job ads) relative to the number of current roles in the movement (appearing to be important in 10% of current roles); this may imply that these roles are unusually hard to fill and that fundraising expertise is undersupplied in the community, relative to its needs. As discussed in our blog post on the spot-check, however, this research provides only very weak evidence on the question of what the movement’s greatest bottlenecks are.
There is evidence from a 2013 report that senior fundraisers are difficult to hire in US nonprofits generally. This makes it seem more likely that animal advocacy nonprofits face the same difficulty.
The same report found evidence that smaller nonprofits may struggle to attract the most experienced fundraisers. Given that many animal advocacy organisations have small budgets, this provides another reason to expect that animal advocacy organisations will struggle to hire fundraisers, though this is only very weak evidence that this is a bottleneck for the movement.
This is interesting and I look forward to reading more.
A more negative reading of this information would suggest that the issue may not be a lack of fundraising skill within the organizations, but rather that many of the interviewed ACE-selected charities don’t get the funding they want because most people, or the donors the charities care about, don’t agree with ACE’s or the CEOs’ self-assessments that the charities are worth funding. That is, these folks may not donate for reasons having to do with the organizations themselves, not because of a lack of relationship building, marketing, etc.
It’s a different sort of concern and suggests a different line of research inquiry, but may be worth keeping in the back of one’s mind.
Thanks for the input! If the above bullet points were evidence of funding constraints, then this “more negative reading” would be a plausible alternative explanation. But I’m not following how the above bullet points could be read in this way. Apologies if I’m missing something.
Are you thinking this applies to all 5 of the above bullet points? Or specific bullet points within that group?
I find this very hard to understand. My understanding is that 17% of “identified job ads” were related to fundraising. I don’t get the next part, where you talk about 10% of current roles.
I get that fundraising is “over-represented” in animal advocacy job ads, with 17% of ads mentioning it, but what are the percentages for the other skills? Without those, I think it is hard to say whether 17% is high or not, right? Or am I mistaken?
Very interesting report (especially the sample size of ~2000 nonprofits). Looking at the sample, it appears that only 1% of the 2000-odd organizations were from “philanthropy, volunteerism and grantmaking”, while the largest categories were human services, educational institutions, and arts/culture/humanities. I think this could really skew the results. Your thoughts?
Claim: Smaller nonprofits ~~struggle to find~~ have fewer of the most experienced fundraisers.
Evidence:
Development directors (DDs) with no experience, split by salary:
- 8% for salaries > $50k
- 23% for salaries < $50k
This evidence confuses me when I try to verify the claim, as it relates to small nonprofits only indirectly, through salary. The following causes less confusion:
Prospective donor research:
- 24% have no experience, among DDs in general
- 32% have no experience, among DDs at small nonprofits
- 32.25% have no experience, among DDs at non-small nonprofits (back-calculated)
Securing gifts:
- 26% have no experience, among DDs in general
- 38% have no experience, among DDs at small nonprofits
- 25% have no experience, among DDs at non-small nonprofits (back-calculated)
I am now concerned by the wording “struggling”. This doesn’t seem too bad: smaller nonprofits seem to have fewer experienced staff, but are they “struggling”? I am not sure. As a result, this seems like weak evidence for the bottleneck claims, as you say. Am I mistaken?
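As an aside, here is a minimal sketch of the back-calculation I attempted above (the sketch is mine; the share of small nonprofits in the sample, p below, is an assumption, since I don’t have the report’s exact split, and the result is quite sensitive to it):

```python
# Back-calculating the "no experience" rate for non-small nonprofits from the
# overall rate and the small-nonprofit rate, assuming a simple mixture:
#   overall = p * small + (1 - p) * non_small
# which rearranges to:
#   non_small = (overall - p * small) / (1 - p)

def non_small_rate(overall: float, small: float, p: float) -> float:
    """p is the assumed share of small nonprofits in the sample (0 < p < 1)."""
    return (overall - p * small) / (1 - p)

# "Securing gifts" figures from above: 26% overall, 38% at small nonprofits.
for p in (0.10, 0.25, 0.50):  # assumed shares, for illustration only
    print(f"p = {p:.2f} -> non-small rate = {non_small_rate(0.26, 0.38, p):.1%}")
# p = 0.10 -> 24.7%, p = 0.25 -> 22.0%, p = 0.50 -> 14.0%
```

One thing the formula makes clear: whenever the small-nonprofit rate is above the overall rate, the back-calculated non-small rate must come out below the overall rate, whatever value p takes.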
This is useful feedback. I might need to work on the wording.
I don’t think I agree with that—I think the important consideration is the number of identified advertised roles of a particular type relative to the number of identified currently filled roles of the same type. Not the number of advertised roles of type A relative to advertised roles of type B. But FWIW the full report is now published.
I agree it’s weak evidence; I think it’s the weakest of the 5 bullet points above. I find weak evidence useful.
Now I get what you were trying to say, I think. So you look at the ratio of the “percentage of fundraising in recent job ads” to the “percentage of fundraising in current roles”. That sounds like a smart proxy. Really interesting.
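To check my understanding, here is a minimal sketch of that proxy (the 17%/10% fundraising figures come from the spot-check above; the “operations” numbers are made up purely for illustration):

```python
# Over-representation proxy: a skill's share of current job ads relative to its
# share of currently filled roles. A ratio well above 1 hints that roles
# needing that skill are advertised more often than they are filled,
# i.e. a possible hiring bottleneck.

def overrepresentation(ads_share: float, current_share: float) -> float:
    """Ratio of a skill's share in job ads to its share in current roles."""
    return ads_share / current_share

# Fundraising, from the spot-check: 17% of job ads vs. 10% of current roles.
print(f"fundraising: {overrepresentation(0.17, 0.10):.2f}x")  # 1.70x

# A hypothetical comparison skill (made-up numbers):
print(f"operations:  {overrepresentation(0.12, 0.15):.2f}x")  # 0.80x
```

Since the comparison is within a single skill (its ads vs. its current roles), the other skills’ percentages aren’t strictly needed, which I think resolves my earlier question.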
Thanks.