Some related half-baked thoughts:
[Epistemic status: I appreciate that there are people who’ve thought about the EA talent landscape systematically and have access to more comprehensive information, e.g. perhaps some people at 80K or people doing recruiting for major EA orgs. I would therefore place significantly more weight on their impressions. I’m not one of these people. My thoughts are based on (i) having talked 10-100 hours with other EAs about related things over the last year, mostly in a non-focussed way, (ii) having worked full-time for 2 EA organizations (3 if one counts a 6-week internship), (iii) having hired 1-5 people for various projects at the Effective Altruism Foundation, and (iv) having spent about 220h on changing my career last year (see another comment). I first heard of EA around October 2015 and have been involved in the community since April 2016. I spent most of that time in Berlin, then over last summer and since October in Oxford.]
I echo the impression that several people I’ve talked to (including myself) were or are overly focussed on finding a job at a major EA org. This applies both in terms of time spent and number of applications submitted, and in terms of fuzzier notions such as how much status or success is associated with such roles. I’m less sure whether I disagreed with these people about the actual impact of ‘EA jobs’ vs. the next best option, but it’s at least plausible to me that (relative to my own impression) some of them overvalue the relative impact of ‘EA jobs’. E.g. my own guess is that a machine learning graduate course is competitive with most ‘EA jobs’ that one could do well without such an education. [I think this last belief of mine is somewhat unusual, and at least some very thoughtful people in EA disagree with me about this.]
I think several people were in fact too optimistic about getting an ‘EA job’. It’s plausible they could have accessed information that would have made them more pessimistic (e.g. by doing a Fermi estimate of how many people would apply for a role), but I’m not sure.
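For concreteness, here is a minimal sketch of the kind of Fermi estimate I have in mind. Every number in it is an invented placeholder rather than real data about any org or applicant pool; the point is only the shape of the calculation.

```python
# Minimal Fermi sketch: rough odds of landing an advertised 'EA job'.
# Every number below is an invented placeholder, not real data.

seriously_interested_people = 2000   # guess: people considering direct EA work
share_applying_this_year = 0.2       # guess: fraction who actually apply in a given year
applications_per_applicant = 5       # guess: applications each applicant submits
advertised_roles_per_year = 30       # guess: openly advertised roles at major EA orgs

total_applications = (seriously_interested_people
                      * share_applying_this_year
                      * applications_per_applicant)                    # = 2000
applicants_per_role = total_applications / advertised_roles_per_year   # ~67

print(f"~{applicants_per_role:.0f} applications per role")
print(f"naive success rate per application: {1 / applicants_per_role:.1%}")
```

Under placeholder numbers like these, any single application looks like a long shot, which is the kind of information I suspect would have tempered some people’s optimism.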
I know at least 2 people who unsuccessfully applied to a large number of ‘EA jobs’. (I’m aware there are many more.) I feel confident that they have several highly impressive relevant skills, e.g. because I’ve seen some of their writing and/or their CVs. I’m aware that I don’t know the full distribution of their relevant skills, and that the people who made the hiring decisions are in a much better position to make them than I am. I’m still left with a subjective sense of “wow, these people are really impressive, and I find it surprising that they could not find a job”. This contributes to (i) me feeling more pressure to perform well in my current role, and more doubt about its counterfactual impact, because I have a visceral sense that ‘the next best candidate would have been about as good as I or better’ / ‘it would in some sense be tragic or unfair if I don’t perform well’ (these aren’t endorsed beliefs, but they still affect me), (ii) me being more reluctant to introduce new people into the EA community because I don’t want them to have frustrating experiences, and (iii) me being worried that some of my friends and other community members will have frustrating experiences [which costs attention and life satisfaction, but also sometimes time, e.g. when talking with someone about their frustration; as an aside, I’d guess that the burden of emotional labor of the latter kind is disproportionately shouldered by relatively junior women in the community]. (None of these effects are very large. I don’t want to make this sound more dramatic than it is, but overall I think there are non-negligible costs even for someone like me who got one of the competitive jobs.)
I agree that identifying and promoting impactful roles outside of EA orgs may be both helpful for the ‘EA job market’ and impactful independently. I really like that the 80K job board sometimes includes such roles. I wonder if there is a diffusion of responsibility problem where identifying such jobs is no-one’s main goal and therefore doesn’t get done even if it would be valuable. [I also appreciate that this is really hard and costs a lot of time, and what I perceive to be 80K’s strategy on this, i.e. focussing on in-depth exploration of particularly valuable paths such as US AI policy, seems on the right track to me.]
I think communication around this is really hard in general, and particularly tricky for people who, like me and most EAs, are young and have little experience with similar situations. I also think there are some unavoidable trade-offs between causing frustration and increasing the expected quality of applicants for important roles. I applaud 80K for having listened to concerns around this in the past and for having taken steps such as publishing a clarifying article on ‘talent constraints’. I think as a community we can still do better, but I’m optimistic that the relevant actors will be able to do so, and certain that they have good intentions. I’ve seen EA leaders have valuable and important conversations around this, but it’s not quite clear to me whether anyone in particular ‘owns’ optimizing the EA talent landscape at large, so again I wonder if there is a diffusion of responsibility issue that prevents ‘easy wins’, such as better data/feedback collection, from getting done (while also being open to the possibility that ‘optimizing the EA talent landscape’ is too broad or fuzzy for one person to focus on).
Not sure I follow the part about how the kind of thing described in the original post makes you “more reluctant to introduce new people into the EA community.” There are lots of exciting things for EAs to do besides “apply to one or more of the 20 most competitive jobs at explicitly EA-motivated employers,” including “keep doing what you’re doing and engage with EA as an exciting hobby,” “apply to key positions in top-priority cause areas that are on the 80,000 Hours Job Board but aren’t at one of a handful of explicitly EA-motivated orgs,” and “earn to give for a while while gaining skills, and then maybe transition to more direct work later, or maybe not,” as well as other paths that are specific to particular priority causes. E.g. for AI strategy & policy I’d be excited to see EAs (a) train up in ML, for later work in either AI safety or AI strategy/policy, (b) follow these paths into a US AI policy career (esp. for US citizens, and esp. now that CSET exists), and (c) train up as a cybersecurity expert (I hope to say more later about why this path should be especially exciting for AI-interested EAs; also, the worst that can happen is that you’ll be in extremely high demand and highly paid).
A speculative thought I just had on one possible reason why some people are overly focussed on EA jobs relative to, e.g., the other options you list here:
Identifying one’s highest-impact career option is quite challenging, and there is no easy way to conclusively verify a candidate answer.
Therefore (and for other reasons), many people rely a lot on advice provided by 80K and individual EAs they regard as suitable advisors.
At least within the core of the (longtermist) EA community, almost all sources of advice agree that one of the most competitive jobs at an explicitly EA-motivated org usually is among the top options for people who are a good fit.
However, for most alternatives there is significant disagreement among the most trusted sources of advice on whether these alternatives are competitive (in terms of expected impact) with an ‘EA job’, or indeed good ideas at all. For example, someone who I believe many people consult for career advice discouraged me from ‘training up as a cybersecurity expert’ (an option I had brought up and, by my own impression, still consider attractive), at least relative to working at an EA org. Similarly, there are significant disagreements about the value of academic degrees, even in machine learning (and a bunch of hard-to-resolve underlying disagreements, e.g. about how much ML experience is essential or useful for AI safety and strategy).
As a result, people will often be faced with a distribution of views similar to: ‘Everyone agrees working at <EA org> would be great. Many people think a machine learning PhD would be great, one or two even think it’s better for me specifically, but a significant minority thinks it’s useless. One person was excited about cybersecurity, one person was pessimistic, and most said they couldn’t comment on it.’ Perhaps if all of these opinions had been conveyed with maximal reasoning transparency, and if one were extremely careful about aggregating them, this wouldn’t be a problem. But in practice I think this often means that ‘apply to <EA org>’ seems like the top option, at least in terms of psychological pull.
(Another contributing factor to the large number of applications to EA jobs, though perhaps less so to how it affects people, may be that few EA orgs have a very explicit model of the specific skills they require for their most competitive jobs; at least that’s my impression. As a result, they cannot offer reliable guidance that people could use to decide whether they’re a good fit, other than applying.)
Two additional possible reasons:
Many people in the EA community believe it is easier to get a job at an EA organisation than it really is. People working at EA organisations, sometimes in senior positions, were surprised when they heard I didn’t get an offer (from another organisation). I’d guess around half the organisations I applied to were “surprised about the very strong field of applicants”. Past messaging about talent constraints probably also plays a role. As a result, career advice in the EA community can be overly optimistic, to the point where more than one person seriously encouraged me to apply for the COO position at OpenPhil (a position which went to the person who led operations for Hillary Clinton’s election campaign(!)). At least a year ago, when I was talking to dozens of people for career advice, I got the impression that it should be comparatively easy to get hired by an EA organisation.
This one is weirdly specific and only a minor point (so this comment should not be misconstrued as “the two main reasons people apply for (too) many positions at EA organisations”). I don’t know if this applies to many people, but I got quite a few heavily personalised invitations to apply for positions. I think I *heavily* over-weighted these as evidence that I would have a good chance in the application process. By now I see these invitations as very weak evidence at best, but when I got my first ones, I thought it meant I was halfway there. This was of course naive (and of course I wouldn’t think it meant anything if I got a personal letter from a for-profit company). But I am not alone in that. I recently talked to a friend who said “By the way, I got a job offer now. Well, not really a job offer, but it is really close.” All they had gotten was a *very* personalised, well-written invitation to apply. But I would guess quite a few people had gotten one (me included). One easy way for EA organisations to avoid inducing this undue optimism would be to state transparently how many people they send personalised invitations to.
...
(PS: Your points 1 and 2 applied to me very much, but I didn’t get the impression that points 3-5 were the case (I didn’t think people consistently recommended EA orgs over other options).)
Thanks for sharing your comment about personalized invitations, that’s interesting. At Open Phil, almost all our personalized invitations (even to people we already knew well) were only lightly personalized. But perhaps a noticeable fraction of people misperceived that as “high chance you’ll get the job if you apply,” or something. The Open Phil RA hiring committee is discussing this issue now, so thanks for raising it.
It sounds like this issue is at least fairly straightforward to address: in subsequent rounds OpenPhil could just include a blurb that more explicitly clarifies how many people they’re sending emails to, or something similar.
(I’ll note that this is a bit above and beyond what I think they are obligated to do. I received an email from Facebook once suggesting I apply to their lengthy application process, and I’m not under any illusions that this gave me more than a 5-10% chance of getting the job. But the EA world sort of feels like it’s supposed to be more personal, and I think it would make for better overall information-and-resource-flow to include that sort of metadata.)
FWIW: I think I know of another example along these lines, although only second hand.
I didn’t think people consistently recommended EA orgs over other options
Interesting, thank you for this data point. My speculation was partly based on recently having talked to people who told me something like “you’re the first one [or one of very few among many] who doesn’t clearly recommend that I choose <EA org> over <some other good option>”. It’s good to know that this isn’t what always happens.
I have quantitative data on that :-)
I asked 10 people for career advice in a semi-structured way (I sent them the same document listing my options and asked them to provide rankings). These were all people I would rank somewhere between “one of the top cause prioritization experts in the world” and “really, really knowledgeable about EA and very smart”.
6 out of 10 thought that research analyst at OpenPhil would be my best option. But after that, there was much less consensus on the second-best option (among my remaining three top options): 3.5 people rated management at an EA organisation highest, 3 rated biosecurity highest, and 3.5 rated an MSc in ML (with the aim of doing AI safety research) highest (see the sketch after this comment for one way such half counts can arise).
Of course, YOU were one of these ten people, so that might explain some of it :-).
I had many more informal discussions, and I didn’t think there was strong consensus either.
(Let me know if you need more data, I have many spreadsheets full of analysis waiting for you ;-) )
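As an aside on the ‘half person’ counts above, here is a minimal sketch of one way such fractional tallies can arise, namely when an advisor rates two options equally and their vote is split between them. The rankings in the code are invented placeholders, and the tallying rule is an assumption for illustration, not necessarily how my numbers were produced.

```python
from collections import defaultdict

# One way "3.5 people rated X highest" can arise: an advisor who rates two
# options equally contributes half a vote to each. The data below are
# invented placeholders, not the actual rankings I received.

top_choices_by_advisor = [
    ["management at an EA org"],
    ["management at an EA org"],
    ["management at an EA org"],
    ["management at an EA org", "MSc in ML"],   # a tie: 0.5 votes each
    ["biosecurity"],
    ["biosecurity"],
    ["biosecurity"],
    ["MSc in ML"],
    ["MSc in ML"],
    ["MSc in ML"],
]

tally = defaultdict(float)
for choices in top_choices_by_advisor:
    for option in choices:
        tally[option] += 1 / len(choices)   # split one vote across tied options

for option, votes in sorted(tally.items(), key=lambda kv: -kv[1]):
    print(f"{option}: {votes}")
```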
Sounds plausible. E.g. I’m pro “train up as a cybersecurity expert” but I know others have advised against.
In a nutshell, I’m worried that these people would not find the options you list exciting from their perspective, and would instead perceive not working in one of the 20 most competitive jobs at explicitly EA-motivated employers as some kind of personal shortcoming; hence the frustration.
I think the OP is evidence that this can happen, e.g. because the author reports that
this is the message I felt I was getting from the EA community:
“Hey you! You know, all these ideas that you had about making the world a better place, like working for Doctors without Borders? They probably aren’t that great. The long-term future is what matters. And that is not funding constrained, so earning to give is kind of off the table as well. But the good news is, we really, really need people working on these things. We are so talent constraint… (20 applications later) … Yeah, when we said that we need people, we meant capable people. Not you. You suck.”
Note that I agree with you that in fact “[t]here are lots of exciting things for new EAs” including the options you’ve listed. However, even given this considered belief of mine, I think I was overly focussed on ‘EA jobs’ in a way that negatively affected my well-being.
Even accounting for my guess that I’m unusually susceptible to such psychological effects (though not extremely so; my crude guess would be ‘80th to 99th percentile’), I’d expect some others to be similarly affected even if they agree, as I do, about the impact of less competitive options.
Perhaps with “the kind of thing described in the original post” you meant to refer specifically to the issue ‘people spend a lot of time applying for EA jobs’. Certainly a lot of the information in the OP and in one of my comments was about this. In that case I’d like to clarify that it’s not the time cost itself that’s the main cause of effects (i)-(iii) I described in the parent. In fact, I somewhat regret having contributed to the whole discussion perhaps becoming focused on time costs by providing more data exclusively about this. The core problem as I see it is how the OP, I, and I believe many others think about and are psychologically affected by the current EA job market and the surrounding messaging. The objective market conditions (e.g. the number of applicants per job) contribute to this, as do many aspects of messaging by EA orgs and EAs, as do things that have nothing to do with EA at all (e.g. people’s degree of neuroticism and other personality traits). I don’t have a strong view on which of these contributing factors is the best place to intervene.