Not sure I follow the part about how the kind of thing described in the original post makes you “more reluctant to introduce new people into the EA community.” There are lots of exciting things for EAs to do besides “apply to one or more of the 20 most competitive jobs at explicitly EA-motivated employers,” including “keep doing what you’re doing and engage with EA as an exciting hobby,” “apply to key positions in top-priority cause areas that are on the 80,000 Hours Job Board but aren’t at one of a handful of explicitly EA-motivated orgs,” and “earn to give for a while, gaining skills, and then maybe transition to more direct work later (or maybe not),” as well as other paths specific to particular priority causes. E.g. for AI strategy & policy, I’d be excited to see EAs (a) train up in ML, for later work in either AI safety or AI strategy/policy, (b) follow these paths into a US AI policy career (esp. for US citizens, and esp. now that CSET exists), and (c) train up as a cybersecurity expert (I hope to say more later about why this path should be especially exciting for AI-interested EAs; also, the worst that happens is that you’ll be in extremely high demand and highly paid).
A speculative thought I just had on one possible reason why some people are overly focussed on EA jobs relative to, e.g., the other options you list here:
Identifying one’s highest-impact career option is quite challenging, and there is no easy way to conclusively verify a candidate answer.
Therefore (and for other reasons), many people rely a lot on advice provided by 80K and individual EAs they regard as suitable advisors.
At least within the core of the (longtermist) EA community, almost all sources of advice agree that one of the most competitive jobs at an explicitly EA-motivated org is usually among the top options for people who are a good fit.
However, for most alternatives there is significant disagreement among the most trusted sources of advice on whether these alternatives are competitive (in terms of expected impact) with an ‘EA job’, or indeed good ideas at all. For example, someone who I believe many people consult for career advice discouraged me from ‘train up as a cybersecurity expert’, an option I had brought up (and, by my own impression, still consider attractive), at least relative to working at an EA org. Similarly, there are significant disagreements about the value of academic degrees, even in machine learning (and a bunch of hard-to-resolve underlying disagreements, e.g. about how much ML experience is essential or useful for AI safety and strategy).
As a result, people will often be faced with a distribution of views similar to: ‘Everyone agrees working at <EA org> would be great. Many people think a machine learning PhD would be great, and one or two even think it’s better for me specifically, but a significant minority thinks it’s useless. One person was excited about cybersecurity, one person was pessimistic, and most said they couldn’t comment on it.’ Perhaps if all of these opinions had been conveyed with maximal reasoning transparency, and one had been extremely careful about aggregating them, this wouldn’t be a problem. But in practice I think this often means that ‘apply to <EA org>’ seems like the top option, at least in terms of psychological pull.
(Another contributing factor to the large number of applications to EA jobs, though perhaps less so to how it affects people, may be that few EA orgs have a very explicit model of the specific skills they require for their most competitive jobs—at least that’s my impression. As a result, they cannot offer reliable guidance that people can use to decide whether they’re a good fit, apart from applying.)
Two additional possible reasons:
Many people in the EA community believe it is easier to get a job at an EA organisation than it really is. People working at EA organisations, sometimes in senior positions, were surprised when they heard I didn’t get an offer (from another organisation). I’d guess around half the organisations I applied to were “surprised about the very strong field of applicants”. Past messaging about talent constraints probably also plays a role. As a result, career advice in the EA community can be overly optimistic, to the point where more than one person seriously encouraged me to apply for the COO position at OpenPhil (a position which went to the person who led operations for Hillary Clinton’s election campaign(!)). At least a year ago, when I was talking to dozens of people for career advice, I got the impression that it should be comparatively easy to get hired by an EA organisation.
This one is weirdly specific and only a minor point (so this comment should not be misconstrued as “the two main reasons people apply for (too) many positions at EA organisations”). I don’t know if this applies to many people, but I got quite a few heavily personalised invitations to apply for positions. I think I *heavily* over-weighted these as evidence that I would have a good chance in the application process. By now I see these invitations as very weak evidence at best, but when I got my first ones, I thought they meant I was halfway there. This was of course naive (and of course I wouldn’t think it meant anything if I got a personal letter from a for-profit company). But I am not alone in that. I recently talked to a friend who said “By the way, I got a job offer now. Well, not really a job offer, but it is really close.” All they had gotten was a *very* personalised, well-written invitation to apply. But I would guess quite a few people had gotten one (me included). One easy way for EA organisations to avoid inducing this undue optimism would be to state transparently how many people they send personalised invitations to.
...
(PS: Your points 1 and 2 applied to me very much, but I didn’t get the impression that points 3-5 were the case (I didn’t think people consistently recommended EA orgs over other options).)
Thanks for sharing your comment about personalized invitations, that’s interesting. At Open Phil, almost all our personalized invitations (even to people we already knew well) were only lightly personalized. But perhaps a noticeable fraction of people misperceived that as “high chance you’ll get the job if you apply,” or something. The Open Phil RA hiring committee is discussing this issue now, so thanks for raising it.
It sounds like this issue is at least fairly straightforward to address: in subsequent rounds OpenPhil could just include a blurb that more explicitly clarifies how many people they’re sending emails to, or something similar.
(I’ll note that this is a bit above and beyond what I think they are obligated to do. I received an email from Facebook once suggesting I apply to their lengthy application process, and I’m not under any illusions this gave me more than a 5-10% chance of getting the job. But the EA world sort of feels like it’s supposed to be more personal, and I think including that sort of metadata makes for better overall information-and-resource-flow.)
FWIW: I think I know of another example along these lines, although only second hand.
“I didn’t think people consistently recommended EA orgs over other options”

Interesting, thank you for this data point. My speculation was partly based on recently having talked to people who told me something like “you’re the first one [or one of very few among many] who doesn’t clearly recommend that I choose <EA org> over <some other good option>”. It’s good to know that this isn’t what always happens.
I have quantitative data on that :-)
I asked 10 people for career advice in a semi-structured way (I sent them the same document listing my options and asked them to provide rankings). These were all people I would rank somewhere between “one of the top cause prioritization experts in the world” and “really, really knowledgeable about EA and very smart”.
6 out of 10 thought that research analyst at OpenPhil would be my best option. But after that, there was much less consensus on the second-best option (among my remaining three top options): 3.5 people rated management at an EA organisation highest, 3 rated biosecurity highest, and 3.5 rated an MSc in ML (with the aim of doing AI safety research) highest.
Of course, YOU were one of these ten people, so that might explain some of it :-).
I had many more informal discussions, and I didn’t think there was strong consensus either.
(Let me know if you need more data, I have many spreadsheets full of analysis waiting for you ;-) )
Sounds plausible. E.g. I’m pro “train up as a cybersecurity expert” but I know others have advised against.
In a nutshell, I’m worried that people would not find the options you list exciting from their perspective, and would instead perceive not working in one of the 20 most competitive jobs at explicitly EA-motivated employers as some kind of personal shortcoming, hence the frustration.
I think the OP is evidence that this can happen, e.g. because the author reports that

this is the message I felt I was getting from the EA community:

“Hey you! You know, all these ideas that you had about making the world a better place, like working for Doctors without Borders? They probably aren’t that great. The long-term future is what matters. And that is not funding constrained, so earning to give is kind of off the table as well. But the good news is, we really, really need people working on these things. We are so talent constraint… (20 applications later) … Yeah, when we said that we need people, we meant capable people. Not you. You suck.”
Note that I agree with you that in fact “[t]here are lots of exciting things for new EAs” including the options you’ve listed. However, even given this considered belief of mine, I think I was overly focussed on ‘EA jobs’ in a way that negatively affected my well-being.
Even accounting for my guess that I’m unusually susceptible to such psychological effects (though not extremely so; my crude guess would be ‘80th to 99th percentile’), I’d expect some others to be similarly affected even if, like me, they agree about the impact of less competitive options.
Perhaps by “the kind of thing described in the original post” you meant to refer specifically to the issue that people spend a lot of time applying for EA jobs. Certainly a lot of the information in the OP and in one of my comments was about this. In that case, I’d like to clarify that it’s not the time cost itself that’s the main cause of effects (i)-(iii) I described in the parent. In fact, I somewhat regret having contributed to the whole discussion perhaps becoming focused on time costs by providing more data exclusively about this. The core problem, as I see it, is how the OP, I, and I believe many others think about and are psychologically affected by the current EA job market and the surrounding messaging. The objective market conditions (e.g. the number of applicants for jobs) contribute to this, as do many aspects of messaging by EA orgs and EAs, as do things that have nothing to do with EA at all (e.g. people’s degree of neuroticism and other personality traits). I don’t have a strong view on which of these contributing factors is the best place to intervene.