A speculative thought I just had on one possible reason why some people are overly focussed on EA jobs relative to, e.g., the other options you list here:
Identifying one’s highest-impact career option is quite challenging, and there is no easy way to conclusively verify a candidate answer.
Therefore (and for other reasons), many people rely a lot on advice provided by 80K and individual EAs they regard as suitable advisors.
At least within the core of the (longtermist) EA community, almost all sources of advice agree that one of the most competitive jobs at an explicitly EA-motivated org usually is among the top options for people who are a good fit.
However, for most alternatives there is significant disagreement among the most trusted sources of advice on whether they are competitive (in terms of expected impact) with an ‘EA job’, or indeed good ideas at all. For example, someone whom I believe many people consult for career advice discouraged me from ‘train up as a cybersecurity expert’ (an option I had brought up, and one I still consider attractive by my own impression), at least relative to working at an EA org. Similarly, there are significant disagreements about the value of academic degrees, even in machine learning (and a bunch of hard-to-resolve underlying disagreements, e.g. about how much ML experience is essential or useful for AI safety and strategy).
As a result, people will often be faced with a distribution of views similar to: ‘Everyone agrees working at <EA org> would be great. Many people think a machine learning PhD would be great, and one or two even think it’s better for me specifically, but a significant minority thinks it’s useless. One person was excited about cybersecurity, one person was pessimistic, and most said they couldn’t comment on it.’ Perhaps if all of these opinions had been conveyed with maximal reasoning transparency, and if one were extremely careful about aggregating them, this wouldn’t be a problem. But in practice I think this often means that ‘apply to <EA org>’ seems like the top option, at least in terms of psychological pull.
(Another contributing factor to the large number of applications to EA jobs, though perhaps less so to how it affects people, may be that few EA orgs have a very explicit model of the specific skills they require for their most competitive jobs—at least that’s my impression. As a result, they cannot offer reliable guidance people can use to decide whether they’re a good fit, apart from applying.)
Two additional possible reasons:
Many people in the EA community believe it is easier to get a job at an EA organisation than it really is. People working at EA organisations, sometimes in senior positions, were surprised when they heard I didn’t get an offer (from another organisation). I’d guess around half the organisations I applied to were “surprised about the very strong field of applicants”. Past messaging about talent constraints probably also plays a role. As a result, career advice in the EA community can be overly optimistic, to the point where more than one person seriously encouraged me to apply for the COO position at OpenPhil (a position which went to the person who led operations for Hillary Clinton’s election campaign(!)). At least a year ago, when I was talking to dozens of people for career advice, I got the impression that it should be comparatively easy to get hired by an EA organisation.
This one is weirdly specific and only a minor point (so this comment should not be misconstrued as “the two main reasons people apply for (too) many positions at EA organisations”). I don’t know if this applies to many people, but I got quite a few heavily personalised invitations to apply for positions. I think I *heavily* over-weighted these as evidence that I would have a good chance in the application process. By now I see these invitations as very weak evidence at best, but when I got my first ones, I thought they meant I was halfway there. This was of course naive (and of course I wouldn’t think it meant anything if I got a personal letter from a for-profit company). But I am not alone in that. I recently talked to a friend who said “By the way, I got a job offer now. Well, not really a job offer, but it is really close.” All they had gotten was a *very* personalised, well-written invitation to apply. And I would guess quite a few people had gotten one (me included). One easy way for EA organisations to avoid inducing this undue optimism would be to transparently state how many people they send personalised invitations to.
...
(PS: Your points 1 and 2 applied to me very much, but I didn’t get the impression that points 3–5 were the case (I didn’t think people consistently recommended EA orgs over other options).)
Thanks for sharing your comment about personalized invitations, that’s interesting. At Open Phil, almost all our personalized invitations (even to people we already knew well) were only lightly personalized. But perhaps a noticeable fraction of people misperceived that as “high chance you’ll get the job if you apply,” or something. The Open Phil RA hiring committee is discussing this issue now, so thanks for raising it.
It sounds like this issue is at least fairly straightforward to address: in subsequent rounds OpenPhil could just include a blurb that more explicitly clarifies how many people they’re sending emails to, or something similar.
(I’ll note that this is a bit above and beyond what I think they are obligated to do. I received an email from Facebook once suggesting I apply to their lengthy application process, and I’m not under any illusions this gave me more than a 5–10% chance of getting the job. But the EA world sort of feels like it’s supposed to be more personal, and I think it’d make for better overall information-and-resource-flow to include that sort of metadata.)
FWIW: I think I know of another example along these lines, although only second hand.
I didn’t think people consistently recommended EA orgs over other options
Interesting, thank you for this data point. My speculation was partly based on recently having talked to people who told me something like “you’re the first one [or one of very few among many] who doesn’t clearly recommend choosing <EA org> over <some other good option>”. It’s good to know that this isn’t what always happens.
I have quantitative data on that :-)
I asked 10 people for career advice in a semi-structured way (I sent them the same document listing my options and asked them to provide rankings). These were all people I would say rank somewhere between “one of the top cause prioritization experts in the world” and “really, really knowledgeable about EA and very smart”.
6 out of 10 thought that research analyst at OpenPhil would be my best option. But after that, there was much less consensus on the second-best option (among my remaining three top options): 3.5 people rated management at an EA organisation highest, 3 rated biosecurity highest, and 3.5 rated an MSc in ML (with the aim of doing AI safety research) highest (the half votes reflect ties between two options).
Of course, YOU were one of these ten people, so that might explain some of it :-).
I had many more informal discussions, and I didn’t think there was strong consensus either.
(Let me know if you need more data, I have many spreadsheets full of analysis waiting for you ;-) )
Sounds plausible. E.g. I’m pro “train up as a cybersecurity expert”, but I know others have advised against it.