One other thing I just noticed: looking at the list of 80k’s 10 priority paths found here, the first 6 (and arguably also #8: China specialist) are all roles for which the majority of existing jobs are within an EA bubble. On the one hand, this shows how well the EA community has done in creating important jobs; on the other, it highlights my concern about us steering people away from conventionally successful careers and engagement with non-EAs.
I actually don’t agree that the majority of roles for our first 6 priority paths are ‘within the EA bubble’: my view is that this is only true of ‘working in EA organisations’ and ‘operations management in EA organisations’. As a couple of examples: ‘AI policy research and implementation’ is, as you indicate, something that could be done at places like FHI or CSET. But it might also mean joining a think tank like the Center for a New American Security, the Belfer Center or RAND; or it could mean joining a government department. EA orgs are pretty clearly the minority in both our older and newer articles on AI policy. ‘Global priorities researcher’ in academia could be done at GPI (where I used to work), but could also be done as an independent academic, whether that simply means writing papers on relevant topics, or joining/building a research group like the Institute for Future Studies (https://www.iffs.se/en/) in Stockholm.
One thing that could be going on here is that the roles people in the EA community hear about within a priority path are skewed towards those at EA orgs. The job board is probably better than what people hear about by word of mouth in the community, but it still suffers from the same skew, which we’d like to work on reducing.
Thank you, this concrete analysis seems really useful for understanding where the perception of a skew toward EA organizations might be coming from.
Last year I talked to maybe 10 people over email, Skype, and at EA Global, both about what priority path to focus on and then about what to do within AI strategy. Based on that experience, your conjecture that word of mouth is more skewed toward jobs at EA orgs than the advice in 80K’s articles feels true, though not overwhelmingly so. I also got advice from several people specifically on standard PhD programs, and 80K was helpful in connecting me with some of these people, for which I’m grateful. However, my impression (which might be wrong/distorted) was that people who were themselves ‘in the core of the EA community’ (e.g. working at an EA org, as opposed to a PhD student who’s very into EA but living outside an EA hub) especially favored me working at EA organizations. It’s interesting that I recall few people saying this explicitly but have a pretty strong sense that this was their view implicitly, which may mean that this impression is driven by my guess about what is generally approved of within EA rather than by people’s actual views. It could even be a case of pluralistic ignorance (in which case public discussions/posts like this would be particularly useful).
Anyway, here are a few other hypotheses about what might contribute to a skew toward ‘EA jobs’ that’s stronger than what 80K literally recommends:
Number of people who meet the minimal bar for applying: Often, jobs recommended by 80K require specialized knowledge or skills, e.g. programming ability or speaking Chinese. By contrast, EA orgs seem to open a relatively large number of roles that roughly any smart undergraduate can apply for.
Convenience: If you’re the kind of person who naturally hears about, say, the Open Phil RA job posting, it’s quite convenient to actually apply there. It costs time, but for many people ‘just time’, as opposed to creativity or learning how to navigate an unfamiliar field or community. For example, I’m a mathematician who was educated in Germany and considered doing a PhD in political science in the US. It felt like I had to find out a large number of small pieces of information that someone familiar with the US education system or with political science would know naturally. The option also generally seemed scarier and less attractive because it was in ‘unfamiliar terrain’. Relatedly, it was much easier for me to talk to senior staff at EA organizations than to, say, a political science professor at a top US university. None of these felt like an impossible bar to overcome, but it definitely seemed to me that they skewed my overall strategy somewhat in favor of the ‘familiar’ EA space.

I generally felt that, given how much attention career choice gets in EA, I had surprisingly little support and readily available knowledge after I had decided to broadly “go into AI strategy” (a decision I feel my general familiarity with EA would have enabled me to reach anyway, and which was indeed my own best guess before I found out that many others agreed with it). NB, as I said, 80,000 Hours was definitely somewhat helpful even in this later stage, and it’s not clear to me whether you could feasibly have done more (e.g. clearly 80K cannot individually help everyone with my level of commitment and potential to figure out the details of how to execute their career plan). [I also suspect that I find things like figuring out the practicalities of how to get into a PhD program unusually hard/annoying, but more like 90th than 99th percentile.]

But maybe there’s something we can collectively do to help correct this bias; e.g. the suggestion of nurturing strong profession-specific EA networks seems like it would help with enabling EAs to enter those professions (as can research by 80K, e.g. your recent page on US AI policy). To the extent that telling most people to work on AI prevents the start of such networks, this seems like a cost to be aware of.
Advice for ‘EA jobs’ is more unequivocal; see this comment.