This topic looks really interesting to me as a mid-career person who would prefer to work on something that improves the world rather than something that is in the end driven by profit. The post gives me the impression that there are lots of interesting research-y jobs out there that actually try to improve the world without requiring a big financial sacrifice and for which people without the exact skills needed for the job might still have a chance.
This raises a big “too good to be true” red flag for me. There are lots of people who would prefer to do research rather than some random commercial job, especially if it were on a topic relevant to improving the world. Witness the armies of people who want a research job in academia despite the high pressure and mediocre pay. If these jobs are really as good as this post describes, I would expect lots of people to be trying to get in, and thus that it wouldn’t be worth the effort for me to apply without exactly the right skills.
So, what’s the catch? Or what am I missing here?
Hmm, the short answer is that job markets aren’t necessarily efficient, so if it seems too good to be true for you, it might just be a really good option for you!
The longer answer is that the set of tradeoffs common in EA work may well sound appealing to you, but it isn’t necessarily that appealing to other people. Some quick things that might make EA work less appealing for many people (especially when compared to academia):
The set of possible actions is vast; the subset of optimal actions is tiny.
Most of my EA-adjacent friends in academia do work that they think of as extremely interesting. In contrast, EA work necessarily (at least in theory) filters heavily on impact, and it’s unlikely that the same research questions will be both extremely interesting and extremely impactful.
So from an academic perspective, giving up intellectual freedom to do impactful work is often a huge sacrifice.
On the flip side, if you have the type of psychology that naturally finds (e.g.) corrigibility in AI alignment or timelines for alternative proteins maximally interesting, then this may not look like a sacrifice to you at all!
More realistically, most of us reorient ourselves to make impact itself seem interesting.
It’s harder to get external prestige for doing impactful EA work (though maybe this is changing).
Compared to academia or the corporate world, there just isn’t the same system of citations, promotions, etc. that makes your work externally legible.
There’s a lot of responsibility in EA work, and this can be stressful or emotionally hard to deal with.
I think this is a good question and there are a few answers to it.
One is that many of these jobs only look like they check the “improving the world” box if you have fairly unusual views. There aren’t many people in the world for whom e.g. “doing research to prevent future AI systems from killing us all” tracks as an altruistic activity. It’s interesting to look at this (somewhat old) estimate of how many EAs even exist.
Another is that many of the roles discussed here aren’t research-y roles (e.g. the biosecurity projects require entrepreneurship, not research).
Another is that the type of research involved (when the roles are in fact research roles) is often difficult, messy, and unrewarding. AI alignment, for instance, is a pre-paradigmatic field. The problem statement has no formal definition. The objects of study (broadly superhuman AI systems) don’t yet exist and therefore can’t be experimented upon. In academia, “expected tractability” is a large factor in determining which questions, out of all possible research, people try to tackle. But when you’re filtering strongly for impact, as EA does, you can no longer select strongly for tractability. So it’s much more likely that things will be a confusing muddle that’s difficult to make clear progress on.
Some quick thoughts on this from me:
Honestly for me it’s probably at the “almost too good to be true” level of surprisingness (but to be clear it actually is true!). I think it’s a brilliant community / ecosystem (though of course there’s always room for improvement).
I agree that you probably generally need unusual views to find the goals of these jobs/projects compelling (and maybe also to be a good job applicant in many cases?). That seems like a high bar to me, and I think it’s a big factor here.
I also agree that not all roles are research roles, although I don’t know how much this weakens the surprisingness because some people probably don’t find research roles appealing but do find e.g. project management appealing. (Also I do feel like most research is pretty tough one way or another, whether or not it’s “EA” research.)
I guess there’s also the “downsides” I mentioned in the post. One that particularly comes to mind is that there still aren’t a ton of great EA jobs to just slot into, and the ones that exist often seem to be very over-subscribed. It partly depends on your existing profile of skills, of course :).