I think this is a good question and there are a few answers to it.
One is that many of these jobs only look like they check the “improving the world” box if you have fairly unusual views. There aren’t many people in the world for whom e.g. “doing research to prevent future AI systems from killing us all” tracks as an altruistic activity. It’s interesting to look at this (somewhat old) estimate of how many EAs even exist.
Another is that many of the roles discussed here aren’t research-y roles (e.g. the biosecurity projects require entrepreneurship, not research).
Another is that the type of research involved (when the roles are in fact research roles) is often difficult, messy, and unrewarding. AI alignment, for instance, is a pre-paradigmatic field. The problem statement has no formal definition. The objects of study (broadly superhuman AI systems) don’t yet exist and therefore can’t be experimented upon. In academia, “expected tractability” is a large factor in determining which questions, out of all possible research, people try to tackle. But when you’re filtering strongly for impact, as EA does, you can no longer select strongly for tractability. So it’s much more likely that things will be a confusing muddle on which it’s difficult to make clear progress.
Some quick thoughts on this from me:
Honestly for me it’s probably at the “almost too good to be true” level of surprisingness (but to be clear it actually is true!). I think it’s a brilliant community / ecosystem (though of course there’s always room for improvement).
I agree that you probably generally need unusual views to find the goals of these jobs/projects compelling (and maybe also to be a good job applicant in many cases?). That seems like a high bar to me, and I think it’s a big factor here.
I also agree that not all roles are research roles, although I’m not sure how much that reduces the surprisingness, since some people probably don’t find research roles appealing but do find e.g. project management appealing. (Also, I do feel like most research is pretty tough one way or another, whether or not it’s “EA” research.)
I guess there’s also the “downsides” I mentioned in the post. One that particularly comes to mind is that there still aren’t a ton of great EA jobs to just slot into, and the ones that exist often seem to be very over-subscribed. Partly depends on your existing profile of skills of course :).