I didn’t mean this to be that deep; I meant (1) the average college-student EA (i.e., many EAs should still pursue other kinds of careers) and (2) AI safety broadly construed (to include issues related to biorisk, policy, and many issues unrelated to x-risk). I don’t know much about how competitive jobs are throughout this space, but at least in some spheres (e.g., academic philosophy) there is growing interest in AI, so much so that it would be prudent for a philosophy PhD student to work on issues related to AI solely to get a job (i.e., bracketing any interest in EA or in having a socially valuable career). I assume that’s true in at least some other spheres as well (policy?). And while I could see that changing in the next few years, the entire job market seems likely to change a lot over that same period, such that I doubt the advice “don’t go into AI safety because it’s oversaturated; do X instead” will be reliable for most X.