I agree that the average college student encountering EA today should focus on issues related to AI safety
I broadly nodded along to your OP, but strong disagree here. There are tonnes of people working in AI safety, to the extent that it’s already hypercompetitive and the marginal value of one more person getting in the long queue for such a job seems very low.
Meanwhile I continue to find the case for AI safety, at least as envisioned by EA doomers, highly speculative. That’s not to say it shouldn’t get any attention, but there’s a far better evidenced path to civilisational collapse from, e.g., nuclear bombs or major pandemics than from LLMs.
And if you’re sufficiently pessimistic about the doomer narrative, we’re all screwed anyway, and there’s likely at least as much EV in improving the lives of existing beings in the short term as in fighting an impossible struggle to prevent AGI from ever being developed. So there’s only a window of credences within which AI safety belongs as the top priority. That window might be reasonably wide, but I don’t think it’s anywhere near wide enough to justify abandoning all other causes.
I didn’t mean this to be that deep; I meant (1) the average college student EA (i.e., many EAs should still pursue other kinds of careers) and (2) AI safety broadly construed (to include issues related to biorisk, policy, and many issues unrelated to x-risk). I don’t know much about how competitive jobs are throughout this space, but in at least some spheres (e.g., academic philosophy) there is growing interest in AI, so much so that it would be prudent for a philosophy PhD student to work on AI-related issues solely to get a job (i.e., bracketing any interest in EA or in having a socially valuable career). I assume that’s true in at least some other spheres as well (policy?), and while I could see that changing in the next few years, the entire job market seems likely to change a lot in that time, so I doubt that “don’t go into AI safety because it’s oversaturated; do X instead” will be reliable advice for most X.