I agree with this, e.g. I think I know specific people who went through AI safety community building (though not the recent university groups, since they're younger and there's more lag) and either couldn't or wouldn't find AI safety jobs, so they ended up working in AI capabilities.
Yeah, same. I know of recent university graduates interested in AI safety who are applying for jobs in AI capabilities alongside AI safety jobs.
It makes me think that what matters more is changing the broader environment to care more about AI existential risk (via better arguments, more safety orgs focused on useful research/policy directions, better resources for existing ML engineers who want to learn about it, etc.) rather than specifically convincing individual students to shift to caring about it.
I've also heard people doing SERI MATS, for example, explicitly talk or joke about how they'd have to work in AI capabilities if they don't get AI safety jobs.