I would be surprised if the true ratio is as low as 1:20 or even 1:10. I wish there were more data on this, though it seems difficult to collect, since at least for university groups most of the impact (on both capabilities and safety) will only show up several years after students start engaging with the group.
I also think it depends a lot on the best opportunities available to them: how many openings to work on AI safety exist in the near future versus openings in AI capabilities, for people with their aptitudes.
I agree with this. E.g., I know specific people who went through AIS community building (though not the recent uni groups, since they are younger and there's more lag) and either couldn't or wouldn't find AIS jobs, so they ended up working in AI capabilities.
Yeah, same. I know of recent university graduates interested in AI safety who are applying for jobs in AI capabilities alongside AI safety jobs.
It makes me think that what matters more is changing the broader environment to care more about AI existential risk (via better arguments, more safety orgs focused on useful research/policy directions, better resources for existing ML engineers who want to learn about it, etc.) rather than specifically convincing individual students to shift toward caring about it.
I've also heard people doing SERI MATS, for example, explicitly talk/joke about this: how they'd have to work in AI capabilities if they don't get AI safety jobs.