The ratio of median capabilities researchers to median safety researchers at which community building (CB) would still be net beneficial is pretty high (maybe >10:1, though I'm not sure), and definitely higher than the ratio that leading indicators suggest field-building is producing at the moment.
What’s your current best-guess for what the leading indicators would suggest?
I would guess the ratio is pretty skewed in the safety direction (since uni AIS CB generally isn't counterfactually getting people interested in AI when they previously weren't; if anything, EA might have more of that effect), so maybe something in the 1:10 to 1:50 range, with a point estimate around 1:20 for the ratio of median capabilities-research contribution to median safety-research contribution from AIS CB?
I don’t really trust my numbers though. This ratio is also more favorable now than I would have estimated a few months/years ago, when the contribution to AGI hype from AIS CB would have seemed much more counterfactual (though AIS CB itself also seems less counterfactual now that AI x-risk is getting a lot of mainstream coverage).
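(To make the comparison behind these estimates explicit, here is a minimal back-of-the-envelope sketch in Python. The break-even figure and the 1:20 point estimate are the rough guesses from this thread, not measured data, and the variable names are purely illustrative.)

```python
# A rough back-of-the-envelope version of the comparison above.
# Every number here is a speaker's stated guess or an illustrative placeholder, not data.

# Break-even point from the opening claim: community building stays net positive
# as long as it produces fewer than roughly 10 median capabilities researchers
# per median safety researcher (the ">10:1" figure).
break_even_capabilities_per_safety = 10.0

# Point estimate of what AIS community building actually produces:
# about 1 capabilities researcher per 20 safety researchers (the "1:20ish" guess),
# with the stated range running from 1:10 to 1:50.
produced_capabilities_per_safety = 1 / 20  # ~0.05

margin = break_even_capabilities_per_safety / produced_capabilities_per_safety
print(f"Margin below the break-even point under these guesses: ~{margin:.0f}x")
# Prints ~200x, which is why the 1:20 estimate reads as comfortably favorable here.
```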
I would be surprised if the true number is as favorable to safety as 1:20, or even 1:10. I wish there were more data on this, though it seems difficult to collect, since at least for university groups most of the impact (on both capabilities and safety) will occur a few or more years after the students start engaging with the group.
I also think it depends a lot on what the best opportunities available to them are: how many near-term openings there are to work on AI safety versus on AI capabilities for people with their aptitudes.
I agree with this; e.g., I know specific people who went through AIS CB (though not from the recent uni groups, since they're younger and there's more lag) and either couldn't or wouldn't find AIS jobs, so they ended up working in AI capabilities.
Yeah, same. I know of recent university graduates interested in AI safety who are applying for jobs in AI capabilities alongside AI safety jobs.
It makes me think that what matters more is changing the broader environment to care more about AI existential risk (via better arguments, more safety orgs focused on useful research/policy directions, better resources for existing ML engineers who want to learn about it etc.) rather than specifically convincing individual students to shift to caring about it.
I've also heard people doing SERI MATS, for example, explicitly talk (or joke) about this: that they'd have to go work in AI capabilities if they don't get AI safety jobs.
I'm impressed the ratio is that favourable! One thing to be careful of: just because people start off excited about AI safety doesn't mean they stay there; there's a decent chance they swing to the dark side of capabilities, as we saw with OpenAI and probably others as well. Just making the point that the starting ratio might look more favourable than the ratio after a few years.
Thanks, this is helpful!
Not worsening the current ratio would be a reasonable first guess, and although it depends a lot on how you define safety researchers, I’d say it’s effectively somewhere around 20:1.
Sorry, are you saying that the current ratio of capabilities researchers to safety researchers produced by AIS field-building is 20:1, or that the current ratio of the researchers overall is 20:1?
(If the latter, then I think my original question was insufficiently clear and I should probably edit it).
The second one: I'm addressing what ratio would be beneficial, but maybe you wanted to understand what the ratio actually is?