Thank you for doing this! It's highly helpful and transparent; we need more of this. I have many questions, mostly at a meta-level, but the part about AI safety is what I'd most like to have answered.
About AI safety:
What kind of impact or successes do you expect from hiring these three senior roles in AI safety? Can you say a bit more about the expected impact of creating these roles?
Do you think that the AI safety field is talent-constrained at the senior level, but already has its fair share of junior positions filled?
About the ratio of hires between AI safety and biorisks:
Given the high number of positions in biosecurity, should we conclude that the field is more talent-constrained than AI safety, which seems to need a smaller workforce?
More diverse considerations about GCRs:
Do you intend to dedicate any of these roles to nuclear risks, to help address the lack of funding in that field, or does nuclear risk rank rather low in your cause prioritization?
About the cause-prioritization positions:
What kinds of projects do you intend to launch? Can you be more specific about the topics that will be researched in this area? Also, what kind of background knowledge is needed for such a job?
On technical AI safety, fundamentally having more grantmaking and research capacity (junior or senior) will help us make more grants to great projects that we wouldn't otherwise have been able to fund; I wrote about that team's hiring needs in this separate post. In terms of AI safety more broadly (outside of just my team), I'd say there is a more severe constraint on people who can mentor junior researchers, but the field could use more strong researchers at all levels of seniority.
Hi Vaipan, I’ll take your question about the ratio of hires between AI safety and biosecurity. In short, no, it wouldn’t be correct to conclude that biosecurity is more talent constrained than AI safety. The number of roles is rather a reflection of our teams’ respective needs at the given moment.
And on the “more diverse consideration about GCR” question, note that my team is advertising for a contractor who will look into risks that lie outside biosecurity and AI safety, including nuclear weapons risks. Note though that I expect AI safety and biosecurity to remain more highly prioritized going forward.
Hi Vaipan, thanks for your questions. I'll address the last one on behalf of the cause prio team.
One of the exciting things about this team is that, because it launched so recently, there’s a lot of room to try new things as we explore different ways to be useful. To name a few examples:
We’re working on a constellation of projects that will help us compare our grantmaking focused on risks from advanced AI systems to our grantmaking focused on improving biosecurity and pandemic preparedness.
We’re producing a slew of new BOTECs (back-of-the-envelope calculations) across different focus areas. If it goes well, this exercise will help us be more quantitative when evaluating and comparing future grantmaking opportunities.
As you can imagine, the result of a given BOTEC depends heavily on the worldview assumptions you plug in (a rough illustrative sketch of this sensitivity follows these examples). There isn’t an Open Phil house view on key issues like AI timelines or p(doom). One thing the cause prio team might do is periodically survey senior GCR leaders on important questions so that we better understand the distribution of answers.
We’re also doing a bunch of work that is aimed at increasing strategic clarity. For instance, we’re thinking a lot about next-generation AI models: how to forecast their capabilities, what dangers those capabilities might imply, how to communicate those dangers to labs and policymakers, and ultimately how to design evals to assess risk levels.
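To make the worldview-sensitivity point concrete, here is a minimal, purely illustrative BOTEC sketch in Python. Everything in it (the function, the parameter names, and the numbers) is a made-up placeholder rather than an Open Phil model or figure; it only shows how the estimated cost per unit of expected risk reduction for the same hypothetical grant can shift by an order of magnitude when a single input such as p(doom) changes.

```python
# Purely illustrative BOTEC sketch: not Open Phil's actual model or numbers.
# It estimates the cost per basis point (0.01 percentage points) of expected
# absolute risk reduction for a hypothetical grant, under made-up assumptions.

def botec_cost_per_basis_point(
    grant_cost_usd: float,           # size of the hypothetical grant
    p_catastrophe: float,            # assumed probability of catastrophe (a stand-in for "p(doom)")
    fraction_addressed: float,       # share of that risk the funded work plausibly touches
    relative_risk_reduction: float,  # proportional reduction of the addressed share if the work succeeds
    p_success: float,                # probability the funded work succeeds
) -> float:
    """Return the cost in USD per basis point of expected absolute risk reduction."""
    expected_absolute_reduction = (
        p_catastrophe * fraction_addressed * relative_risk_reduction * p_success
    )
    basis_points = expected_absolute_reduction * 10_000
    return grant_cost_usd / basis_points


if __name__ == "__main__":
    # The same $1M hypothetical grant under two different worldviews: the bottom line moves ~10x.
    for label, p_doom in [("pessimistic worldview", 0.20), ("optimistic worldview", 0.02)]:
        cost = botec_cost_per_basis_point(
            grant_cost_usd=1_000_000,
            p_catastrophe=p_doom,
            fraction_addressed=0.05,
            relative_risk_reduction=0.10,
            p_success=0.5,
        )
        print(f"{label}: ~${cost:,.0f} per basis point of expected risk reduction")
```

This order-of-magnitude swing from a single assumption is the kind of sensitivity that surveying senior GCR leaders, as mentioned above, would help characterize.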
There is no particular background knowledge required for a role on our team. For context, a majority of current team members were working on global health and wellbeing issues less than a year ago. For this hiring round, applicants who understand the GCR ecosystem and have at least a superficial understanding of frontier AI models will in general do better than applicants who lack that understanding. But I encourage everyone who is interested to apply.
Thank you so much for your answers!