Hi Vaipan,
Thanks for your questions. I’ll address the last one, on behalf of the cause prio team.
One of the exciting things about this team is that, because it launched so recently, there’s a lot of room to try new things as we explore different ways to be useful. To name a few examples:
- We’re working on a constellation of projects that will help us compare our grantmaking focused on risks from advanced AI systems to our grantmaking focused on improving biosecurity and pandemic preparedness.
- We’re producing a slew of new BOTECs (back-of-the-envelope calculations) across different focus areas. If it goes well, this exercise will help us be more quantitative when evaluating and comparing future grantmaking opportunities. As you can imagine, the result of a given BOTEC depends heavily on the worldview assumptions you plug in (there’s a toy sketch of this sensitivity after this list). There isn’t an Open Phil house view on key issues like AI timelines or p(doom). One thing the cause prio team might do is periodically survey senior GCR leaders on important questions so we better understand the distribution of answers.
- We’re also doing a bunch of work aimed at increasing strategic clarity. For instance, we’re thinking a lot about next-generation AI models: how to forecast their capabilities, what dangers those capabilities might imply, how to communicate those dangers to labs and policymakers, and ultimately how to design evals to assess risk levels.
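To make the worldview-sensitivity point concrete, here’s a minimal Python sketch of what a toy BOTEC might look like. Everything in it is made up for illustration: the functional form, the variable names, and all of the numbers are hypothetical placeholders, not actual Open Phil estimates or methodology.

```python
# Toy BOTEC: how worldview assumptions change a grant's estimated cost-effectiveness.
# All numbers are hypothetical placeholders, not real Open Phil figures.

def cost_effectiveness(p_catastrophe, risk_reduction, value_of_avoiding, grant_cost):
    """Expected value per dollar: probability-weighted benefit divided by cost."""
    expected_benefit = p_catastrophe * risk_reduction * value_of_avoiding
    return expected_benefit / grant_cost

# Two hypothetical worldviews that differ only in their p(doom)-style assumption.
for label, p_doom in [("low-risk worldview", 0.01), ("high-risk worldview", 0.20)]:
    score = cost_effectiveness(
        p_catastrophe=p_doom,    # assumed probability of catastrophe this century
        risk_reduction=0.001,    # assumed fraction of that risk the grant removes
        value_of_avoiding=1e12,  # assumed dollar value of avoiding the catastrophe
        grant_cost=1_000_000,    # grant size in dollars
    )
    print(f"{label}: ~{score:,.0f} units of value per dollar")
```

Holding everything else fixed, a 20x difference in the risk assumption produces a 20x difference in the bottom line, which is part of why understanding the distribution of views matters as much as the arithmetic itself.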
There is no particular background knowledge required for a role on our team. For context, a majority of current team members were working on global health and wellbeing issues less than a year ago. For this hiring round, applicants who understand the GCR ecosystem and have at least a superficial understanding of frontier AI models will generally do better than applicants who lack that understanding. But I encourage everyone who is interested to apply.