I think this type of thinking and work is useful and important.
It’s very surprising (although in some ways not surprising) that this analysis hasn’t been done before elsewhere.
Have you searched for previous work on analyzing the AI safety space?
If OpenPhil (and others) are in the process of funding billions of dollars of AI safety work and field building, wouldn’t they themselves do some sort of comprehensive analysis or fund / lean on someone else to do it?
Thanks Matt.
Based on limited desktop research and two 1:1s with people from BlueDot Impact and GovAI, the existing analyses are fragmented and not conducted as part of a holistic, systems-based approach. (I could be wrong.)
Examples: "What Should AI be Trying to Achieve" identifies possible research directions based on interviews with AI safety experts; "A Brief Overview of AI Safety Alignment Orgs" identifies actor groupings and specific focus areas; and the AI Safety Landscape map provides a visual overview of actor groupings and functions.
Perhaps an improved version of my research would include a complete literature review of such findings, both to qualify my claim (and that of others I've spoken to) that we lack a holistic approach for understanding and building the field, and to use existing efforts as starting points (which I hint at in Application Step 1).
As for Open Phil, your comment spurred me to ask them this question in their most recent grant announcement post!
Happy for you to signpost me to other orgs/specific individuals. I’m keen to turn my research into action.