Stephen—thanks for a very helpful and well-researched overview of the funding situation. It seems pretty comprehensive, and will be a useful resource for people considering AI safety research.
I know there’s been a schism between the ‘AI Safety’ field (focused on reducing X risk) and the ‘AI ethics’ field (focused on reducing prejudice, discrimination, ‘misinformation’, etc.). But I can imagine some AI ethics research (e.g. on mass unemployment, lethal autonomous weapon systems, political deepfakes, or AI bot manipulation of social media) that could feed into AI safety, e.g. by addressing developments that could increase the risks of social instability, political assassination, partisan secession, or great-power conflict, which could in turn increase X risk.
I imagine it would be much harder to analyze the talent and money devoted to those kinds of issues, and to disentangle them from other kinds of AI ethics research. But I’d be curious whether anyone else has a sense of what proportion of AI ethics work could actually inform our understanding of X risk and X risk amplifiers.
I think work on near-term issues like unemployment, bias, fairness, and misinformation is highly valuable, and the book The Alignment Problem does a good job of describing a variety of these risks. However, since these issues are generally more visible and near-term, I expect them to be relatively less neglected than long-term risks such as existential risk. The other factor is importance or impact: I believe the possibility of existential risk greatly outweighs the importance of other possible effects of AI, though this view is partly conditional on accepting longtermism and weighting the long-term trajectory of humanity highly.
I do think AI ethics is really important. One kind of research I find interesting is work on what Nick Bostrom calls the value loading problem: the question of what philosophical framework future AIs should follow. This seems like a crucial problem that will need to be solved eventually, though my guess is that most AI ethics research is focused on nearer-term problems.
Gavin Leech wrote an EA Forum post I recommend, ‘The academic contribution to AI safety seems large’, in which he argues that academia’s contribution to AI safety is large even after applying a strong discount factor, because academia does a lot of research on safety-adjacent topics such as transparency, bias, and robustness.
I have included some sections on academia in this post, though I’ve mostly focused on EA funds because I’m more confident that they are supporting work that is highly important and neglected.