As a deep learning researcher who came to believe in the importance of AI safety through EA, I strongly agree with the last point about making allies and growing the AI safety field. I support the claim that some people hesitate to get involved in AI safety, or simply give up, because the community can feel somewhat cliquey and dismissive, and is often fragmented over arguments about what work is actually useful. To me, this is counterproductive and alienating.
I hypothesize that frowning on near-term safety work, or even just heavily questioning its usefulness, deters other current deep learning researchers, and perhaps other communities too, from engaging with AI safety. Less parochialism and more friends seems like a sensible approach, and would make for a more productive community.