Your insights have been incredibly valuable. I’d like to share a few thoughts that might offer a balanced perspective going forward.
It’s worth approaching the call for increased funding critically. Animal welfare and global health organizations might express similar needs, yet the current emphasis on AI risks often takes center stage. There is a clear desire for more support within these organizations, but OpenPhil and private donors should assess these requests carefully to ensure they are genuinely justified.
The observation that AI safety professionals anticipate more attention within Effective Altruism for AI safety compared to AI governance confirms a suspicion I’ve had. There seems to be a tendency among AI safety experts to prioritize their field above others, urging a redirection of resources solely to AI safety. It’s crucial to maintain a cautious approach to such suggestions. Given the current landscape in AI safety—characterized by disagreements among professionals and limited demonstrable impact—pursuing such a high-risk strategy might not be the most prudent choice.
When I’ve raised with AI safety experts the possibility that five years of significant investment in the wrong direction could yield minimal progress, their response often centers on the need to explore diverse approaches. That stance seems to diverge considerably from the principles embraced within Effective Altruism. I can understand why a community builder might feel uneasy about a strategy that, after five years of intense investment, offers little tangible progress and potentially detracts from other pressing causes.