I tend to think diversification in EA is important even if we think there’s a high chance of AGI by 2040. Working on other issues gives us better engagement with policymakers and the public, improves the credibility of the movement, and provides more opportunities to get feedback on what does or doesn’t work for maximizing impact. Becoming insular or obsessive about AI would alienate many potential allies and make it harder to maintain good epistemic norms. And there are other causes where we can have a positive effect without directly competing for resources, because not all participants and funders are willing or able to work on AI.