For what it’s worth, I think it makes sense to see this as something of a continuation of a previous trend – 80k has for a long time prioritised existential risks more than the EA community as a whole. This has influenced EA (in my view, in a good way), and at the same time EA as a whole has continued to support work on other issues. My best guess is that this is good (though I’m not totally sure—EA as a whole mobilising to help things go better with AI also sounds like it could be really positively impactful).
From an altruistic cause prioritization perspective, prioritizing existential risk seems to require longtermism, and potentially even fanatical views (see Christian Tarsney, Rethink Priorities). It seems like we should give some weight to causes whose case doesn’t depend on fanaticism.
I think that existential risks from various issues with AGI (especially if one includes trajectory changes) are high enough that one needn’t accept fanatical views to prioritise them (though it may require caring somewhat about potential future beings). (We have a bit on this here.)
Existential risk is not most self-identified EAs’ top cause, and about 30% of self-identified EAs say they would not have gotten involved if EA had not focused on their top cause (EA survey). So it does seem like you miss an audience here.
I agree this means we will miss out on an audience we could have reached if we fronted content on more causes. We also hope this shift will appeal to new audiences, such as older people who are less naturally drawn to our previous messaging and who are more motivated by urgency. Still, it seems plausible the change shrinks our audience overall. That seems worth it: we’ll be telling people honestly how urgent and pressing AI risks seem to us, and it could still lead to more impact overall, since impact varies so much between careers, in part based on what causes people focus on.
I think that existential risks from various issues with AGI (especially if one includes trajectory changes) are high enough that one needn’t accept fanatical views to prioritise them
I think the argument you linked to is reasonable. I disagree, but not strongly. But I think it’s plausible enough that AGI concerns (from an impartial cause prioritization perspective) require fanaticism that this should still be a significant worry. My take is that this worry means an initially general EA org should not overwhelmingly prioritize AGI.