pretty much generally agreed upon in the EA community that the development of unaligned AGI is the most pressing problem
While there is significant support for “AI as cause area #1”, I know plenty of EAs who do not agree with this. Therefore, “generally agreed upon” feels like a bit too strong a wording to me. See also my post on why EAs are skeptical about AI safety.
For viewpoints from professional AI researchers, see Vael Gates's interviews with AI researchers on AGI risk.
I mention those pieces not to argue that AI risk is overblown, but rather to shed more light on your question.
Thanks for linking these posts; it’s useful to see a different perspective from the one I feel gets the most exposure.
Not only is Lukas right to point out that many EAs are skeptical of AI risk, but AI isn’t even the top priority as selected by EAs; Global Poverty continues to be: https://rethinkpriorities.org/publications/eas2020-cause-prioritization