Thanks for your thoughts on this.
I would note that Moral Ambition did mention catastrophic risk, specifically risks from Artificial General Intelligence, as a potentially promising area for morally ambitious people to make an impact.
Also, work on systemic change is consistent with core EA principles (doing the most good with the resources we have). Some of these areas could be strong speculative bets, similar to the reasoning supporting some projects associated with longtermism.
I think there’s a very high degree of complementarity and compatibility with core EA philosophy, even if SMA’s actual conclusions about cause areas differ in some ways from those the EA community tends to focus on. Core EA philosophy, however, is about the fundamental principles, not the downstream cause areas; if different people’s epistemologies, proceeding from those principles, lead them to different places than the current EA community, I don’t think they are any less EAs.