As someone who organizes and is in touch with various EA/AI safety groups, I can definitely see where you’re coming from! I think many of the concerns here boil down to group culture and social dynamics, which can arise irrespective of what cause areas people in the group end up focusing on.
You could imagine two communities whose members in practice work on very similar things, but whose cultures couldn’t be further apart:
1. An intellectually isolated community where the utmost importance of longtermism/AI safety is treated as self-evident. There are social dynamics that discourage certain beliefs and questions, including questions about those social dynamics themselves. Comes across as groupthinky/culty to anyone who isn’t immediately on board.
2. An epistemically humble community that tries to figure out which projects would be most impactful for improving the world, a large fraction of whose members have tentatively concluded that AI safety appears very pressing and have subsequently decided to work on that cause area. People are aware of the tower of assumptions underlying this conclusion, and the group’s social dynamics can be openly discussed. Comes across as truth-seeking.
I think it’s possible for some groups to embody the culture of the latter example more fully, and to do so without necessarily focusing any less on longtermism and AI safety.