I thought this was a great post, thanks for writing it. Some notes:
If a community rests on broad, generally-agreed-to-be-true principles, like a kind of lowest-common-denominator beneficentrism, some of these concerns seem to me to go away.
Example: People feel free to change their minds ideologically; the only sacred principles are something like “it’s good to do good” and “when doing good, we should do so effectively”, which people probably won’t disagree with, and which, if they did disagree, should probably make them not-EAs.
If a core value of EA is truth-seeking/scout mindset, then identifying as an EA may reduce groupthink. (This is similar to what Julia Galef recommends in The Scout Mindset.)
I feel like, if there weren’t an EA community, there would naturally spring up an independent effective global health & poverty community, an independent effective animal advocacy community, an independent AI safety community, etc., all of which would be more homogeneous and therefore possibly more at risk of groupthink. The fact that EA allows people with these subtly different inclinations (of course there’s a lot of overlap) to exist in the same space should, if anything, attenuate groupthink.
Maybe there’s evidence for this in European politics, where narrow parties like Socialists, Greens and (in Scandinavia though not in Germany) Christian Democrats may be more groupthinky than big-tent parties like generic Social Democratic ones. I’m not sure if this is true though.
Fwiw, I think EA should not grow indefinitely. I think at a certain point it makes sense to try to advocate for some core EA values and practices without necessarily linking them (or weighing them down) with EA.
I agree that it seems potentially unhealthy to have one’s entire social and professional circle drawn from a single intellectual movement.
Many different (even contradictory!) actual goals can stem from trying to act altruistically and effectively. For example, a negative utilitarian and a traditional utilitarian disagree on how to count utility. I think that the current umbrella of EA cause areas is too large. EAs may agree on mottos and methods, but the ideology is too broad for them to agree on what matters at the object level.
This just doesn’t seem to cause problems in practice? And why not? I think because (1) we should and often do have some uncertainty about our moral views and (2) even though we think A is an order of magnitude more important to work on than B, we can still think B is orders of magnitude more important than whatever most non-EAs do. In that case two EAs can disagree and still be happy that the other is doing what they’re doing.