I thought this was a great post, thanks for writing it. Some notes:
If a community rests on broad, generally-agreed-to-be-true principles, like a kind of lowest-common-denominator beneficentrism, some of these concerns seem to me to go away.
Example: People feel free to change their minds ideologically; the only sacred principles are something like "it's good to do good" and "when doing good, we should do so effectively", which people probably won't disagree with, and which, if they did disagree, would probably make them not EAs.
If a core value of EA is truth-seeking/scout mindset, then identifying as an EA may reduce groupthink. (This is similar to what Julia Galef recommends in The Scout Mindset.)
I feel like, if there wasn't an EA community, there would naturally spring up an independent effective global health & poverty community, an independent effective animal advocacy community, an independent AI safety community, etc., all of which would be more homogeneous and therefore possibly more at risk of groupthink. The fact that EA allows people with these subtly different inclinations (of course there's a lot of overlap) to exist in the same space should, if anything, attenuate groupthink.
Maybe there's evidence for this in European politics, where narrow parties like Socialists, Greens, and (in Scandinavia, though not in Germany) Christian Democrats may be more groupthinky than big-tent parties like generic Social Democratic ones. I'm not sure if this is true, though.
Fwiw, I think EA should not grow indefinitely. I think at a certain point it makes sense to try to advocate for some core EA values and practices without necessarily linking them to (or weighing them down with) EA.
I agree that it seems potentially unhealthy to have one's entire social and professional circle drawn from a single intellectual movement.
Many different (even contradictory!) actual goals can stem from trying to act altruistically effectively. For example, a negative utilitarian and a traditional one disagree on how to count utility. I think that the current umbrella of EA cause areas is too large. EA may agree on mottos and methods, but the ideology is too broad to agree on what matters at the object level.
This just doesn't seem to cause problems in practice? And why not? I think because (1) we should and often do have some uncertainty about our moral views and (2) even though we think A is an order of magnitude more important to work on than B, we can still think B is orders of magnitude more important than whatever most non-EAs do. In that case, two EAs can disagree and still be happy that the other is doing what they're doing.