Strong upvote.
Three additional arguments in favor of (marginally!!!!) greater social norm enforcement:
(1)
A movement can only optimize for one thing at a time. EA should be optimizing for doing the most good.
That means sometimes, EA will need to acquiesce to social norms against behaviors that—even if fine in isolation—pose too great a risk of damaging EA’s reputation and through it, EA’s ability to do the most good.
This is trivially true; I think people just disagree about where the line should be drawn. But I’m honestly not sure we’re drawing any lines right now, which seems suboptimal.
(2)
Punishing norm violations can be more efficient than litigating every issue in full (this is in part why humans evolved punishment norms in the first place).
And sometimes, enforcing social norms may not just be more efficient; it may be more likely to reach a good outcome. For example, when the benefits of a norm are diffuse across many people and gradual, but the costs are concentrated and immediate, a collective action problem arises: the beneficiaries have little incentive to litigate the issue, while those hurt have a large incentive. Note how this interacts with point (1): reputational damage to EA at large is highly diffuse.
To strengthen this point, social norms often pass down knowledge that benefits adherents without their ever realizing it. Humans aren’t good at getting the best outcomes from our individual reasoning; we’re good at collective learning.
(3)
There are far more people in the world interested in norm violation than in doing the most good. Therefore, we should expect that a movement too tolerant of weirdness will attract too high a ratio of norm-violators to helpful EAs (this is the witch hunt point made in the OP).