Thanks, I think this is interesting, and these sorts of considerations may become increasingly important as EA grows. One other strategy I think is worth pursuing is preventative measures. IMHO, ideally EA would be the kind of community that selectively repels people likely to be malicious (e.g., I think it's good if we repel people who are generally fueled by anger, people who are particularly loud and annoying, people who are racist, etc.). I think we already do a pretty good job of "smacking down" people who are very brash or insulting to other members, and the epistemic norms in the community probably also select somewhat against people who are particularly angry or who have a tendency toward ad hominem. It might also be worth considering what other traits we want to select for or against, and what sorts of norms we could adopt toward those ends.
Agreed on preventative measures, where possible. I imagine they're probably more cost-effective than measures taken after the fact.