Someone wrote this to me privately. I agree with the substance of the criticism and have since edited the post accordingly.
> As a general strategy, it seems much better for most people in the community to [...] quickly disavow any associations that could be seen as potentially problematic.
This part seems objectionable to me even if I agreed with the rest of your post.
1. Public disavowal can increase the chance that the accused person will suffer unjust bad outcomes. This slides from ‘don’t protect your peers from being burned as witches’ toward ‘preemptively help burn your peers as witches so you don’t get burned yourself’. This seems like bad game theory to me, so I question its wisdom even from a selfish perspective. And it seems especially bad altruistically, if the people you’re helping burn are fellow EAs.
2. If you don’t think the accused person is actually a witch, then helping burn or condemn them also seems like a violation of some of the important ethical principles that help people coordinate.
If I expect my peers to lie or stab me in the back as soon as this seems useful to them, then I’ll be a lot less willing and able to work with them. This can lead to a bad feedback loop, where EAs distrust each other more and more as they become more willing to betray each other.
Highly knowledgeable and principled people will tend to be more attracted to groups that show honesty, courage, and integrity. There are a lot of contracts and cooperative arrangements that are possible between people who have different goals, but some level of trust. Losing that baseline level of trust can be extremely costly and cause mutually beneficial trades to be replaced by exploitative or mutually destructive dynamics.
Camaraderie gets things done. If you can create a group where people expect to have each other’s back, and expect to be defended if someone lies about them, then I think that makes the group much more attractive to belong to, and helps with important things like internal cooperation.
But even absent camaraderie, basic norms of civil discourse get an awful lot done too. Norms like ‘we won’t help make things worse for you and spread misinformation about you if someone is unethically targeting you’ get you a lot, even if you lose the valuable norm ‘we’ll defend you if someone is unethically targeting you’.
3. Another problem with denouncing people who you don’t think deserve denunciation is that it puts you on the record about any person, group, or idea anyone ever wants to make you publicly weigh in on. If you refused to participate in the witch-hunting as a matter of principle, then this might lose you some reputational capital in the near term, but in the long term it would make it harder for people to infer ‘oh, so you do endorse this other thing’ from your decision not to disavow something later.
One way of thinking about the free speech meme, ‘though I disagree with what someone says, I’ll defend to the death their right to say it’, is that it’s functioning as exactly this kind of game-theoretic strategy right now. On this way of doing things, people get to avoid condemning each other as witches in academia—in fact, they even get to actively work to help and protect each other, in cases where they think the accusations are unjust, harmful, or false—all without ever endorsing or disavowing any of the actual positions under discussion.
To the extent this works, it works because a large group of people has agreed to an explicit strategy of protecting people even when they disagree with or dislike them. This lets you protect the falsely accused (or at least avoid accusing them of witchcraft yourself) without going on public record about every accusation that’s currently blowing up on social media.
This strategy doesn’t make its practitioners immune to cancellation, but, especially so long as the strategy is widespread, it provides massively more leeway and protection against discourse evolving toward Red Scare dynamics.
The revised statement is:
> As a general strategy, it seems much better for most people in the community to watch what they say in public somewhat, be careful with their public associations, and minimize public contact with any associations that could be seen as potentially problematic.
More broadly, the thing I’m most worried about is altruistic nerds not thinking about second-order considerations at all, rather than any object-level suggestion.