I think the EA community (and the rationality community) is systematically at risk of being too charitable. I don’t have a citation for that, but my impression is that this has been pointed out repeatedly in instances where there was community discussion of problematic behavior by people who seemed interpersonally incorrigible. I think it’s really unwise, and has bad consequences, to keep repeating that mistake.
While I mostly agree with you in general (e.g. Gleb Tsipursky getting too many second chances), I’m not quite sure what you’re trying to say in this case.
Do you think that the moderators were too charitable toward Phil? He was banned from the Forum for a year, and we tried to make it clear that his comments were rude and unacceptable. Before that thread, his comments were generally unremarkable, with the exception of one bitter exchange of the kind that happens once in a while for many different users. And I’m loath to issue Forum-based consequences for someone’s interpersonal behavior outside the Forum unless it’s a truly exceptional circumstance.
*****
To the extent that someone’s problematic interpersonal behavior is being discussed on the Forum, I still believe we should try to actually show evidence. Many Forum readers are new to the community, or otherwise aren’t privy to drama within the field of longtermist research. If someone wants to warn the entire community that someone is behaving badly, the most effective warnings will include evidence. (Though as I said in my reply to Halstead’s reply, his comment was still clearly valuable overall.)
Imagine showing a random person from outside the EA community* (say, someone familiar with Twitter) this comment and this comment, as well as the karma scores. That person might conclude “Halstead was right and Phil was wrong”. They might also conclude “Halstead is a popular member of the ingroup and Phil is getting cancelled for wrongthink”.
To many of us inside the community, it’s obvious that the first conclusion is more accurate. But the second thing happens all the time, and a good way to prove that we’re not in the “cancelled for wrongthink” universe is to have a strong norm that negative claims come with evidence.
*This isn’t to say that all moderation should necessarily pass the “would make sense to a random Twitter user” test. But I think it’s a useful test to run in this case.
“Do you think that the moderators were too charitable toward Phil?”
No, I didn’t mean to voice an opinion on that part. (And the moderation decision seemed reasonable to me.)
My comment was prompted by the concern that giving Halstead a warning (for not providing more evidence) risks making it harder for people to voice concerns in the future. My impression is that it’s already difficult enough to voice negative opinions about others’ character. Specifically, I think there’s an effect where, if you voice a negative opinion and aren’t extremely skilled at playing the game of being highly balanced, polite, and charitable (e.g., some other people’s comments in this discussion strike me as almost superhumanly balanced and considerate), you’ll offend the parts of the EA Forum audience that implicitly treat being charitable to the accused as a much more fundamental virtue than protecting other individuals (the potential victims of bad behavior) and the community at large. (Problematic individuals, in my view, tend to create a “distortion field” around them that can erode norms in various ways, though that was probably much more the case with other community drama than here, given that Phil wrote articles mostly at the periphery of the community.)
Of course, these potential drawbacks I mention only count in worlds where the concerns raised are in fact accurate. The only way to get to the bottom of things is indeed with truth-tracking norms, and being charitable (edit: and thorough) is important for that.
I just feel that the demands for evidence shouldn’t be too strong or absolute, partly because there are instances where it’s difficult to verbalize why exactly someone’s behavior seems unacceptable (even when that is really obvious to people who are closely familiar with the situation).
Lastly, I think it’s particularly bad to disincentivize people for how they framed things in instances where they turned out to be right. (It’s different if there was a lot of uncertainty as to whether Halstead had valid concerns, or whether he was just pursuing a personal vendetta against someone.)
Of course, these situations are really, really tricky, and I don’t envy the forum moderators for having to navigate the waters.
“If someone wants to warn the entire community that someone is behaving badly, the most effective warnings will include evidence.”
True, but that also means the right incentives are already there. If someone doesn’t provide evidence, it could be that they find it hard to articulate, that there are privacy concerns, or that they don’t have the mental energy at the time to polish their evidence and reasoning but feel strongly enough that they’d like to speak up with a shorter comment. Issuing a warning discourages speaking up in all of those cases. All else equal, providing clear evidence is certainly best. But I wouldn’t want to risk missing out on the relevant information that community veterans (whose reputation is automatically on the line when they voice a strong concern) hold a negative opinion for one reason or another.