I agree with the content of your comment, Will, but feel a bit unhappy with it anyway. Apologies for the unpleasantly political metaphor, but as an intuition pump imagine the following comment.
“On the one hand, I agree that it seems bad that this org apparently has a sexual harassment problem. On the other hand, there have been a bunch of posts about sexual misconduct at various orgs recently, and these have drawn controversy, and I’m worried about the second-order effects of talking about this misconduct.”
I guess my concern is that it seems like our top priority should be saying true and important things, and we should err on the side of not criticising people for doing so.
More generally I am opposed to “Criticising people for doing bad-seeming thing X would put off people who are enthusiastic about thing X.”
Another take here is that if a group of people are sad that their views aren’t sufficiently represented on the EA forum, they should consider making better arguments for them. I don’t think we should try to ensure that the EA forum has proportionate amounts of pro-X and anti-X content for all X. (I think we should strive to evaluate content fairly; this involves not being more or less enthusiastic about content based on the popularity of the views it expresses (except for instrumental reasons like “it’s more interesting to hear arguments you haven’t heard before”).)
EDIT: Also, I think your comment is much better described as meta-level than object-level, despite its first sentence.
Whilst I agree with you that there is some risk in the pattern of not criticising bad thing X because of concerns about second-order effects, I think you chose a really bad substitution for ‘X’ here, and as a result I can totally understand where Khorton’s response is coming from (although I think ‘campaigning against racism’ is also a mischaracterisation of X here).
Where X is the bad thing ACE did, the situation is clearly far more nuanced as to how bad it is than something like sexual misconduct, which, by the time we have decided something deserves that label, is unequivocally bad.
Why is it important not to throw out nuance here? Because of Will’s original comment: there are downsides to being very critical, especially publicly, where we might cause more of a split or be unwelcoming. I agree with you that we shouldn’t be trying to appeal to everyone or take a balanced position on every issue, but I don’t think we should ignore the importance of creating a culture that is welcoming to all either. These things do not in principle need to be traded off against each other; we can have both (if we are skillful).
Despite your saying that you agree with the content of Will’s comment, I think you didn’t fully grok Will’s initial concern, because when you say:
“if a group of people are sad that their views aren’t sufficiently represented on the EA forum, they should consider making better arguments for them”
you are doing the thing (being unwelcoming).
More generally, I think our disagreement here probably comes down to something like this:
There’s a tradeoff between having a culture where true and important things are easy to say, and a culture where group X feels maximally welcome. As you say, if we’re skillful we can do both of these, by being careful about our language and always sounding charitable and not repeatedly making similar posts.
But this comes at a cost. I personally feel much less excited about writing about certain topics because I’d have to be super careful about them. And most of the EAs I know, especially those who have some amount of authority among EAs, feel much more restricted than I do. I think that this makes EA noticeably worse, because it means that it’s much harder for these EAs to explain their thoughts on things.
And so I think it’s noticeably costly to criticise people for not being more careful and tactful. It’s worth it in some cases, but we should remember that it’s costly when we’re considering pushing people to be more careful and tactful.
I personally think that “you shouldn’t write criticisms of an org for doing X, even when the criticisms are accurate and X is bad, because criticising X has cultural connotations” is too far in the “restrict people’s ability to say true things, for the sake of making people feel welcome” direction.
(Some context here is that I wrote a Facebook post about ACE with similar content to this post last September.)
I don’t disagree with any of that. I acknowledge there is a real cost in trying to make people feel welcome on top of the community service of speaking up about bad practice (leaving aside the issue of exactly how bad what happened was).
I just think there is also some cost on the other side of that trade-off, which you are undervaluing and not acknowledging here. Maybe we disagree on the exchange rate between the two (welcomingness and unfiltered/candid communication)?
I think that doing both well is an important skill for a community like ours to have more of. It’s OK if that’s not your personal priority right now, but I would like community norms to reward learning that skill more. My view is that Will’s comment was doing just that, and I upvoted it as a result. (Not saying you disagree with the content of his comment; you said you agreed with it, in fact. But in my view you demonstrated that you didn’t fully grok it nevertheless.)
I am not sure whether I think it’s a net cost that some people will be put off from EA by posts like this, because I think that people who would bounce off EA because of posts like this aren’t obviously net-positive to have in EA. (My main model here is that the behavior described in this post is pretty obviously bad, and the kind of SJ-sympathetic EAs who I expect to be net sources of value probably agree that this behavior is bad. Secondarily, I think that people who are really enthusiastic about EA are pretty likely to stick around even when they’re infuriated by things EAs are saying. For example, when I was fairly new to the EA community in 2014, I felt really mad about the many EAs who dismissed the moral patienthood of animals for reasons I thought were bad, but EAs were so obviously my people that I stuck around nevertheless. If you know someone (eg yourself) who you think is a counterargument to this claim of mine, feel free to message me.)
But I think that there are some analogous topics where it is indeed costly to alienate people. For example, I think it’s pretty worthwhile for me as a longtermist to be nice to people who prioritize animal welfare and global poverty, because I think that many people who prioritize those causes make EA much stronger. For different reasons, I think it’s worth putting some effort into not mocking religions or political views.
In cases like these, I mostly agree with “you need to figure out the exchange rate between welcomingness and unfiltered conversations”.
I guess I expect the net result of Will’s comment was more to punish Hypatia than to push community norms in a healthy direction. If he wanted to just push norms, without trying to harm someone who was basically just saying true and important things, I think he should have made a different top-level post, and he also shouldn’t have made his other top-level comment.
There’s a difference between understanding a consideration and thinking that it’s the dominant consideration in a particular situation :)
I bounce off posts like this. Not sure if you’d consider me net positive or not. :)
I do too, FWIW. I read this post and its comments because I’m considering donating to/through ACE, and I wanted to understand exactly what ACE did and what the context was. Reading through a sprawling, nearly 15k-word discussion mostly about social justice and discourse norms was not conducive to that goal.
Presumably knowing the basis of ACE’s evaluations is one of the most important things to know about ACE? And knowing to what degree social justice principles are part of that evaluation (and to what degree those principles conflict with evaluating cost-effectiveness) seems like a pretty important part of that.
Knowing the basis of ACE’s evaluations is of course essential to deciding whether to donate to/through them, and I’d be surprised if esantorella disagreed. It’s just that this post and discussion are not only, or even mostly, about that. In my view, it would have been a far more valuable post if it had focused more tightly on that serious issue and the evidence for and against it, left out altogether small issues like the publishing and taking down of bad blog posts, and put the general discourse-norms discussion in a separate, appropriately labelled post.
Makes sense. I think the issues currently discussed are the best evidence we have, and do feel like pretty substantial evidence on this topic, but it doesn’t seem necessary to discuss that fully here.
I am glad to have you around, of course.
My claim is just that I doubt that, if the rate of posts like this had been 50% lower, you would have been substantially more likely to get involved with EA; I’d be very interested to hear that I was wrong about that.
I think that isn’t the right counterfactual, since I got into EA circles despite having only minimal (and net negative) impressions of EA-related forums. So your claim is narrowly true; but if the counterfactual were instead that my first exposure to EA was the EA forum, then yes, I think the prominence of this kind of post would have made me substantially less likely to engage.
But fundamentally if we’re running either of these counterfactuals I think we’re already leaving a bunch of value on the table, as expressed by EricHerboso’s post about false dilemmas.
“I think that people who are really enthusiastic about EA are pretty likely to stick around even when they’re infuriated by things EAs are saying. [...] If you know someone (eg yourself) who you think is a counterargument to this claim of mine, feel free to message me.”
I would guess it depends quite a bit on these people’s total exposure to EA at the time when they encounter something they find infuriating (or even just somewhat off / getting a vibe that this community probably is “not for them”).
If we’re imagining people who’ve already had 10 or even 100 hours of total EA exposure, then I’m inclined to agree with your claim and sentiment. (Though I think there would still be exceptions, and I suspect I’m at least a bit more into “try hard to avoid people bouncing for reasons unrelated to actual goal misalignment” than you.)
I’m less sure for people who are super new to EA as a school of thought or community.
We don’t need to look at hypothetical cases to establish this. My memory of events 10 years ago is obviously hazy but I’m fairly sure that I had encountered both GiveWell’s website and Overcoming Bias years before I actually got into EA. At that time I didn’t understand what they were really about, and from skimming they didn’t clear my bar of “this seems worth engaging with”. I think Overcoming Bias seemed like some generic libertarian blog to me, and at the time I thought libertarians were deluded and callous; and for GiveWell I had landed on some in-the-weeds page on some specific intervention and I was like “whatever I’m not that interested in malaria [or whatever the page was about]”. Just two of the many links you open, glance at for a few seconds, and then never (well, in this case luckily not quite) come back to.
This case is obviously very different from what we’re discussing here. But I think it serves to reframe the discussion by illustrating that there are a number of different reasons why someone might bounce off EA, depending on a number of that person’s properties, with the amount of prior exposure being a key one. I’m skeptical that any blanket statement of the type “it’s OK if people bounce for reason X” will do a good job of describing a good strategy for dealing with this issue.
I agree it’s good for a community to have an immune system that deters people who would hurt its main goals, EA included. But, and I hear you do care about calibrating on this too, we want to avoid false positives. Irving below seems like an example, and he said it better than I could: we’re already leaving lots of value on the table. I expect our disagreement is just empirical and about that, so happy to leave it here as it’s only tangentially relevant to the OP.
Aside: I don’t know about Will’s intentions; I just read his comment and your reply, and I don’t think ‘he could have made a different comment’ is good evidence about his intentions. I’m going to assume you know much more about the situation/background than I do, but if not, I do think it’s important to give people the benefit of the doubt on the question of intentions.
[Meta: in case not obvious, I want to round off this thread, happy to chat in private sometime]
I appreciate you trying to find our true disagreement here.
I think you and Khorton are misinterpreting the analogy. Buck focused on a practice that is unequivocally bad precisely so that he can establish, to the satisfaction of everyone involved in this discussion, that Will’s reasoning applies only up to a point: if a practice is judged to be sufficiently harmful, it seems appropriate to have lots of posts condemning it, even if this has some undesirable side effects. Then the question becomes: how should those who regard “cancel culture” as very harmful indeed respond, given that others do not at all share this assessment, and that continuing to write about this topic risks causing a split in the community to which both groups of people belong?
(I enclose ‘cancel culture’ in scare quotes because I am hesitant to use a term that some object to as having inappropriate connotations. It would be nice to find an expression for the phenomenon in question which we are all happy to use.)
Sure, I do appreciate the point that Buck is making. I agree with it, in fact (as the first part of my first sentence said). I just additionally found the particular X he substituted a bad choice, for reasons separate from the main point he was making. I also think Buck and I are getting closer to our real disagreement on a sister branch.
I do think your question is good here, and it decomposes the discussion into two disagreements:
1) Was this an instance of ‘cancel culture’, and if so, how bad is it?
2) What is the risk of writing about this kind of thing (causing splits) vs. the risk of not doing so?
On 1), I feel, like Neel below, that an evaluator moving charities’ ratings is a serious thing which requires a high bar of scrutiny, whereas the other two concerns outlined (the blog post and the conference) seem far more minor. I think the OP would be far better if it focused only on that and the evidence for/against it.
On 2), I think this is a discussion worth having, and that the answer is not zero risk for either side.
EDIT to add: sorry, I think I didn’t respond properly/clearly enough to your main point. I get that Buck is conditioning on 1) above, and asking: if we agree it’s really bad, then what? I just think that he was not very explicit about that. If Buck had said something like ‘I want to pick up on a minor point, and to do this I will need to condition on the world where we come to the conclusion that ACE did something unequivocally bad here...’ at the beginning, I don’t think the first part of my objections would have applied so much. EDIT to add: although I still think he should have chosen a different bad thing X.
(I’m writing these comments kind of quickly, sorry for sloppiness.)
With regard to:
“Where X is the bad thing ACE did, the situation is clearly far more nuanced as to how bad it is than something like sexual misconduct, which, by the time we have decided something deserves that label, is unequivocally bad.”
In this particular case, Will seems to agree that X was bad and concerning, which is why my comment felt fair to me.
I would have no meta-level objection to a comment saying “I disagree that X is bad, I think it’s actually fine”.
I think the meta-level objection you raised (which I understood as: there may be costs to not criticising bad things because of worry about second-order effects) is totally fair, and there is indeed some risk in this pattern (as I said in the first line of my comment). This is not what I took issue with in your comment. I see you’ve responded to our main disagreement, though, so I’ll respond on that branch.
No one is enthusiastic about sexual harassment, and actively campaigning against racism has nothing in common with sexual harassment.
Universal statements like this strike me as almost always wrong. Of course there are many similarities that seem relevant here, and simply asserting that there are none doesn’t help the discussion.
I would really quite strongly prefer not to have comments like this on the forum, so I downvoted it. I would usually have just left it at the downvote, but I think Khorton has in the past expressed a preference for having downvotes explained, so I opted for transparency.
I appreciate the self-consistency of this sentence :)
Look who’s never heard of intersectionality
While I didn’t like Khorton’s original comment, this comment comes across as spiteful and mean, while contributing little or nothing of value. I strong-downvoted it.
Seems like others agreed with you. I meant it mostly seriously.