More generally, I think our disagreement here probably comes down to something like this:
There’s a tradeoff between having a culture where true and important things are easy to say, and a culture where group X feels maximally welcome. As you say, if we’re skillful we can do both, by being careful about our language, always sounding charitable, and not repeatedly making similar posts.
But this comes at a cost. I personally feel much less excited about writing about certain topics because I’d have to be super careful about them. And most of the EAs I know, especially those who have some amount of authority among EAs, feel much more restricted than I do. I think that this makes EA noticeably worse, because it means that it’s much harder for these EAs to explain their thoughts on things.
And so I think it’s noticeably costly to criticise people for not being more careful and tactful. It’s worth it in some cases, but we should remember that cost when we’re considering pushing people to be more careful and tactful.
I personally think that “you shouldn’t write criticisms of an org for doing X, even when the criticisms are accurate and X is bad, because criticising X has cultural connotations” goes too far in the direction of “restrict people’s ability to say true things, for the sake of making people feel welcome”.
(Some context here is that I wrote a Facebook post about ACE with similar content to this post last September.)
I don’t disagree with any of that. I acknowledge there is a real cost in trying to make people feel welcome on top of performing the community service of speaking up about bad practice (leaving aside exactly how bad what happened was).
I just think there is also a cost, which you are undervaluing and not acknowledging here, on the other side of that trade-off. Maybe we disagree on the exchange rate between the two (welcomingness and unfiltered, candid communication)?
I think that becoming more skillful at doing both well is important for a community like ours. It’s OK if that’s not your personal priority right now, but I would like community norms to reward learning that skill more. My view is that Will’s comment was doing just that, and I upvoted it as a result. (I’m not saying you disagree with the content of his comment; in fact, you said you agreed with it. But in my view your reply demonstrated that you didn’t fully grok it nevertheless.)
I am not sure whether it’s a net cost that some people will be put off EA by posts like this, because people who would bounce off EA for that reason aren’t obviously net-positive to have in EA. (My main model here is that the behavior described in this post is pretty obviously bad, and that the kind of SJ-sympathetic EAs who I expect to be net sources of value probably agree that it is bad. Secondarily, I think that people who are really enthusiastic about EA are pretty likely to stick around even when they’re infuriated by things EAs are saying. For example, when I was fairly new to the EA community in 2014, I felt really mad about the many EAs who dismissed the moral patienthood of animals for reasons I thought were bad, but EAs were so obviously my people that I stuck around nevertheless. If you know someone (e.g. yourself) who you think is a counterexample to this claim of mine, feel free to message me.)
But I think that there are some analogous topics where it is indeed costly to alienate people. For example, I think it’s pretty worthwhile for me as a longtermist to be nice to people who prioritize animal welfare and global poverty, because I think that many people who prioritize those causes make EA much stronger. For different reasons, I think it’s worth putting some effort into not mocking religions or political views.
In cases like these, I mostly agree with “you need to figure out the exchange rate between welcomingness and unfiltered conversations”.
I think that becoming more skillful at doing both well is important for a community like ours. It’s OK if that’s not your personal priority right now, but I would like community norms to reward learning that skill more. My view is that Will’s comment was doing just that, and I upvoted it as a result.
I guess I expect that the net result of Will’s comment was more to punish Hypatia than to push community norms in a healthy direction. If he wanted to push norms without harming someone who was basically just saying true and important things, I think he should have made a separate top-level post, and he also shouldn’t have made his other top-level comment.
(I’m not saying you disagree with the content of his comment; in fact, you said you agreed with it. But in my view your reply demonstrated that you didn’t fully grok it nevertheless.)
There’s a difference between understanding a consideration and thinking that it’s the dominant consideration in a particular situation :)
I bounce off posts like this. Not sure if you’d consider me net positive or not. :)
I do too, FWIW. I read this post and its comments because I’m considering donating to/through ACE, and I wanted to understand exactly what ACE did and what the context was. Reading through a sprawling, nearly 15k-word discussion mostly about social justice and discourse norms was not conducive to that goal.
Presumably knowing the basis of ACE’s evaluations is one of the most important things to know about ACE? And knowing to what degree social justice principles are part of that evaluation (and to what degree those principles conflict with evaluating cost-effectiveness) seems like a pretty important part of that.
Knowing the basis of ACE’s evaluations is of course essential to deciding whether to donate to/through them, and I’d be surprised if esantorella disagreed. It’s just that this post and its discussion are not only, or even mostly, about that. In my view, it would have been a far more valuable post if it had focused more tightly on that serious issue and the evidence for and against it, left out small issues like publishing and taking down bad blog posts altogether, and put the general discourse-norms discussion in a separate, appropriately labelled post.
Makes sense. I think the issues currently discussed feel like the best evidence we have, and do feel like pretty substantial evidence on this topic, but it doesn’t seem necessary to discuss that fully here.
I am glad to have you around, of course.
My claim is just that I doubt that, if the rate of posts like this had been 50% lower, you would have been substantially more likely to get involved with EA; I’d be very interested to hear that I’m wrong about that.
I think that isn’t the right counterfactual, since I got into EA circles despite having only minimal (and net-negative) impressions of EA-related forums. So your claim is narrowly true; but if instead the counterfactual were that my first exposure to EA was the EA Forum, then yes, I think the prominence of this kind of post would have made me substantially less likely to engage.
But fundamentally, if we’re running either of these counterfactuals, I think we’re already leaving a bunch of value on the table, as expressed in EricHerboso’s post about false dilemmas.
I think that people who are really enthusiastic about EA are pretty likely to stick around even when they’re infuriated by things EAs are saying.
[...]
If you know someone (e.g. yourself) who you think is a counterexample to this claim of mine, feel free to message me.
I would guess it depends quite a bit on these people’s total exposure to EA at the time when they encounter something they find infuriating (or even just somewhat off, or that gives them the vibe that this community is probably “not for them”).
If we’re imagining people who’ve already had 10 or even 100 hours of total EA exposure, then I’m inclined to agree with your claim and sentiment. (Though I think there would still be exceptions, and I suspect I’m at least a bit more into “try hard to avoid people bouncing for reasons unrelated to actual goal misalignment” than you.)
I’m less sure for people who are super new to EA as a school of thought or community.
We don’t need to look at hypothetical cases to establish this. My memory of events 10 years ago is obviously hazy, but I’m fairly sure that I had encountered both GiveWell’s website and Overcoming Bias years before I actually got into EA. At that time I didn’t understand what they were really about, and from skimming they didn’t clear my bar of “this seems worth engaging with”. I think Overcoming Bias seemed like some generic libertarian blog to me, and at the time I thought libertarians were deluded and callous; and for GiveWell I had landed on some in-the-weeds page on some specific intervention, and I was like “whatever, I’m not that interested in malaria [or whatever the page was about]”. Just two of the many links you open, glance at for a few seconds, and then never (well, in this case luckily not quite) come back to.
This case is obviously very different from what we’re discussing here. But I think it serves to reframe the discussion by illustrating that there are a number of different reasons why someone might bounce off EA, depending on that person’s properties, with the amount of prior exposure being a key one. I’m skeptical that any blanket statement of the type “it’s OK if people bounce for reason X” will do a good job of describing a good strategy for dealing with this issue.
I agree it’s good for a community to have an immune system that deters people who would hurt its main goals, EA included. But, and I hear that you care about calibrating on this too, we want to avoid false positives. Irving below seems like an example, and he said it better than I could: we’re already leaving lots of value on the table. I expect our disagreement is just an empirical one about that, so I’m happy to leave it here, as it’s only tangentially relevant to the OP.
Aside: I don’t know about Will’s intentions; I just read his comment and your reply, and I don’t think “he could have made a different comment” is good evidence about his intentions. I’m going to assume you know much more about the situation/background than I do, but if not, I do think it’s important to give people the benefit of the doubt on the question of intentions.
[Meta: in case not obvious, I want to round off this thread, happy to chat in private sometime]
I appreciate you trying to find our true disagreement here.