I’ll limit myself to one (multi-part) follow-up question for now —
Suppose someone in our community decides not to defer to the claimed “scientific consensus” on this issue (which I’ve seen claimed both ways), and looks into the matter themselves, and, for whatever reason, comes to the opposite conclusion that you do. What advice would you have for this person?
I think this is a relevant question because, based in part on comments and votes, I get the impression that a significant number of people in our community are in this position (maybe more so on the rationalist side?).
Let’s assume they try to distinguish between the two senses of “racism” that you mention, and try to treat all people respectfully and fairly. They don’t make a point of trumpeting their conclusion, since it’s not likely to make people feel good, and is generally not very helpful since we interact with individuals rather than distributions, as you say.
Let’s say they also try to examine their own biases and take into account how those might have influenced their interpretation of various claims and pieces of data. But after doing that, their honest assessment is still the same.
Beyond not broadcasting their view, and trying to treat people fairly and respectfully, would you say that they should go further, and pretend not to have reached the conclusion that they did, if it ever comes up?
Would you have any other advice for them, other than maybe something like, “Check your work again. You must have made a mistake. There’s an error in your thinking somewhere.”?
I would have to think more on this to have a super confident reply. See also my point in response to Geoffrey Miller elsewhere here—there are lots of considerations at play.
One view I hold, though, is something like “the optimal amount of self-censorship, by which I mean not always saying things that you think are true/useful, in part because you’re considering the [personal/community-level] social implications thereof, is non-zero.” We can of course disagree on the precise amount/contexts for this, and sometimes it can go too far. And by definition in all such cases you will think you are right and others wrong, so there is a cost. But I don’t think it is automatically/definitionally bad for people to do that to some extent, and indeed much of the progress on issues like civil rights, gay rights, etc. in the US has resulted in large part from actions getting ahead of beliefs among people who didn’t “get it” yet, with cultural/ideological change gradually following via generational replacement, pop-culture shifts, etc. Obviously people rarely think that they are in the wrong, but it’s hard to be sure, and I don’t think we [the world, EA] should be aiming for a culture where there are never repercussions for expressing beliefs that, in the speaker’s view, are true. Again, that’s consistent with people disagreeing about particular cases; I’m just sharing my general view here.
This shouldn’t only work in one ideological “direction,” of course, which may be a crux in how people react to the above. Some may see the philosophy above as (exclusively) an endorsement of wokism/cancel culture etc. in its entirety/current form [insofar as that is a coherent thing, which I’m not sure it is]. While I am probably less averse to some of those things than some LW/EAF readers, especially those on the rationalist side, I also think people should remember that restraint can be positive in many contexts. For example, in my efforts to engage and in my social media activity lately, I am trying to be careful to be respectful to people who identify strongly with the communities I am critiquing, and I have held back some spicy jokes (e.g. playing on the “I like this statement and think it is true” line, which just begs for memes), precisely because I want to avoid alienating people who might be receptive to the object-level points I’m making, and because I don’t want to unduly egg on critiques by other folks on social media who I think sometimes go too far in attacking EAs, etc.