I upvoted this post and think it’s a good contribution. The EA community as a whole has done damage to itself the past few days. But I’m worried about what it would mean to support having less epistemic integrity as a community.
This post says both:
If you believe there are racial differences in intelligence, and your work forces you to work on the hard problems of resource allocation or longtermist societal evolution, nobody will trust you to do the right tradeoffs.
and
If he’d said, for instance, “hey I was an idiot for thinking and saying that. We still have IQ gaps between races, which doesn’t make sense. It’s closing, but not fast enough. We should work harder on fixing this.” That would be more sensible. Same for the community itself disavowing the explicit racism.
The first quote says believing X (that there exists a racial IQ gap) is harmful and will result in nobody trusting you. The second says X is, in fact, true.[1]
For my own part, I will trust someone less if they endorse statements they think are false. I would also trust someone less if they seemed weirdly keen on having discussions that kinda seem racist. Unfortunately, it seems we’re basically having to decide between these two options.
My preferred solution is to—while being as clear as possible about the context, and taking great care not to cause undue harm—maintain epistemic integrity. I think “compromising your ability to say true, relevant things in order to be trusted more” is the kind of galaxy-brain PR move that probably doesn’t work. You incur the cost of decreased epistemic integrity, and then don’t fool anyone else anyway. If I can lose someone’s trust by saying something true in a relevant context,[2] then keeping their trust was a fabricated option.
I’m left not knowing what this post wants me to do differently. When I’m in a relevant conversation, I’m not going to lie or dissemble about my beliefs, although I will do my best to present them empathetically and in a way that minimizes harm. But if the main thrust here is “focus somewhat less on epistemic integrity,” I’m not sure what a good version of that looks like in practice, and I’m quite worried about it being taken as an invitation to be less trustworthy in the interest of appearing more trustworthy.
[1] I’ve seen other discussions where someone seems to both claim “the racial IQ gap is shrinking / has no genetic component / is environmentally caused” and “believing there is a racial IQ gap is, in itself, racist.”
[2] I think another point of disagreement might be whether this has been a relevant context to discuss race and IQ. My position is that if you’re in a discussion about how to respond to a person saying X, you’re by necessity also in a discussion about whether X is true. You can’t have the first conversation and completely bracket the second, as the truth or falsity of X is relevant to whether believing X is worthy of criticism.