But guess what: epistemic integrity on something like this (I believe something pretty reprehensible and am not bowing to people who tell me so) isn’t going to help with shrimp welfare or AI risk prevention, or even malaria net provision. Do not mistake “sticking with your beliefs” for an overriding good, above believing what’s true, acting kindly towards the world, or acting like a serious member of a civilisation where we all need to work together.
There was recently a post on Less Wrong about the concept of information that is “infohazardous if true”.
Given the observed empirical effects of having certain beliefs about racial differences, it seems plausible to me that certain claims about racial differences fall into the “infohazardous if true” category.
I haven’t previously heard anyone in EA say that it’s vital for our epistemic integrity to freely discuss infohazards. I don’t see why this case should be different.
Far-right ideas have created enormous suffering over the past few centuries. As far as I know, we don’t have a great theory for how this happened. But it seems fairly clear that it has something to do with memetics—if far-right ideas remain on the fringe, they will do a limited amount of harm; if far-right ideas become politically dominant, there’s a chance they’ll do a great deal of harm.
So, it seems that the best way to prevent far-right ideas from doing a ton of harm is to keep them on the fringe. This is fundamentally a pretty scary situation, because memetics is poorly understood. It would be much better if we had a robust, principled method to guard against harms from far-right ideas. But I don’t think such a method exists. Until it does, we have to operate on a “best guess” basis.
Thanks for writing this!
I’ve seen some posts on this forum discussing HBD as an is/ought issue—something like: HBD is an “is”, racial inegalitarianism is an “ought”, and you can’t derive an ought from an is.
I used to find this argument really compelling, and I still think it’s powerful and underrated. But recently I’ve become more skeptical of it.
I think the is/ought boundary is not actually that firm. For example, consider the statement: “Most communities would be better off if adulterers received severe social sanction.”
You could argue this is an “ought” claim. A person who says “adultery is deeply immoral” is essentially saying we should apply severe social sanction to adulterers.
You could also argue this is an “is” claim which is empirically testable: define a welfare metric, identify some communities, randomly assign half to a “shame adulterers” condition, and see how the welfare metric is affected.
In the same vein, even if you believe you’re a “high decoupler”, there’s a good chance you don’t decouple as much as you think. Advertising is a multi-billion-dollar industry even though people claim ads don’t affect them. Humans are vulnerable to biases like the affect heuristic. We aren’t perfect logical reasoners, especially when tribal politics are involved. The “pipeline” you describe may show that lots of “high decoupler” types are low decouplers in practice.
And, even if you believe you’re a “high decoupler”, you have to acknowledge that the world is full of “low decouplers”. I strongly agree with the arguments Coleman Hughes makes in this discussion with Charles Murray, re: negative societal effects of widespread HBD discussion.
I think a reasonable takeaway from the recent SBF tragedy is that on the margin, we should defer more to mainstream elite opinion (in SBF’s case, crypto skepticism). And mainstream elite opinion says you don’t talk about race & IQ. Maybe that’s an adaptive response to an information hazard. Chesterton’s Fence comes to mind.