I strongly think you’re wrong
But can you be trusted to actually think that, given what you say about the utility of publicly admitting the opinions in question? To an external observer, it’s a coin toss. And the same goes for the entirety of your reasoning. As an aside, I’d be terrified of a person who can willfully come to believe – or go through the motions of believing – what he or she holds to be morally prudent but epistemically wrong. Who knows what else can get embedded in one’s mind in this manner.
I don’t understand how this can happen on the EA Forum. Why would anyone believing in this and wanting to do good promote this?
Well, consider that, as it tends to happen in debates, people on the other side may be as perfectly sure about you being misguided and promoting harmful beliefs as you are about them; and that your proud obliviousness with regard to their rationale doesn’t do your attempt at persuasion any more good than your unwillingness to debate the object level does.
Consider, further, that your entire model of this problem space really could be wrong and founded on entirely dishonest indoctrination, both about the scholarly object level and about social dynamics and relative effects of different beliefs.
Finally, consider that some people just have a very strong aversion to the idea that a third party can have the moral and intellectual authority to tell them which thoughts are infohazards. If nothing else, that could help you understand how this can happen.
If you want to do good, why would you ever, in our world, spread these views?
Personally – because I do, in fact, believe that you are profoundly wrong; that even historically these views did not contribute to much harm (despite much misinformation concocted by partisans: the policies we know to be harmful are attributable to different systems of views); that, in general, any thesis positing a systematic relation of the pattern {views I don’t like} => {atrocities} is highly suspect and should be scrutinized (as with theists who attribute Stalin’s brutality to atheism, or derive all of morality from their particular religion); and that my views offer a reliable way to reduce the amount of suffering humans are subjected to, in many ways – from optimizing the allocation of funds, to unlocking advances in medical and educational research, to mitigating the slander and gaslighting heaped upon hundreds of millions of innocent people.
Crucially, because I believe that, all that medium-term cost-benefit analysis aside, the process of maintaining views you assume are beneficial constitutes an X-risk (actually a family of different X-risks, in Bostrom’s own classification), by comprehensively corrupting the institution of science and many other institutions. In other words: I think there is no plausible scenario where we achieve substantially more human flourishing in a hundred years – or ever – while deluding ourselves about the blank slate; that it’s you who is infecting others with the “Basilisk” thought virus. And that, say, arguments about the terrible history of some tens of thousands of people whom Americans have tortured under the banner of eugenics – after abusing and murdering millions of people whilst being first ignorant, then in denial about natural selection – miss the point entirely, both the point of effective altruism and of rational debate.
If the impact of spreading these views is more tragedies happening, more suffering, and more people dying early, please consider these views an infohazard and don’t even talk about them unless you’re absolutely sure your views are not going to spread to people who’ll become more intolerant or more violent.
This is an impossible standard and you probably know it. Risks of a given strategy must be assessed in the context of the full universe of its alternatives; else the party that gets to cherrypick which risks are worth bringing up can insist on arbitrary measures. By the way, I could provide nontrivial evidence that your views have contributed to making a great number of people more intolerant and more violent, and have caused thousands of excess deaths over the last three years; but, unlike your wholly hypothetical fearmongering, it’s likely to get me banned.
Indeed, I could ask in the same spirit: what makes people upvote you? If your logic of cherrypicking risks and demonizing comparative debate is sound, then why don’t they just disregard GiveWell and donate all of their savings to the first local pet shelter that gets to pester them with heart-rending imagery of suffering puppies? Maybe they like puppies to suffer?! This is not just manipulation: rising above such manipulation is the whole conceit of this movement, yet you commit it freely and to popular applause.
To change my mind, or the mind of anyone like me, strong and honest empirical and consequentialist arguments addressing these points are required. But that’s exactly what you say is “much less relevant” than just demanding compliance. Well. I beg to differ.
For my part, I do not particularly hope to persuade you or anyone here, and the guidelines say we should strive to limit ourselves to explaining the issue. Honestly, it’s just interesting at this point: can you contemplate the idea of being wrong, not just about “HBD” but about its consequences, or are you the definition of a mindkilled fanatic who can’t take a detached view of his own sermon and see that it’s heavy on affirmation, light on evidence?
I don’t want to be rude, but this appears to be just shoddy overuse of rationalist lingo in the name of shoehorning a myopic and empirically unsupported political agenda into the consequentialist framework.
What observed empirical effects? You link to a very strange post which says, concretely, that:
1. This person has had a falling-out with their friends who believe HBD, apparently because those friends have come to harbor other right-wing ideas poorly compatible with aspects of this person’s identity and lifestyle.
2. Those friends had drifted to the right because they felt persecuted “by people on the left or center-left” for believing HBD.
3. This person has concluded that HBD is pseudoscientific, by virtue of right-wingers being nasty to trans people and vegans.
Pardon me, what? Is this your evidence base?
№1-2 might as well be considered arguments for lesser demonization of HBD. There is nothing inherently political about thinking one way or another about the sources of cognitive differences; the political valence is imposed on such hypotheses by external forces. If smart people independently arrive at HBD as a morally neutral explanation for generally available observations, then it’s not very prudent on the part of “the left or center-left” to baselessly label them racists, supporters of genocidal far-right ideologies, insane cranks and so on, leaving them no choice except to break their own minds into an Orwellian mold, learn to live in falsehood, or go rightward. When they say, like Bostrom, that they are motivated by humanitarian impulses, they can be taken at their word.
You, however, seem to conclude that the only problem is insufficient intensity of vilification of HBD, now as a “cause area” unto itself; that these people can be intimidated into not believing what they see, through pure peer pressure and pushing the topic to the fringe instead of rational persuasion.
№3 is honestly horrifying in terms of epistemic integrity. You seem to be dismissive of truth as a terminal value, so let’s put it like this: a person who sees nothing wrong with such pseudoreasoning – and, given the score, that’s normal on the EA Forum – can delude themselves into excusing arbitrary atrocities; or, less dangerously, drain resources into arbitrarily ineffective causes just to feel good about themselves.
We don’t have a good theory, in part, because there’s no meaningful way to lump together “far-right ideas” over “the past few centuries”, or indeed to seriously analyze anything prior to the 20th century through this lens. Do you mean Jacobites or Bourbons by far-right? Why not address la Terreur as an archetypal case of the idea of egalitarianism causing mass death and suffering in the characteristic manner of an infohazard? Should this make us suspicious of egalitarian ideation in general?
Here’s an honest thought: the notion of “memetics” or “infohazards” is an infohazard in its own right. It’s bad philosophy, and it offers zero explanatory power over traditional terms like “undeservedly popular idea”, “misleading idea” or “dangerous idea”, but it gives the false impression that such adjectives have been substantiated. It’s just a way of whitewashing a classically illiberal and, indeed, totalitarian belief that some ideas must be kept away from the plebeians because they are akin to a plague. In illiberal societies those ideas are “democracy” and “independent thought”; we have a consensus that theories justifying restriction of access to those are vacuous and evil, but those theories at least had some substance, unlike the equivocation here about suffering caused by the “far right” and, by an entirely frivolous extension, HBD.
In sum, analogizing ideas and their bearers to infectious agents invading and spreading within the body politic is a staple of far-right sociology that exploits deep-seated reactions of disease-associated disgust, fear and distrust of outsiders, and that’s all there is to “memetics” in such colloquial use. Perhaps you could do without resorting to such tools for thought.
Perhaps there is, and it’s called “law” and “democracy”, and you need to argue in a principled way for a cost-benefit analysis that finds the extant legal and political checks against far-right threats insufficient and ends by embracing some of the worst totalitarian legacies in order to ostracize an apparent scientific truth.