But can you be trusted to actually think that, given what you say about the utility of publicly admitting the opinions in question? For an external observer, it’s a coin toss. The same goes for the entirety of your reasoning. As an aside, I’d be terrified of a person who can willfully come to believe – or go through the motions of believing – what he or she holds to be morally prudent but epistemically wrong. Who knows what else can get embedded in one’s mind in this manner.
> I don’t understand how this can happen on the EA Forum. Why would anyone believing in this and wanting to do good promote this?
Well, consider that, as tends to happen in debates, people on the other side may be just as sure that you are misguided and promoting harmful beliefs as you are sure of that about them; and that your proud obliviousness to their rationale does your attempt at persuasion no more good than your unwillingness to debate the object level does.
Consider, further, that your entire model of this problem space really could be wrong and founded on entirely dishonest indoctrination, both about the scholarly object level and about the social dynamics and relative effects of different beliefs.
Finally, consider that some people just have a very strong aversion to the idea that a third party can have the moral and intellectual authority to tell them which thoughts are infohazards. If nothing else, that could help you understand how this can happen.
> If you want to do good, why would you ever, in our world, spread these views?
Personally – because I do, in fact, believe that you are profoundly wrong; that even historically these views did not contribute to much harm (despite much misinformation concocted by partisans: the policies we know to be harmful are attributable to different systems of views); that, in general, any thesis positing a systematic relation of the pattern {views I don’t like} => {atrocities} is highly suspect and should be scrutinized (as with theists who attribute Stalin’s brutality to atheism, or who derive all of morality from their particular religion); and that my views offer a reliable way to reduce the amount of suffering humans are subjected to, in many ways: from optimizing the allocation of funds, to unlocking advances in medical and educational research, to mitigating the slander and gaslighting heaped upon hundreds of millions of innocent people.
Crucially, because I believe that, all that medium-term cost-benefit analysis aside, the process of maintaining the views you assume to be beneficial constitutes an X-risk (actually a family of different X-risks, in Bostrom’s own classification), by comprehensively corrupting the institution of science and many other institutions. In other words: I think there is no plausible scenario where we achieve substantially more human flourishing in a hundred years – or ever – while deluding ourselves about the blank slate; it is you who is infecting others with the “Basilisk” thought virus. And arguments about, say, the terrible history of the tens of thousands of people whom Americans tortured under the banner of eugenics – after abusing and murdering millions while being first ignorant of, then in denial about, natural selection – miss the point entirely: both the point of effective altruism and the point of rational debate.
> If the impact of spreading these views is more tragedies happening, more suffering, and more people dying early, please consider these views an infohazard and don’t even talk about them unless you’re absolutely sure your views are not going to spread to people who’ll become more intolerant or more violent.
This is an impossible standard, and you probably know it. The risks of a given strategy must be assessed against the full universe of its alternatives; otherwise, the party that gets to cherry-pick which risks are worth bringing up can insist on arbitrary measures. By the way, I could provide nontrivial evidence that your views have contributed to making a great number of people more intolerant and more violent, and have caused thousands of excess deaths over the last three years; but, unlike your wholly hypothetical fearmongering, presenting it is likely to get me banned.
Indeed, I could ask in the same spirit: what makes people upvote you? If your logic of cherry-picking risks and demonizing comparative debate is sound, why don’t they just disregard GiveWell and donate all of their savings to the first local pet shelter that pesters them with heart-rending imagery of suffering puppies? Maybe they like puppies to suffer?! This is not just any manipulation: rising above precisely this kind of manipulation is the whole conceit of this movement, yet you commit it freely and to popular applause.
To change my mind, or the mind of anyone like me, you need strong and honest empirical and consequentialist arguments addressing these points. But that is exactly what you say is “much less relevant” than simply demanding compliance. Well. I beg to differ.
For my part, I do not particularly hope to persuade you or anyone here, and the guidelines say we should strive to limit ourselves to explaining the issue. Honestly, at this point it’s just interesting: can you contemplate the idea of being wrong, not just about “HBD” but about its consequences, or are you the definition of a mindkilled fanatic who can’t take a detached view of his own sermon and see that it’s heavy on affirmation and light on evidence?