I don’t want to engage with your arguments. I strongly think you’re wrong, but it seems much less relevant to what I can contribute (or generally want to engage with) than the fact that you’ve posted that comment and people have upvoted it.
I don’t understand how this can happen on the EA Forum. Why would anyone believing in this and wanting to do good promote this?
If anyone here does believe in ideas that have caused a great amount of harm and will cause more if spread, they should not spread them. If what you’re arguing about is not the specific arguments, which you think could be made better and improved in such-and-such a way, but the views themselves, don’t! If you want to do good, why would you ever, in our world, spread these views? If the impact of spreading these views is more tragedies happening, more suffering, and more people dying early, please consider these views an infohazard and don’t even talk about them unless you’re absolutely sure your views are not going to spread to people who’ll become more intolerant or more violent.
If you, as a rationalist, came up with a Basilisk that you thought actually works, your very belief that it works should be a really strong reason not to post it or talk about it, ever.
The feeling of successfully persuading people (or even just engaging in interesting arguments), as good as it might be, isn’t worth a single tragedy that would result from spreading these kinds of ideas. Please think about the impact of your words. If people persuaded by what you say might do harm, don’t.
One day, if the kindest of rationalists do solve alignment and enough time passes for humanity to become educated and caring, the AI will tell us what the truth is without a chance of it doing any harm. If you’re right, you’ll be able to say, “I was right all along, and all these woke people were not, and my epistemology was awesome”. Before then, please, if anyone might believe you, don’t tell them what you consider to be the truth.
But can you be trusted to actually think that, given what you say about the utility of publicly admitting the opinions in question? For an external observer, it’s a coin toss. And the same goes for the entirety of your reasoning. As an aside, I’d be terrified of a person who can willfully come to believe – or go through the motions of believing – what he or she holds to be morally prudent but epistemically wrong. Who knows what else can get embedded in one’s mind in this manner.
I don’t understand how this can happen on the EA Forum. Why would anyone believing in this and wanting to do good promote this?
Well, consider that, as tends to happen in debates, people on the other side may be just as sure that you are misguided and promoting harmful beliefs as you are about them; and that your proud obliviousness to their rationale does your attempt at persuasion no more good than your unwillingness to debate the object level does.
Consider, further, that your entire model of this problem space really could be wrong and founded on entirely dishonest indoctrination, both about the scholarly object level and about social dynamics and the relative effects of different beliefs.
Finally, consider that some people just have a very strong aversion to the idea that a third party can have the moral and intellectual authority to tell them which thoughts are infohazards. If nothing else, that could help you understand how this can happen.
If you want to do good, why would you ever, in our world, spread these views?
Personally – because I do, in fact, believe that you are profoundly wrong; that even historically these views did not contribute to much harm (despite much misinformation concocted by partisans: the policies we know to be harmful are attributable to different systems of views); that, in general, any thesis positing a systematic relation of the pattern {views I don’t like} => {atrocities} is highly suspect and should be scrutinized (as with theists who attribute Stalin’s brutality to atheism, or who derive all of morality from their particular religion); and that my views offer a reliable way to reduce the amount of suffering humans are subjected to, in many ways: from optimizing the allocation of funds, to unlocking advances in medical and educational research, to mitigating the slander and gaslighting heaped upon hundreds of millions of innocent people.
Crucially, because I believe that, all that medium-term cost-benefit analysis aside, the very process of maintaining the views you assume to be beneficial constitutes an X-risk (actually a family of different X-risks, in Bostrom’s own classification), by comprehensively corrupting the institution of science and many other institutions. In other words: I think there is no plausible scenario where we achieve substantially more human flourishing in a hundred years – or ever – while deluding ourselves about the blank slate; it’s you who is infecting others with the “Basilisk” thought virus. And arguments about, say, the terrible history of the tens of thousands of people whom Americans tortured under the banner of eugenics – after abusing and murdering millions while first ignorant of, then in denial about, natural selection – miss the point entirely, both the point of effective altruism and that of rational debate.
If the impact of spreading these views is more tragedies happening, more suffering, and more people dying early, please consider these views an infohazard and don’t even talk about them unless you’re absolutely sure your views are not going to spread to people who’ll become more intolerant or more violent.
This is an impossible standard, and you probably know it. The risks of a given strategy must be assessed against the full universe of its alternatives; otherwise the party that gets to cherrypick which risks are worth bringing up can insist on arbitrary measures. By the way, I could provide nontrivial evidence that your views have contributed to making a great number of people more intolerant and more violent, and have caused thousands of excess deaths over the last three years; but, unlike your wholly hypothetical fearmongering, that is likely to get me banned.
Indeed, I could ask in the same spirit: what makes people upvote you? If your logic of cherrypicking risks and demonizing comparative debate is sound, why don’t they just disregard GiveWell and donate all of their savings to the first local pet shelter that pesters them with heart-rending imagery of suffering puppies? Maybe they like puppies to suffer?! This is not just any manipulation: rising above exactly this kind of manipulation is the whole conceit of this movement, yet you commit it freely and to popular applause.
To change my mind, or the mind of anyone like me, you would need strong and honest empirical and consequentialist arguments addressing these points. But that is exactly what you say is “much less relevant” than just demanding compliance. Well. I beg to differ.
For my part, I do not particularly hope to persuade you or anyone here, and the guidelines say we should strive to limit ourselves to explaining the issue. Honestly, it’s just interesting at this point: can you contemplate the idea of being wrong, not just about “HBD” but about its consequences, or are you the very definition of a mindkilled fanatic who can’t take a detached view of his own sermon and see that it’s heavy on affirmation and light on evidence?