This is a good question, but I worry you can make this argument about many ideas, and the cost of self-censorship is really not worth it. For example:
- If we talk too much about how much animals are suffering, someone might conclude humans are evil.
- If we talk too much about superintelligence, someone might conclude AI is superior and deserves to outlive us.
- If we talk too much about the importance of the far future, a maximally evil supervillain could actually become more motivated to increase x-risk.
As a semi-outsider working on the fringes of this community, I get the impression that EA is far too concerned about what is good or bad to talk about. Some ideas, posts, and words may have negative EV in the short run, but I feel that is outweighed by the value of vigorous debate and the capacity for free thinking.
On a more serious note, I am philosophically concerned about the argument “the possibility of s-risks implies we should actually increase x-risk”, and am actively working on this. Happy to talk more if it’s of mutual interest.
Thank you for your reply. I would not wish to advocate for self-censorship, but I would be interested in creating and spreading arguments against the efficacy of doomsday projects, which may help to avert them.