Personally, I disagree strenuously. I’d like to see us say what we believe, and to hell with the consequences. I’m not naive enough to promote this as a strategy we should always use, but this is definitely one of those times. This is a post written by someone who is not a highly influential public figure, on our own forum, and the reason for deleting it is that someone, somewhere might criticise it. If a post like this can’t stay up, what can?
There will always be critics. There are people out there who think the entirety of longtermism is just a smokescreen for us to give ourselves money to pretend to work on AI safety and feel good about ourselves—should we stop posting about AI safety?
If you believe it, stick to your guns. If we start deleting things on our own forums because someone, somewhere might disapprove, the critics have already won. I find it quite alarming, actually, that people are suggesting this post be deleted. EA’s epistemics are what make us great. They’re more valuable than our large amounts of funding, because that epistemically virtuous approach to doing good is what inspired people to earn to give and provide that funding in the first place.
Yeah, I’m on the fence here. On the one hand, PR matters. No matter how nuanced the post is, people on Twitter will misconstrue it as eugenics or climate destruction, and that’s bad news. Facts and rationality do not matter to most people, but those people hold most of the power. EA is already a weird movement, and it needs to pick its battles carefully.
On the other hand, you are correct in your post.
I’d say delete it, but I really wish we could use a classification system for PR infohazards like this, so that illegible projects, à la the CIA’s projects, could be done without public eyes on them. Something like “whether it’s likely to generate bad PR” would mark a project as an infohazard until it is complete, at which point we could remove it from the infohazard list.
I’ve been PMed to say I should delete this post as it doesn’t help EA’s image with regards to eugenics. Would welcome other people’s thoughts on this