These are all good points, but I suspect it could be a mistake for EA to focus too much on PR. Very important to listen carefully to people’s concerns, but I also think we need the confidence to forge our own path.
Could you explain a bit more what you mean by “confidence to forge our own path”? If the validity of claims made about AI safety is systematically attacked because of EA connections, that seems like a strong reason to worry. In my experience, it makes it harder for many people to have an impact on AI policy.
The costs of chasing good PR are larger than they first appear: at the start you’re just talking about things differently, but soon enough it distorts your epistemics.
At the same time, these actions make less of a difference than you might expect. Some people are just looking for a reason to criticize you and will simply find another one. People will still attack you based on what happened in the past.