Could you explain a bit more what you mean by “confidence to forge our own path”? I think if the validity of the claims made about AI safety is systematically attacked due to EA connections, there is a strong reason to worry about this. I find that it makes it more difficult for a bunch of people to have an impact on AI policy.
The costs of chasing good PR are larger than they first appear: at the start you’re just talking about things differently, but soon enough it distorts your epistemics.
At the same time, these actions make less of a difference than you might expect. Some people are just looking for a reason to criticize you and will find a different one. People will still attack you based on what happened in the past.