I strongly suspect that people at Anthropic are already thinking about considerations like this when deciding what to do, and I'm not sure an anonymous post is needed here.
While I don’t like this post, I think someone should write a more detailed post along these lines to provide more context for people outside of Anthropic. Many newer people in AI safety seem to have positive feelings about Anthropic by default because of its association with EA, and a post that prompts them to think about it more carefully could be valuable.
I also don’t like this post, and I’ve deleted most of it. But I do feel this is quite important and that someone needs to say it.