I have no idea how influential the above factor is within the AI safety community (by which I mean, roughly, the set of all current and aspiring AI safety researchers).
Where the reasoning behind their views isn’t obviously bad, my guess is that it’s considered “cool” to say things that are unpopular or scary, but not outright unacceptable, since the rationality community has been built in part on doing exactly that.
(As an aside, I’m not sure what the definition or boundary of the “rationality community” is, but clearly not all AI safety researchers are part of it.)