Saying terrifying things can be costly, both socially and reputationally (and there’s also the possible side effect of, well, making people terrified).
Is this the case in the AI safety community? As long as the reasoning behind someone's views isn't obviously bad, I would guess that it's "cool" to say unpopular or scary but not unacceptable things, because the rationality community has been built in part on that norm.
I have no idea to what extent the above factor is influential in the AI safety community (by which I mean, roughly, the set of all current and aspiring AI safety researchers).
(As an aside, I'm not sure how the "rationality community" is defined or where its boundary lies, but obviously not all AI safety researchers are part of it.)