AI ethicists often imply that certain people in AI are using x-risk as a form of tech hype or PR. The idea is that signaling concern for AI x-risk makes you look concerned and responsible, while simultaneously advertising how powerful your product is and implying it will be much more powerful soon, so that people invest in your company. As a bonus, you get to signal trustworthiness and caring while ignoring the current-day harms caused by your product, and you can still find justifications for going full steam ahead on development (such as saying we have to put the moral people at the head of the race).
I think this theory explains the observed behavior here pretty well.