Part of building a good reputation in the field involves honestly assessing others’ work. If you agree with work by AI safety, AI ethics, or AI bias people, you should just agree with them. If you disagree with their work, you should just disagree with them.
Yes, I agree with this. I think in general there is a fair bit of social pressure to give credence to intellectually weak concerns about ‘AI bias’ etc., which is part of what technophiles dislike, even if they can’t say so publicly. Pace your first sentence, I think that self-censorship is helpful for building reputation in some fields. As such, I expect honestly reporting an epistemically rigorous evaluation of these arguments will often suffice to cause ‘isolation and mutual dismissal’ from Gebru-types, even while it is positive for your reputation among ‘builder’ capabilities researchers.
Note that in general existential safety people have put a fair bit of effort into trying to cultivate good relations with near-term AI safety people. The lowest hanging fruit implied by the argument above is to simply pull back on these activities.