In what world could isolation and mutual dismissal between AIS&L people and people working on neartermist problems be helpful?
If the people you ultimately want to influence are the technophiles who are building AI, who regard most near-term ‘AI safety’ people as annoying scolds and culture warriors, it could be good to clearly differentiate yourself from them. If existential safety people get a reputation as reliable collaborators, employees and allies who don’t support the bad behaviour of many AI bias people, this could put us in a good position.
I think I disagree with the general direction of this comment but it’s hard to state why, so I’ll just outline an alternative view:
Many people are building cutting-edge AI. Many of them are sympathetic to at least some safety and ethics concerns, and some are not sympathetic to any of them.
Of course it is good to have a reputation as a good collaborator and employee. It seems only instrumentally valuable to be an “ally” to cutting-edge researchers, and at some point you have to be honest and tell those building AI that what they’re doing is interesting but carries risks in addition to potential upsides.
Part of building a good reputation in the field involves honestly assessing others’ work. If you agree with work from AI safety or AI ethics or AI bias people, you should just agree with them. If you disagree with their work, you should just disagree with them. “Distancing” yourself from and “aligning” yourself with certain camps is the kind of strategic move that people in research labs often view as vaguely dishonest or overly political.
Part of building a good reputation in the field involves honestly assessing others’ work. If you agree with work from AI safety or AI ethics or AI bias people, you should just agree with them. If you disagree with their work, you should just disagree with them.
Yes, I agree with this. I think in general there is a fair bit of social pressure to give credence to intellectually weak concerns about ‘AI bias’ etc., which is part of what technophiles dislike, even if they can’t say so publicly. Pace your first sentence, I think that self-censorship is helpful for building reputation in some fields. As such, I expect honestly reporting an epistemically rigorous evaluation of these arguments will often suffice to cause ‘isolation and mutual dismissal’ from Gebru-types, even while it is positive for your reputation among ‘builder’ capabilities researchers.
Note that in general existential safety people have put a fair bit of effort into trying to cultivate good relations with near-term AI safety people. The lowest-hanging fruit implied by the argument above is simply to pull back on these activities.