Wow, it seems like a lot of people misconstrued this post as saying that we shouldn't criticize EAs who work on cutting-edge AI capabilities. I included some confusing wording in the original version of this piece and have crossed it out. To be utterly clear, I am talking about people who work on AI safety at large AI labs.
I'm still confused, though: your key bolded claim, "While it is fine to criticize organizations in the EA community for actions that may cause harm, EAs should avoid scrutinizing other community members' personal career choices unless those individuals ask them for feedback," isn't specific to "people who work on AI safety at large AI labs"? Maybe part of the reaction was people thinking you were talking about AI capabilities work, but I think part of it is also that your arguments naturally apply to all sorts of harmful work.
"While it is fine to criticize organizations in the EA community for actions that may cause harm, EAs should avoid scrutinizing other community members' personal career choices unless those individuals ask them for feedback" isn't specific to "people who work on AI safety at large AI labs"?
That's true. It applies to a wide range of career decisions that could be considered "harmful" or suboptimal from the point of view of EA, such as choosing to develop ML systems for a mental health startup instead of doing alignment work. (I've actually been told "you should work on AI safety" several times, even after I started my current job working on giving tech.)