Wow, it seems like a lot of people misconstrued this post as saying that we shouldn't criticize EAs who work on cutting-edge AI capabilities. I included some confusing wording in the original version of this piece and have crossed it out. To be utterly clear, I am talking about people who work on AI safety at large AI labs.
I'm still confused, though: your key bolded claim, "While it is fine to criticize organizations in the EA community for actions that may cause harm, EAs should avoid scrutinizing other community members' personal career choices unless those individuals ask them for feedback", isn't specific to "people who work on AI safety at large AI labs"? Maybe part of the reaction was people thinking you were talking about AI capabilities work, but I think part of it is also that your arguments naturally apply to all sorts of harmful work?
"While it is fine to criticize organizations in the EA community for actions that may cause harm, EAs should avoid scrutinizing other community members' personal career choices unless those individuals ask them for feedback" isn't specific to "people who work on AI safety at large AI labs"?
That's true. It applies to a wide range of career decisions that could be considered "harmful" or suboptimal from the point of view of EA, such as choosing to develop ML systems for a mental health startup instead of doing alignment work. (I've actually been told "you should work on AI safety" several times, even after I started my current job working on giving tech.)