I’d like to add that I think there are ways in which safety work gets done without anyone deliberately working on ‘AI safety’. This isn’t in conflict with what you said, but it does mean that even if the people who want to work on safety don’t end up doing so, there could still be people doing the jobs of AI safety researchers.
It seems plausible to me that a person could end up working on AI and be pushed by economic incentives toward a topic related to safety (e.g. Google wants to build TAI → they want to better understand what is going on in their deep neural nets → they get some AI researcher to work on interpretability → [the AI becomes a bit more interpretable and possibly safer]). In this case the people involved may not be thinking about safety, but if they are doing the jobs of the people who would be, then I don’t think it really matters.
I do still think that, on net, people should work on AI safety, but this seems like a reasonable counterargument.