I’m quite confused about that too. I don’t know of any real statistics, but my informal impression is that almost everyone is on board with not speeding up capabilities work. There’s a vague argument floating around that actively impeding capabilities work would do nothing but burn bridges (which doesn’t seem right in full generality, since animal rights groups manage to influence whole production chains to switch to more humane methods that form a new market equilibrium). Meanwhile, the pitches for AI safety work always stress all the ways in which the groups will be careful not to work on anything that might differentially benefit capabilities, and will keep everything secret by default unless they’re very sure it won’t enhance capabilities. So I think my intuition that this is the dominant view is probably not far off the mark.
But the recruiting for non-safety roles is (seemingly) in complete contradiction to that. That’s what I’m completely confused about. Maybe the idea is that the organizations can be pushed in safer directions if more safety-conscious people work at them, so it’s good to recruit EAs into them, since they’re more likely to be safety-conscious than random ML people. (But the EAs you’d want to recruit for that are probably not the usual ML EAs, but rather ML EAs who are also really good at office politics.) Or maybe these groups are actually very safety-conscious and years ahead of everyone else, and are only gradually releasing work they completed years ago to keep investors happy, while keeping all the really dangerous stuff completely secret.