Raemon—I strongly agree, and I don’t think EAs should be overthinking this as much as we seem to be in the comments here. Some ethical issues are, actually, fairly simple.
OpenAI, DeepMind, Meta, and even Anthropic are pushing recklessly ahead with AGI capabilities development. We all understand the extinction risks and global catastrophic risks that this imposes on humanity. These companies are not aligned with EA values of preserving human life, civilization, and sentient well-being.
Therefore, instead of 80,000 Hours advertising jobs at such companies, which effectively gives them our EA seal of moral approval, we should be morally stigmatizing them, denouncing them, and discouraging people from working for them.
If we adopt a ‘sophisticated’, ‘balanced’, mealy-mouthed approach where we kinda sorta approve of them recruiting EAs, but only in particular kinds of safety roles, in the hope of influencing their management from the inside, we are likely to (1) fail to influence management, and (2) undermine our ability to use a moral stigmatization strategy to slow or pause AGI development.
In my opinion, if EAs banded together to advocate an immediate pause on any further AGI development, and adopted a public-relations strategy of morally stigmatizing any work in the AI industry, we would be much more likely to reduce AI extinction risk than if we spent our time trying to play 4-D chess in figuring out how to influence AI companies from the inside.
Some industries are simply evil and reckless, and it’s good for us to say so.
Let’s be honest with ourselves. The strategy we’ve followed for a decade, of trying to influence AI companies from the inside, to slow capabilities development and to promote AI alignment work, has failed. The strategy of trying to promote government regulation to slow reckless AI development is showing some signs of success, but is probably too slow to actually inhibit AI capabilities development. This leaves the informal public-relations strategy of stigmatizing the industry, to reduce its funding, restrict its access to talent, and make it morally embarrassing rather than cool to work in AI.
But EAs can only pursue the moral stigmatization strategy to slow AGI development if we are crystal clear that working on AGI development is a moral evil that we cannot endorse.