sammyboiz—I strongly agree. Thanks for writing this.
There seems to be no realistic prospect of solving AGI alignment or superalignment before the AI companies develop AGI or ASI. And they don’t care. There are no realistic circumstances under which OpenAI, or DeepMind, or Meta, would say ‘Oh no, capabilities research is far outpacing alignment; we need to hire 10x more alignment researchers, put all the capabilities researchers on paid leave, and pause AGI research until we fix this’. It will not happen.
Alternative strategies include formal governance work. But they also include grassroots activism, and informal moral stigmatization of AI research. I think of PauseAI as doing more of the last two, rather than just focusing on ‘governance’ per se.
As I’ve often argued, if EAs seriously think that AGI is an extinction risk, and that the AI companies seeking AGI cannot be trusted to slow down or pause until they solve the alignment and control problems, then our only realistic option is to use social, cultural, moral, financial, and government pressure to stop them. Now.