Wait, how does your 93% disagree tie in with your support for PauseAI?
I support PauseAI much more because I want to reduce the future probability and prevalence of intense suffering (including, but not limited to, s-risks) caused by powerful AI, and much less because I want to reduce the risk of human extinction from powerful AI.
However, couching demands for an AGI moratorium in terms of “reducing x-risk” rather than “reducing suffering” seems:
- More robust to the kind of backfire risk that suffering-focused people at, e.g., CLR are worried about
- More effective for communicating catastrophic AI risk to the public