Making people happy is valuable; making happy people is probably not valuable. There is an asymmetry between suffering and happiness: it is more morally important to mitigate suffering than to create happiness.
Wait, how does your 93% disagree tie in with your support for PauseAI?
I support PauseAI much more because I want to reduce the future probability and prevalence of intense suffering (including, but not limited to, s-risks) caused by powerful AI, and much less because I want to reduce the risk of human extinction from powerful AI.
However, couching demands for an AGI moratorium in terms of “reducing x-risk” rather than “reducing suffering” seems:
- More robust to the kind of backfire risk that suffering-focused people at e.g. CLR are worried about
- More effective in communicating catastrophic AI risk to the public