The salient question for me is: how much does reducing extinction risk change the long-run experience of moral patients? One argument is that meaningfully reducing risk would require substantial coordination, and that such coordination is likely to produce better worlds. I think it is at least as likely that reducing extinction risk results in worlds where most moral patients are used as means without regard to their suffering.
I think an AI aligned roughly to the output of all current human coordination would be net-negative. I would shift toward thinking that addressing extinction risk is more important if factory farming stopped, humanity took serious steps to address wild animal suffering, all Sustainable Development Goals were met within five years of the original timeline, and global inequality fell to something like a Gini coefficient below 0.25.