Nonexistence is preferable to intense suffering, and I think there are enough S-risks among the array of possible futures ahead of us that we should prioritize reducing S-risks over X-risks, except when reducing X-risks is instrumental to reducing S-risks. So, to be specific, I would only agree with this to the extent that "value" == lack of suffering. I do not think we should build for a utopia that might never come to pass because we wipe ourselves out first; I just think it is vastly more important to prevent dystopia.