What is the most likely reason that s-risks are not worth working on?
Apart from the normative discussions relating to the suffering focus (cf. other questions), I think the most likely reasons are that s-risks may simply turn out to be too unlikely, or too far in the future for us to do anything about them at this point. I do not currently believe either of those claims (see here and here for more), and hence do work on s-risks, but it is possible that I will eventually conclude that s-risks should not be a top priority for one of those reasons.
Paul Christiano discusses this question in his 80,000 Hours podcast interview, mainly arguing that s-risks seem less tractable than AI alignment (while also expressing some enthusiasm for working on them).