Paul Christiano discusses this question in his 80,000 Hours podcast interview, mainly saying that s-risks seem less tractable than AI alignment (though he also expresses some enthusiasm for working on them).