Some people argue that the difference in suffering between a worst-case scenario (s-risk) and a business-as-usual scenario is likely much larger than the difference in suffering between a business-as-usual scenario and a future without humans. This suggests focusing on ways to reduce s-risks rather than on increasing extinction risk.
Personally, I suspect there’s a lot of overlap between risk factors for extinction and risk factors for s-risks. In a world where extinction is a serious possibility, many things would likely have gone badly wrong, and those same failures could lead to even worse outcomes like s-risks or hyperexistential risks.
I think theoretically you could compare (1) worlds with s-risk and (2) worlds without humans, and find that (2) is preferable to (1), in a similar way to how no longer existing is better than going to hell. One problem is that many actions that make (2) more likely also seem to make (1) more likely. Another issue is that effort spent on increasing the likelihood of (2) could instead be much better spent on reducing the risk of (1).