So, uh, does it follow that realising human extinction [or another x-risk that is not an s-risk] could be desirable in order to avoid an s-risk? (e.g. VHEMT)
Some people argue that the difference in suffering between a worst-case scenario (an s-risk) and a business-as-usual scenario is likely much larger than the difference in suffering between a business-as-usual scenario and a future without humans. If so, the priority should be reducing s-risks rather than increasing extinction risk.
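A toy way to make that comparison explicit (the notation is mine, not the commenter's, and the magnitudes are purely illustrative assumptions): write $S_{\text{worst}}$, $S_{\text{bau}}$, and $S_{\text{ext}}$ for expected suffering under the worst-case, business-as-usual, and no-humans scenarios. The claim is that

$$S_{\text{worst}} - S_{\text{bau}} \;\gg\; S_{\text{bau}} - S_{\text{ext}},$$

so reducing the probability of the worst case by some small $\delta$ averts $\delta\,(S_{\text{worst}} - S_{\text{bau}})$ expected suffering, which under this assumption is far more than the $\delta\,(S_{\text{bau}} - S_{\text{ext}})$ averted by raising extinction risk by the same $\delta$.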
Personally, I suspect there’s a lot of overlap between the risk factors for extinction and the risk factors for s-risks. In a world where extinction is a serious possibility, a lot of things would likely already be very wrong, and those same things could lead to even worse outcomes like s-risks or hyperexistential risks.
I think theoretically you could compare (1) worlds with an s-risk and (2) worlds without humans, and find that (2) is preferable to (1), in a similar way to how no longer existing is better than going to hell. One problem is that many actions that make (2) more likely also seem to make (1) more likely. Another is that effort spent on increasing the probability of (2) could instead be spent, far more effectively, on reducing the probability of (1).
I think it definitely does, if we’re in a situation where an s-risk is on the horizon with some sufficient (and “sufficient” is subjective) probability. Also consider https://carado.moe/when-in-doubt-kill-everyone.html (and the author’s subsequent updates).
… of course, the whole question is subjective, in the sense of being a moral one.
“You didn’t trust yourself,” Hirou whispered. “That’s why you had to touch the Sword of Good.”