Many of the most pressing threats to humanity are far more likely to cause collapse than to be an outright existential threat with no ability for civilisation to recover.
This claim is not supported, and I think most people who study catastrophic risks (they've already coined the acronym "c-risk", sorry!) and x-risks would disagree with it.
In fact, many consider civilizational collapse fairly unlikely, although Toby Ord thinks it hasn't been properly explored (see his recent 80,000 Hours interview).
AI in particular (which many believe is easily the largest x-risk) seems quite unlikely to cause civilizational collapse, i.e. a c-risk, without also posing an x-risk.
From what I understand, the loss of welfare from a collapse is probably much less significant than the decreased ability to prevent x-risks. That said, since x-risks are thought to be mostly anthropogenic, civilizational collapse could actually significantly reduce immediate x-risk.
In general, I believe the thinking goes that we would lose only quite a small fraction of the light cone over the course of, e.g., a few centuries of delay… this is why things like "long reflection" periods seem like good ideas. I'm not sure anyone has tried to square that with the simulation hypothesis or other unknown-unknown type x-risks, which seem like they should make us discount much more aggressively. I guess the idea there is probably that most of the expected utility lies in universes with long futures, so we should prioritize our effects on them.
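As a rough illustration of why a few centuries looks like a small fraction (my own back-of-the-envelope, not a figure from the post): if reachable resources slip away at a roughly constant rate over an accessible window on the order of billions of years, then

$$\text{fraction of light cone lost} \;\approx\; \frac{\text{delay}}{\text{accessible window}} \;\sim\; \frac{10^{2}\ \text{yr}}{10^{9}\ \text{yr}} \;=\; 10^{-7},$$

though both the constant-rate assumption and the $10^{9}$-year window are simplifications I'm assuming for the sketch.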
I suspect someone with more expertise on this topic might want to respond more thoroughly; I only skimmed the post.