That said, I think that, personally, my main reason for concern about such events was in any case not that they might fairly directly lead to extinction.
Rather, it was that such events might:
Trigger other bad events (e.g., further conflict, development and/or deployment of dangerous technologies) that ultimately lead to extinction
Lead to humanity never recovering from the collapse, or
Lead to a worse long-term trajectory, meaning any scenario in which humanity survives and regains industrial civilization, but with substantially worse outcomes than could’ve been achieved. One of many ways this could occur is through negative changes in values.
(I think my views on this are pretty similar to those Beckstead expresses here)
I think this post has updated me towards somewhat less concern about such events causing extinction by triggering other bad events.
This is partly because you provide some arguments that conflict in the aftermath wouldn’t be extreme or would be survivable.
(That said, I’m not sure how convincing I found those particular parts—I might expand on this in another comment—and I’m a bit confused about why WMDs were mentioned in Case 2 but not Case 1 or Case 3.)
But it hasn’t caused a major update regarding the other two of those pathways
Which is fair enough—one post can’t cover everything, and you explicitly noted that you’re setting those matters aside for followup posts
Relatedly, I’m excited to read those followup posts!
[written quickly, sorry]
One indication of my views is this comment I made on Luisa’s post (emphasis added):
I think “[the period before recovery might be only] on the order of 100 years” offers little protection if we think we’re living at an especially “hingey” time; a lot could happen in this specific coming 100 years, and the state society is in when those key events happen could be a really big deal.
Also, I agree that society simply remains small or technologically stagnant or whatever indefinitely seems very unlikely. But I’m more worried about either:
“Big Deal Events” happening during the “shaken up” period (all very roughly speaking, of course!) and thus being handled worse, or
Failure to recover on some other dimensions of civilization, e.g. political and moral progress
See also https://forum.effectivealtruism.org/posts/qY5q2QTG44avBbNKn/modelling-the-odds-of-recovery-from-civilizational-collapse
Background thought: I think the potential value of the future is probably ridiculously huge, and there are probably many plausible futures where humanity survives for millions of years and advances technologically past the current frontiers and nothing seems obviously horrific, but we still fall massively short of how much good we could’ve achieved. E.g., we choose to stay on earth or in the solar system forever, we spread to other solar systems but still through far less of the universe than we could’ve, we never switch to more efficient digital minds, we never switch to something close to the best kind of digital minds having the best kind of lives/experience/societies, we cause unrecognised/not-cared-about large-scale suffering of nonhuman animals or some types of digital beings, …
So I think we might need to chart a careful course through the future, not just avoiding the super obvious pitfalls. And for various fuzzy reasons, I tentatively think we’re notably less likely to chart the right course following a huge but not-immediately-existential catastrophe than if we avoid such catastrophes, though I’m not very confident about that.
Thanks, this is really helpful. I think a hidden assumption in my head was that the hingey time is put on hold while civilization recovers, but now I see that that’s pretty questionable.
I also share your feeling that, for fuzzy reasons, a world with ‘lesser catastrophes’ is significantly worse in the longterm than a world without them. I’m still trying to bring those reasons into focus, though, and think this could be a really interesting direction for future research.