That said, I think that, personally, my main reason for concern about such events was in any case not that they might fairly directly lead to extinction
Rather, it was that such events might:
Trigger other bad events (e.g., further conflict, development and/or deployment of dangerous technologies) that ultimately lead to extinction
Meaning any scenario in which humanity survives and regains industrial civilization, but with substantially less good outcomes than could've been achieved. One of many ways this could occur is negative changes in values.
(I think my views on this are pretty similar to those Beckstead expresses here)
I think this post has updated me towards somewhat less concern about such events causing extinction by triggering other bad events
This is partly because you provide some arguments that conflict in the aftermath wouldn't be extreme or would be survived
(That said, I'm not sure how convincing I found those particular parts; I might expand on this in another comment. I'm also a bit confused about why WMDs were mentioned in Case 2 but not Case 1 or Case 3.)
But it hasn't caused a major update regarding the other two of those pathways
Which is fair enough; one post can't cover everything, and you explicitly noted that you're setting those matters aside for follow-up posts
Relatedly, I'm excited to read those follow-up posts!
I think "[the period before recovery might be only] on the order of 100 years" offers little protection if we think we're living at an especially "hingey" time; a lot could happen in this specific coming 100 years, and the state society is in when those key events happen could be a really big deal.
Also, I agree that society simply remaining small or technologically stagnant or whatever indefinitely seems very unlikely. But I'm more worried about either:
"Big Deal Events" happening during the "shaken up" period (all very roughly speaking, of course!) and thus being handled worse, or
Failure to recover on some other dimensions of civilization, e.g. political and moral progress
Background thought: I think the potential value of the future is probably ridiculously huge, and there are probably many plausible futures where humanity survives for millions of years and advances technologically past the current frontiers and nothing seems obviously horrific, but we still fall massively short of how much good we could've achieved. E.g., we choose to stay on Earth or in the solar system forever, we spread to other solar systems but still through far less of the universe than we could've, we never switch to more efficient digital minds, we never switch to something close to the best kind of digital minds having the best kind of lives/experience/societies, we cause unrecognised/not-cared-about large-scale suffering of nonhuman animals or some types of digital beings, ...
So I think we might need to chart a careful course through the future, not just avoiding the super obvious pitfalls. And for various fuzzy reasons, I tentatively think we're notably less likely to chart the right course following a huge but not-immediately-existential catastrophe than if we avoid such catastrophes, though I'm not very confident about that.
Thanks, this is really helpful. I think a hidden assumption in my head was that the hingey time is put on hold while civilization recovers, but now I see that that's pretty questionable.
I also share your feeling that, for fuzzy reasons, a world with "lesser catastrophes" is significantly worse in the long term than a world without them. I'm still trying to bring those reasons into focus, though, and think this could be a really interesting direction for future research.
[written quickly, sorry]
One indication of my views is this comment I made on Luisa's post (emphasis added):
I think "[the period before recovery might be only] on the order of 100 years" offers little protection if we think we're living at an especially "hingey" time; a lot could happen in this specific coming 100 years, and the state society is in when those key events happen could be a really big deal.
Also, I agree that society simply remaining small or technologically stagnant or whatever indefinitely seems very unlikely. But I'm more worried about either:
"Big Deal Events" happening during the "shaken up" period (all very roughly speaking, of course!) and thus being handled worse, or
Failure to recover on some other dimensions of civilization, e.g. political and moral progress
See also https://forum.effectivealtruism.org/posts/qY5q2QTG44avBbNKn/modelling-the-odds-of-recovery-from-civilizational-collapse
Background thought: I think the potential value of the future is probably ridiculously huge, and there are probably many plausible futures where humanity survives for millions of years and advances technologically past the current frontiers and nothing seems obviously horrific, but we still fall massively short of how much good we could've achieved. E.g., we choose to stay on Earth or in the solar system forever, we spread to other solar systems but still through far less of the universe than we could've, we never switch to more efficient digital minds, we never switch to something close to the best kind of digital minds having the best kind of lives/experience/societies, we cause unrecognised/not-cared-about large-scale suffering of nonhuman animals or some types of digital beings, ...
So I think we might need to chart a careful course through the future, not just avoiding the super obvious pitfalls. And for various fuzzy reasons, I tentatively think we're notably less likely to chart the right course following a huge but not-immediately-existential catastrophe than if we avoid such catastrophes, though I'm not very confident about that.