Thanks for this post! It has definitely left me with a more fleshed out picture of some key considerations for this (in my opinion) important and neglected topic, and of how likely various things might be.
Overall, the post made me a bit less concerned about global catastrophes / civilizational collapse scenarios that don’t immediately involve/represent existential catastrophes.
This is because some portion of my tentative concern about such events came from the following sort of reasoning:
“Well, an event like that would be massive and unprecedented, and seems intuitively like the sort of thing that might have permanent and awful outcomes.
And I don’t really have a fleshed-out picture of how an event like that would play out, nor have I seen fleshed-out reasons to believe it wouldn’t have such outcomes.
So I shouldn’t be super confident it wouldn’t have such outcomes, even if I didn’t have in mind specific reasons why it would.”
And this post provided what seem like strong reasons to believe an event like that wouldn’t relatively directly lead to extinction, which is one of the major pathways by which it could’ve theoretically led to existential catastrophe.
That said, I think that, personally, my main reasons for concern about such events were in any case not that they might fairly directly lead to extinction.
Rather, it was that such events might:
Trigger other bad events (e.g., further conflict, development and/or deployment of dangerous technologies) that ultimately lead to extinction
Lead to “unrecoverable collapse” / “permanent stagnation”
Lead to “unrecoverable dystopia”
Meaning any scenario in which humanity survives and regains industrial civilization, but with substantially less good outcomes than could’ve been achieved. One of many ways this could occur is negative changes in values.
(I think my views on this are pretty similar to those Beckstead expresses here)
I think this post has updated me towards somewhat less concern about such events causing extinction by triggering other bad events.
This is partly because you provide some arguments that conflict in the aftermath wouldn’t be extreme or would be survived.
(That said, I’m not sure how convincing I found those particular parts—I might expand on this in another comment—and I’m a bit confused about why WMDs were mentioned in Case 2 but not Case 1 or Case 3.)
But it hasn’t caused a major update regarding the other two of those pathways.
Which is fair enough—one post can’t cover everything, and you explicitly noted that you’re setting those matters aside for follow-up posts.
Relatedly, I’m excited to read those follow-up posts!
I’d be really interested to hear whether, overall, doing this research has updated you personally towards thinking we should prioritise interventions related to risks like nuclear war and climate change less, while prioritising AI risk (and maybe some other things) more.
And if so, by how much?
And has it led to any other key changes in your bottom-line beliefs about what we should do?
(Btw, I think it’s also totally ok for many pieces of research to help us progress towards updates to our bottom-line beliefs about what we should do without themselves causing major updates yet. So fair enough if your answers are “Not really” or “Ask again in a couple months”!)
I’ll leave some other thoughts in separate comments. (Some—like the above—will partially repeat things we discussed already, but most will be on parts you added since the version of this doc I read.)
FYI, broken link here:
Oh, good catch, thanks! I accidentally linked to the title rather than the url. Now fixed.