Thanks for this post! It has definitely left me with a more fleshed-out picture of some key considerations for this (in my opinion) important and neglected topic, and of how likely various things might be.
Overall, the post made me a bit less concerned about global catastrophes / civilizational collapse scenarios that don't immediately involve/represent existential catastrophes.
This is because some portion of my tentative concern about such events came from the following sort of reasoning:
"Well, an event like that would be massive and unprecedented, and seems intuitively like the sort of thing that might have permanent and awful outcomes.
And I don't really have a fleshed-out picture of how an event like that would play out, nor have I seen fleshed-out reasons to believe it wouldn't have such outcomes.
So I shouldn't be super confident it wouldn't have such outcomes, even if I didn't have in mind specific reasons why it would."
And this post provided what seem like strong reasons to believe an event like that wouldn't relatively directly lead to extinction, which is one of the major pathways by which it could've theoretically led to existential catastrophe.
That said, I think that, personally, my main reasons for concern about such events were in any case not that they might fairly directly lead to extinction.
Rather, my concern was that such events might:
Trigger other bad events (e.g., further conflict, development and/or deployment of dangerous technologies) that ultimately lead to extinction
Lead to "unrecoverable collapse" / "permanent stagnation"
Lead to "unrecoverable dystopia"
(Meaning any scenario in which humanity survives and regains industrial civilization, but with substantially less good outcomes than could've been achieved. One of many ways this could occur is negative changes in values.)
(I think my views on this are pretty similar to those Beckstead expresses here.)
I think this post has updated me towards somewhat less concern about such events causing extinction by triggering other bad events.
This is partly because you provide some arguments that conflict in the aftermath wouldn't be extreme or would be survived.
(That said, I'm not sure how convincing I found those particular parts; I might expand on this in another comment. And I'm a bit confused about why WMDs were mentioned in Case 2 but not Case 1 or Case 3.)
But it hasn't caused a major update regarding the other two of those pathways.
Which is fair enough: one post can't cover everything, and you explicitly noted that you're setting those matters aside for follow-up posts.
Relatedly, I'm excited to read those follow-up posts!
I'd be really interested to hear whether, overall, doing this research has updated you personally towards thinking we should prioritise interventions related to risks like nuclear war and climate change less, while prioritising AI risk (and maybe some other things?) more.
And if so, by how much?
And has it led to any other key changes in your bottom-line beliefs about what we should do?
(Btw, I think it's also totally ok for many pieces of research to help us progress towards updates to our bottom-line beliefs about what we should do without themselves causing major updates yet. So fair enough if your answers are "Not really" or "Ask again in a couple of months"!)
I'll leave some other thoughts in separate comments. (Some, like the above, will partially repeat things we discussed already, but most will be on parts you added since the version of this doc I read.)
FYI, broken link here:
Oh, good catch, thanks! I accidentally linked to the title rather than the URL. Now fixed.