Great post! Thanks for writing it!

> One way in which existing work in this space strikes me as deficient is the absence of what you might call “risk-first” thinking.[15] By “risk-first” I mean an approach that begins with a relatively well-defined risk, or perhaps a category of risk, and proceeds to consider ways in which humanity might increase its degree of modularity with respect to that. [...] This contrasts with virtually all comments I’ve read relating to mitigation through modularity, in that these tend to be “proposal-first”.[16] [...] I think that a useful reframing would be: “given risk X, it seems like we could best decorrelate this across different populations by pursuing proposals A, B, or C”.
Something I felt unsure of here was: Do you think “risk-first” thinking would in general be more useful than “proposal-first” thinking? Or is it more that you think both perspectives seem useful, and so far we’ve pretty much only tried the latter perspective, so we should add the former to our toolkit?
FWIW, I agree with your arguments about some benefits of “risk-first” thinking and some downsides of “proposal-first” thinking. But I think the following point warrants more emphasis than a footnote:
> [17] Though it’s worth noting the advantage of proposals that work in a wide variety of cases, given the possibility of unanticipated risks (H/T Daniel Eth).
Reasons why that point might warrant more emphasis:
- Unanticipated risks might account for a substantial portion of total existential risk.
  - This seems prima facie plausible.
  - It also seems in line with the one (only one!) directly relevant existential risk estimate I’m aware of us having:
    - Namely, Ord estimates that “unforeseen anthropogenic risks” have about as high a chance of causing an existential catastrophe over the coming century as engineered pandemics, with only unaligned AI having a higher chance of causing that, and with things like nuclear war and climate change having notably lower chances.
    - I wrote some thoughts on this here.
    - (It’s possible that there are other directly relevant estimates. If you’re aware of any, please comment to say so in my database!)
  - If I recall correctly, the following post also made good-seeming arguments for this view: The Importance of Unknown Existential Risks.
  - Of course, none of those points are very strong evidence, but I think the evidence that the claim is false would be similarly weak.
- I think a key part of the appeal of a “modularity”-focused approach, or more generally of approaches focused on something like “resilience” or “recovery” rather than “prevention”, is probably precisely that such approaches might be better able to cover unforeseen existential risks than prevention-focused efforts can.