Great post—thanks for writing it!

One way in which existing work in this space strikes me as deficient is the absence of what you might call ‘risk-first’ thinking.[15] By ‘risk-first’ I mean an approach that begins with a relatively well-defined risk, or perhaps a category of risk, and proceeds to consider ways in which humanity might increase its degree of modularity with respect to that. [...] This contrasts with virtually all comments I’ve read relating to mitigation through modularity, in that these tend to be ‘proposal-first’.[16] [...] I think that a useful reframing would be: ‘given risk X, it seems like we could best decorrelate this across different populations by pursuing proposals A, B, or C’.
Something I felt unsure of here was: Do you think ‘risk-first’ thinking would in general be more useful than ‘proposal-first’ thinking? Or is it more that both perspectives seem useful, and so far we’ve pretty much only tried the latter, so we should add the former to our toolkit?
FWIW, I agree with your arguments about some benefits of ‘risk-first’ thinking and some downsides of ‘proposal-first’ thinking. But I think the following point warrants more emphasis than a footnote:
[17] Though it’s worth noting the advantage of proposals that work in a wide variety of cases, given the possibility of unanticipated risks (H/T Daniel Eth).
Reasons this point might warrant more emphasis:

- Unanticipated risks might account for a substantial portion of total existential risk.
  - This seems prima facie plausible.
  - It also seems in line with the one (only one!) directly relevant existential risk estimate I’m aware of us having:
    - Namely, Ord estimates that “unforeseen anthropogenic risks” are about as likely to cause an existential catastrophe over the coming century as engineered pandemics, with only unaligned AI more likely to do so, and things like nuclear war and climate change notably less likely.
    - I wrote some thoughts on this here.
    - (It’s possible that there are other directly relevant estimates. If you’re aware of any, please comment to say so in my database!)
  - If I recall correctly, the following post also made good-seeming arguments for this view: The Importance of Unknown Existential Risks.
- Of course, none of those points are very strong evidence, but I think the evidence against that claim would be similarly weak.
- I think a key part of the appeal of a “modularity”-focused approach, or more generally of approaches focused on something like “resilience” or “recovery” rather than “prevention”, is precisely that they might be better able to cover unforeseen existential risks than prevention-focused efforts can.