Thanks for making this case, and for directly putting your idea in a concrete model. I share the intuition that humanity (unfortunately) relies way too much on recent and very compelling experience to prioritise problems.
Some thoughts:
1) Catastrophes as risk factors: humanity will be weakened by a catastrophe and less able to respond to potential x-risks for some time
2) In many cases we don’t need the whole of humanity to realise the need for action (like almost everyone does with the current pandemic), but instead convincing small groups of experts is enough (and they can be convinced based on arguments)
3) Investments in field building and “practice” catastrophes might be very valuable for a cause like pandemic preparedness to get off the ground, and be worth the lack of buy-in of bigger parts of humanity
4) You may expect that, even without global catastrophes, humanity as a whole will come to terms with the prospect of x-risks in the coming decades. It might then not be worth it to accept a slight risk of fatally underestimating an unlikely x-risk.
Thanks for the comment. I think that 2 and 4 are good points, and 1 and 3 are great ones. In particular, I think that one of the more important factors that the toy model doesn’t capture is the non-independence of the arrival of disasters. Disasters that are large enough are likely to have a destabilizing effect which breeds other disasters. An example might be that WWII was in large part a direct consequence of WWI and a global depression. I agree that all of these should also be part of a more complete model.
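The non-independence point can be made concrete with a small simulation. The sketch below is purely illustrative (not the toy model from the post): each past disaster adds a decaying boost to the per-step disaster probability, so disasters cluster, and setting the boost to zero recovers independent arrivals. All parameter names and values here are made up for illustration.

```python
import math
import random

def simulate_disasters(steps=1000, base_p=0.01, boost=0.05, decay=0.05, seed=1):
    """Discrete-time sketch of non-independent disaster arrivals.

    Each step, a disaster occurs with probability base_p plus an
    exponentially decaying contribution from every past disaster
    (a simple self-exciting, Hawkes-style mechanism). With boost=0
    this reduces to independent Bernoulli arrivals.
    """
    rng = random.Random(seed)
    events = []  # time steps at which disasters occurred
    for t in range(steps):
        # Baseline hazard plus decaying aftershock terms from past events.
        p = base_p + sum(boost * math.exp(-decay * (t - e)) for e in events)
        if rng.random() < min(p, 1.0):
            events.append(t)
    return events

independent = simulate_disasters(boost=0.0)
clustered = simulate_disasters(boost=0.05)
```

Comparing the inter-arrival gaps of the two runs would show the clustered version producing bursts of closely spaced disasters (a crude analogue of WWI feeding into the depression and WWII), which is exactly the correlation structure a Poisson-style independence assumption misses.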