I agree large-scale catastrophic failures are an important consideration. Originally I thought all global catastrophic risks would be downstream of some reservation failure (i.e., EA didn’t do enough), but now I think that categorization overestimates EA’s current capabilities (i.e., global catastrophic risks might occur despite the best efforts of a realistically ideal EA movement).
In some sense I think large-scale catastrophic risks aren’t super action-guiding, because we fall victim to them despite our best efforts, which is why I didn’t include them. But now I’ll counter my own point: large-scale catastrophic risks could be action-guiding in that they highlight the importance of thinking about things like how EA coordination could recover post-catastrophe.
I’m now considering adding a fifth cluster of failure: uncaring universe failures, i.e., failures in which EA becomes crippled by something like a global catastrophic risk despite our best efforts. (I could also call them ruthless universe failures if I really cared about my Rs.)
Agreement upvote: Yeah, do that.
Disagreement downvote: Nah.
Another example of an uncaring universe failure, given slow AGI timelines: either (a) the West loses a Great Power conflict or (b) gets outcompeted by other powers, AND (c) EA does not manage to find much traction in other cultures.
(Note that you can also frame this as a diversity failure.)