Exactly. For example, by looking at vulnerabilities in addition to hazards like AGI and engineered pandemics, we might find a vulnerability that is more pressing to work on than AI risk.
That said, the EA x-risk community has discussed vulnerabilities before: Bostrom’s paper “The Vulnerable World Hypothesis” proposes the semi-anarchic default condition as a societal vulnerability to a broad class of hazards.
To be clear, if you make arguments of the form “X is a more pressing problem than AI risk” or “here is a huge vulnerability X, we should try to fix that,” then I would consider that an object-level argument, provided you actually name X.