Exactly. For example, by looking at vulnerabilities in addition to hazards like AGI and engineered pandemics, we might find a vulnerability that is more pressing to work on than AI risk.
That said, the EA x-risk community has discussed vulnerabilities before: Bostrom's paper "The Vulnerable World Hypothesis" proposes the semi-anarchic default condition as a societal vulnerability to a broad class of hazards.
To be clear, if you make arguments of the form "X is a more pressing problem than AI risk" or "here is a huge vulnerability X, we should try to fix that," then I would consider that an object-level argument, provided you actually name X.