I think an important point here is that indirect arguments can show that, given other approaches, you may come to different conclusions; not just on the prioritisation of ‘risks’ (I hate using that word!), but also on the techniques used to reduce them.
For instance, I still think that AI and biorisk are extremely significant contributors to risk, but I would probably take quite different approaches to dealing with them after trying to take these more indirect criticisms of the methodologies into account.
Exactly. For example, by looking at vulnerabilities in addition to hazards like AGI and engineered pandemics, we might find a vulnerability that is more pressing to work on than AI risk.
That said, the EA x-risk community has discussed vulnerabilities before: Bostrom’s paper “The Vulnerable World Hypothesis” proposes the semi-anarchic default condition as a societal vulnerability to a broad class of hazards.
To be clear, if you make arguments of the form “X is a more pressing problem than AI risk” or “here is a huge vulnerability X, we should try to fix that”, then I would consider that an object-level argument, provided you actually name X.