This is more a response to “it is easy to build an intuitive case for biohazards not being very important or an existential risk”, rather than your proposals...
My feeling is that it is fairly difficult to make the case that biological hazards present an existential (as opposed to catastrophic) risk, and that while this matters for some EA types selecting their career paths, it doesn’t matter as much at the grand scale of advocacy? The set of philosophical assumptions under which “not an existential risk” can be rounded to “not very important” seems common in the EA community, but extremely uncommon outside of it.
My best guess is that any existential biorisk scenarios probably route through civilisational collapse, and that those large-scale risks are most likely a result of deliberate misuse, rather than accidents. This seems importantly different from AI risk (though I do think you might run into trouble with reckless or careless actors in bio as well).
I think a focus on global catastrophic biological risks already puts one’s attention in a pretty different (and fairly neglected) place from that of many people working on reducing pandemic risks, and that the benefit of trying to pin down whether a specific threat is existential or merely catastrophic doesn’t really outweigh the costs of potentially generating infohazards.
My guess is that (2) will be fairly hard to achieve, because the sorts of threat models that are sufficiently detailed to be credible to people doing hardcore existential-risk-motivated cause prioritization have a dubious cost-benefit profile from an infohazard perspective.