Thanks for writing this; I found it helpful for understanding the biosecurity space better!
I wanted to ask whether you have any advice, as a community builder, for handling the difficulty biosecurity poses for cause prioritisation.
I think it is easy to build an intuitive case that biohazards are not very important or not an existential risk, and my group members often do this (even good fits for biosecurity like biologists and engineers), then dismiss the area in favour of other things. They (and I) do not have access to the threat models that people in biosecurity are actually worried about, which makes the area extremely difficult to evaluate. An example of this kind of thinking is David Thorstad’s post on exaggerating the risks from biohazards, which I found somewhat disappointing epistemically: https://ineffectivealtruismblog.com/2023/07/08/exaggerating-the-risks-part-9-biorisk-grounds-for-doubt/.
I suppose the options for managing this situation are:
1. Encourage deference to the field’s view that biosecurity is worth working on relative to other EA areas.
2. Create some kind of resource which isn’t an infohazard in itself, but which makes a good case for biosecurity’s importance, perhaps by gesturing at some credible threat models.
3. Accept the status quo, which probably leads to an underprioritisation of biosecurity.
Option 2 seems best if it is at all feasible, but I am unsure how to choose between 1 and 3.
This is more a response to “it is easy to build an intuitive case for biohazards not being very important or an existential risk” than to your proposals...
My feeling is that it is fairly difficult to make the case that biological hazards present an existential (as opposed to catastrophic) risk, and that while this matters for some EA types selecting their career paths, it matters less at the broader scale of advocacy. The set of philosophical assumptions under which “not an existential risk” can be rounded to “not very important” seems common in the EA community, but extremely uncommon outside of it.
My best guess is that any existential biorisk scenarios probably route through civilisational collapse, and that those large-scale risks are most likely a result of deliberate misuse, rather than accidents. This seems importantly different from AI risk (though I do think you might run into trouble with reckless or careless actors in bio as well).
I think a focus on global catastrophic biological risks already puts one in a pretty different (and fairly neglected) place from many people working on reducing pandemic risks, and that the benefit of getting into the details of whether a specific threat is existential or catastrophic doesn’t really outweigh the cost of potentially generating infohazards.
My guess is that (2) will be fairly hard to achieve, because the sorts of threat models that are detailed enough to be credible to people doing hardcore existential-risk-motivated cause prioritisation are of dubious net value from an infohazard perspective.
Nice comment! To respond to your options:
1. Deference doesn’t seem ideal; it goes against the norms of the EA community.
2. Like you say, this seems very feasible. I would be surprised if there wasn’t something like this already. You could even make the point that the threat models discussed aren’t the highest-risk ones: others that you don’t talk about could be even worse.
3. Obviously not ideal.