I think I’d push back somewhat, although my wording was definitely sloppy.
I think it’s worth establishing my frame here, because I reckon I’m not using neglectedness in the more conventional sense of “how much biorisk reduction is already on the plate?”. I generally think it’s quite hard to make judgements about neglectedness this way in bio, for two main reasons. First, many interventions in bio are only applicable to a particular subset of threat models and pathogen characteristics, and can be hugely sensitive to geographic/local context, amongst other contingencies. Second, there are no great models (that I could find!) of the distribution of threats across threat models and pathogen characteristics. So when I talk about neglectedness, I mean something more like “how many plausible combinations of threat models, pathogen characteristics, and other contingencies are being missed?”.
“My view is that many players and funding sources means that fewer important funding opportunities will be missed”
So I think this could turn out to be right empirically, but it’s not trivially true in this instance. Most funders centre on a narrow subset of scenarios (e.g. naturally emergent pandemics; respiratory transmission; flu-like illness), while EAs focus on quite specific ones (e.g. genetically engineered pandemics; respiratory transmission; high case-fatality rates). That leaves a number of possibilities that could contribute towards reducing threats from GCBRs and that other funders could be interested in: for example, smallpox; antimicrobial-resistant strains of various diseases; or even genetically engineered diseases that might not directly be GCBRs. A key assumption here is that work on these can be doubly relevant, or have spillover effects, even for models that are more GCBR-specific. Hence I conclude that many opportunities “could” be missed: the failure mode looks like a bioinformatics company working on the attribution of genetically engineered pathogens while neglecting funding from the much better-funded antimicrobial resistance landscape, even though there’s a lot of overlap in methods and the extra resources could drive forward work on both.
“I was struck by how little philanthropy has been directed towards tech development for biosecurity, mitigating GCBRs, and policy advocacy for a range of topics from regulating dual-use research of concern (DURC) to mitigating risks from bioweapons.”
This is definitely poor wording and grammar, and an important omission on my part, haha. What I want to stress, though, is “tech development for a range of topics” / “mitigating GCBRs for a range of topics”: by “for a range of topics” I mean a particular subset of misuse-based concerns to which vaccine R&D, health system readiness, and pathogenesis research are less applicable. A naive example would be “wildfire” cases with unforeseen transmissibility and case fatality, such that countermeasures or general health systems strengthening would probably be less effective than focusing on prevention.
My surprise ultimately comes from the fact that I think people both in EA and outside it (admittedly, I don’t have lots of experience in either) do internalise the sheer heterogeneity here. I don’t think levels of funding/concern have ever tracked the threats we should be most concerned about particularly well. Still, I guess I was taken aback to see these gaps (and hopefully opportunities!) persist on both ends.
Appreciate the kind words!