Yeah, great question! Lots of these categories were things I thought about but ultimately had difficulty getting good estimates for, so I don’t have good answers here. But I can say a little more about what my impressions were for each.
1. AI misuse is tough because I think lots of the work here is bucketed (usually implicitly) into AI safety spending, which I wasn’t looking at. Although I will say I struggled to find work explicitly focused on AI-bio that wasn’t EA (usually OP) funded (e.g. RAND, CLTR). I think, in turn, I capture a lot of this in my “GCBR Priority Areas” bucket. So, at least as far as work that identifies as attempting to tackle this problem goes, it’s some % of that bucket (i.e. probably in the $1m-$10m order of magnitude, could be a fair bit less), but I don’t think this reflects how much total is going towards biorisk from AI, which is much harder to get data on.
2. Yeah, synthesis screening I definitely implicitly bucket into my “GCBR Priority Areas” category. I didn’t attempt to break these down further because it’d be so much more work, though here are some thoughts:
Synthesis screening is hard to get data on because I couldn’t find out how the International Gene Synthesis Consortium is funded, and I think historically this represents most of the biosecurity and pandemic prevention work here. My best guess (but barely more than 50% confidence) is that the firms that form it directly bear the costs. If true, then the philanthropically funded work I could find outside this space is NTI | bio / IBBIS and SecureDNA / MIT Media Lab. NTI spent ~$4.8m on their bio programs and has received OP funding. MIT Media Lab has received a number of OP grants, and SecureDNA lists OP as their only philanthropic collaborator. This means the spend per year is probably in the $1m-$10m order of magnitude, most of which comes from EA. Though yes, the IGSC remains a big uncertainty of mine.
3. I think breaking down disease surveillance into pathogen-agnostic early detection, ‘broad-spectrum’, pathogen-specific, and GCBR-specific work is pretty tough, mostly because lots of the funded work is the same across these (e.g. improving epidemiological modelling capabilities, improving bioinformatics databases, developing sequencing technology), and there is a lot of work on all of the above. Certainly, when it comes to funders or projects identifying as being focused on GCBRs (including the ‘stealth’ vs ‘wildfire’ terminology), I could not find anything that wasn’t affiliated with EA at all, which places an upper bound at around 4-5% of the early detection spend. But for the reasons I’ve stated, I think this is a very poor estimate of how much money actually contributes towards GCBRs, and I have no good numbers.
4. Outside of government funding and government-funded sources like universities, I could find no non-EA funding on P4E/resilience work. My impression is that EA represents most of the far-UVC work and thinking about pandemic-proof PPE (given that there’s work quite clearly attributable to EA-aligned/EA-funded orgs like SecureBio, Gryphon, and Amodo, and little work outside them). But I think this is much shakier when it comes to resilience and backup plans, which would fall under a lot more general resilience work. I’m just way less sure about that.
5. Medical countermeasures are very similar to disease surveillance: the “therapeutics” category ideally captures these (including the development of platform technologies at places like CEPI), but delineating GCBR-oriented countermeasures was both pretty difficult and, I think, effectively unhelpful. Lots of pathogen-agnostic work here isn’t even done for pandemic-related reasons (e.g. building tools against neglected tropical diseases). Work that identifies as being focused on GCBRs, however, is essentially EA. So, whilst I think we can apply the same heuristic as for GCBR-specific early detection (at most 4-5% of the funding here), I’d be even less confident that these estimates represent the actual contribution towards GCBRs.
Hopefully this is useful!