Thanks for the comment Jeff! I admit that I didn’t have biosecurity consciously in mind; I think you may have an unusually clear paradigm there compared to other longtermist work (e.g. AI alignment/governance, space governance, etc.), and my statement was likely too strong besides.
However, I think there is a clear difference between what you describe and the types of feedback available in, e.g., global health. In your case, you are acting through multiple layers of proxies for what you care about, which is very different from measuring the number of lives saved by AMF, for example. I am not denying that this gives you some indication of the progress you are making, but it does make it very difficult to precisely evaluate the impact of the work and to make comparisons.
To establish a relationship between “How well can we identify existing pathogens in sequencing data?”, identifying future pandemics earlier, and reducing catastrophic/existential risk from pandemics, you have to make a significant number of assumptions/guesses that are far more difficult to get feedback on. To give a few examples:
- How likely is the next catastrophic pandemic to be from an existing pathogen?
- How likely is it that marginal improvements to the identification process are going to counterfactually identify a catastrophic threat?
- For the set of pathogens that could cause an existential/catastrophic threat, how much does early identification reduce the risk by?
- How much is this risk reduction in absolute terms? (Or a different angle, assuming you have an answer to the previous question: What are the chances of an existential/catastrophic pandemic this century?)
These are the types of questions you need to address to actually draw a line to anything that cashes out to a number, and my uninformed guess is that there is substantial disagreement about the answers. So while you may get clear feedback on a particular sub-question, it is very difficult to get feedback on how much that is actually pushing on the thing you care about. Perhaps you can compare projects within a narrow subfield (e.g. improving identification of existing pathogens), but it is easy to lose track of the bigger picture, which is what really matters.
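To make the compounding concrete, here is a minimal Fermi-style sketch in Python. Every range below is an arbitrary placeholder I have made up for illustration, not anyone's actual estimate; the point is only that multiplying even moderately wide ranges across the four sub-questions above leaves the headline figure spanning several orders of magnitude:

```python
# Purely illustrative: all (low, high) ranges below are made-up
# placeholders, not defended estimates. The point is how the
# uncertainty compounds when the sub-questions are chained together.
ranges = {
    "P(next catastrophic pandemic is an existing pathogen)": (0.1, 0.7),
    "P(marginal improvement counterfactually identifies it)": (0.01, 0.3),
    "relative risk reduction from earlier identification": (0.05, 0.5),
    "P(catastrophic pandemic this century)": (0.005, 0.2),
}

low, high = 1.0, 1.0
for lo, hi in ranges.values():
    low *= lo
    high *= hi

# Prints roughly 2.5e-07 to 2.1e-02: nearly five orders of magnitude.
print(f"implied absolute risk reduction: {low:.1e} to {high:.1e}")
```

A real analysis would do something more careful than multiplying endpoints, but the qualitative point survives: the final number is dominated by judgment calls on each sub-question rather than by anything you can get tight feedback on.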
To be clear, I am not at all saying that this makes the work not worth doing; it just makes me pessimistic about the utility of attempting precise quantifications.