This seems much too strong. Sure, "successfully avert human extinction" doesn't work as a feedback loop, but projects have earlier steps. And the areas in which I expect technical work on existential risk reduction to be most successful are ones where those loops are solid, and are well connected to reducing risk.
For example, I work in biosecurity at the NAO, with an overall goal of something like "identify future pandemics earlier". Some concrete questions that would give good feedback loops:
- If some fraction of people have a given virus, how much of it do we expect to see in various kinds of sequencing data?
- How well can we identify existing pathogens in sequencing data?
- Can we identify novel pathogens? If we don't use pathogen-specific data, can we successfully re-identify known pathogens?
- What are the best methods for preparing samples for sequencing to get a high concentration of human viruses relative to other things?
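The first of these questions lends itself to a back-of-envelope calculation. Here is a minimal sketch; the linear model and all of the numbers are my own illustrative assumptions, not the NAO's actual models:

```python
# Toy estimate: expected virus-matching reads in a metagenomic sequencing
# run, assuming reads scale linearly with prevalence and per-person shedding.
# (Illustrative only -- real models would account for sample type, degradation,
# and much more.)

def expected_viral_reads(total_reads: float, prevalence: float,
                         shedding_fraction: float) -> float:
    """Expected reads matching the virus, assuming an infected person's
    contribution to the sample is `shedding_fraction` viral material."""
    return total_reads * prevalence * shedding_fraction

# e.g. a run of 1e9 reads, 0.1% of people infected, and 1 in 1e6 of an
# infected person's sequenced material being viral:
reads = expected_viral_reads(1e9, 1e-3, 1e-6)
print(reads)  # about 1 expected viral read under these assumed numbers
```

The point of a model like this, even a crude one, is that it makes a concrete, checkable prediction you can compare against real sequencing data, which is exactly the kind of feedback loop at issue.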
Similarly, consider the kinds of questions Max discusses in his recent far-UVC post.
Thanks for the comment, Jeff! I admit that I didn't consciously have biosecurity in mind; I think you may have an unusually clear paradigm compared to other longtermist work (e.g. AI alignment/governance, space governance, etc.), and my statement was likely too strong besides.
However, I think there is a clear difference between what you describe and the types of feedback in, e.g., global health. In your case, you are acting through multiple layers of proxies for what you care about, which is very different from measuring the number of lives saved by AMF, for example. I am not denying that this gives you some indication of the progress you are making, but it does become very difficult to precisely evaluate the impact of the work and make comparisons.
To establish a relationship between "How well can we identify existing pathogens in sequencing data?", identifying future pandemics earlier, and reducing catastrophic/existential risk from pandemics, you have to make a significant number of assumptions/guesses which are far more difficult to get feedback on. To give a few examples:
- How likely is the next catastrophic pandemic to be from an existing pathogen?
- How likely is it that marginal improvements to the identification process are going to counterfactually identify a catastrophic threat?
- For the set of pathogens that could cause an existential/catastrophic threat, how much does early identification reduce the risk by?
- How much is this risk reduction in absolute terms? (Or from a different angle, assuming you have an answer to the previous question: What are the chances of an existential/catastrophic pandemic this century?)
These are the types of questions you need to address to actually draw a line to anything that cashes out to a number, and my uninformed guess is that there is substantial disagreement about the answers. So while you may get clear feedback on a particular sub-question, it is very difficult to get feedback on how much this is actually pushing on the thing you care about. Perhaps you can compare projects within a narrow subfield (e.g. improving identification of existing pathogens), but it is easy to then lose track of the bigger picture, which is what really matters.
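To illustrate the point with entirely made-up numbers (these are not anyone's actual estimates): even if each link in the chain above is only moderately uncertain, multiplying the factors together makes the bottom-line estimate span orders of magnitude.

```python
# Hypothetical (low, high) ranges for each link in the chain of assumptions.
# The specific values are invented purely to show how uncertainty compounds.
low_and_high = [
    (0.1, 0.9),    # P(next catastrophic pandemic is from an existing pathogen)
    (0.01, 0.3),   # P(a marginal detection improvement is counterfactual)
    (0.05, 0.5),   # fraction of risk removed by early identification
    (0.001, 0.1),  # P(catastrophic pandemic this century)
]

low = high = 1.0
for lo, hi in low_and_high:
    low *= lo
    high *= hi

# The ratio between the optimistic and pessimistic bottom lines:
print(f"bottom-line estimate spans a factor of {high / low:,.0f}")
```

With these toy inputs the high and low ends differ by a factor of several hundred thousand, which is the sense in which feedback on any one sub-question does little to pin down the overall number.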
To be clear, I am not at all saying that this makes the work not worth doing; it just makes me pessimistic about the utility of attempting precise quantifications.