Do you have any thoughts on how to think about indirect paths to impact and their relative fragility, given how heavily they rely on subjective value estimates?
Specifically in your/our[1] case, the pathway from GCR research to preventing deaths is relatively convoluted: you raise awareness or get funding for policy activism, which then convinces policymakers to act differently, which in turn reduces risk. This seems quite reasonable, but it has enough steps that impact is very hard to evaluate in ways that can justify the investment other than via prior beliefs, and there are many failure points, with all the pitfalls of the ways complex plans usually fail.
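To make the fragility concrete, here is a minimal sketch with purely hypothetical numbers: if each link in the chain has to succeed, the overall probability is the product of the per-step probabilities, so even moderately likely steps compound into a small chance that the whole plan works as intended.

```python
from functools import reduce

# Hypothetical per-step success probabilities for a GCR theory of change:
# research is noticed -> funding is raised -> policymakers act -> risk falls.
# These numbers are illustrative guesses, not estimates of any real program.
step_probabilities = [0.6, 0.5, 0.4, 0.5]

# The chain succeeds only if every step does, so probabilities multiply.
overall = reduce(lambda a, b: a * b, step_probabilities)
print(f"Chance the full chain works: {overall:.1%}")  # 6.0%
```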
I do work on similar problems, so any implicit criticism here applies to me as well—but this is something that seems important for EAs to think about clearly, and which seems to get little attention in practice. (Of course, I have tentative thoughts, but I want to hear other people's perspectives.)
Definitely difficult. I think the most promising path forward is my colleagues' work at Founders Pledge (e.g. How to Evaluate Relative Impact in High-Uncertainty Contexts), iterating on "impact multipliers" to make ever more rigorous comparative judgments. I'm not sure this is a problem unique to GCRs or climate: a more high-leverage, risk-tolerant approach to global health and development faces the same issues, right?
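For readers unfamiliar with the multiplier framing, a rough sketch of the idea as I understand it (illustrative numbers only): rather than estimating absolute impact, you score options as a product of relative adjustment factors against a shared baseline, which keeps the comparative judgment explicit even when the point estimates are shaky. The factor names below are hypothetical, not Founders Pledge's actual methodology.

```python
import math

def relative_impact(multipliers):
    """Combine independent multiplicative adjustments into one relative score."""
    return math.prod(multipliers)

# Hypothetical interventions A and B, each scored on neglectedness,
# tractability of the policy lever, and leverage on further funding,
# all expressed relative to a common baseline intervention.
score_a = relative_impact([3.0, 0.5, 2.0])  # -> 3.0x baseline
score_b = relative_impact([1.5, 1.0, 1.0])  # -> 1.5x baseline

print(f"A looks roughly {score_a / score_b:.1f}x as impactful as B under these guesses")  # 2.0x
```

The point is not the numbers themselves but that each factor is a separate, arguable comparative claim, which is easier to scrutinize and iterate on than a single end-to-end estimate.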