Thanks so much for the kind words.
I think the question of the plausible range for tractability is an interesting one. I suspect that most global health interventions seriously considered by EA fall within a 100x range. But I would guess the reason for this is that the only interventions with enough evidence behind them are already in the process of solving more than 0.5% of the problem. At the other end of the spectrum, I suspect interventions trying to influence the very long-term trajectory of human culture might fall into a range spanning at least six orders of magnitude. There are probably plenty of interventions we could consider that we should expect to have much less than a one-in-a-million chance of solving 10% of the problem. Because there is little evidence and feedback about what would work in this context, we should not expect most approaches we consider to have more than a tiny chance of working.
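To make the six-orders-of-magnitude figure concrete, here is a rough back-of-the-envelope sketch; the high-end number is an illustrative assumption of mine, while the low end just multiplies out the one-in-a-million figure above:

$$
\text{expected fraction solved (high end)} \approx 1 \times 0.1 = 10^{-1},
\qquad
\text{expected fraction solved (low end)} \approx 10^{-6} \times 0.1 = 10^{-7},
$$

$$
\frac{10^{-1}}{10^{-7}} = 10^{6}.
$$

So an intervention near-certain to solve 10% of the problem and one with a one-in-a-million chance of solving 10% already differ by six orders of magnitude in expected impact, before even counting interventions whose chances are "much less than one in a million."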
I am also a little skeptical of how much information we get out of neglectedness when working with these sorts of problems. I think something being neglected might often be a sign that experts in the space don't consider the approach plausible, or that some experts have tried it and given up on it. If that is the case, then that selection effect may swamp the diminishing marginal returns we would otherwise expect. Additionally, diminishing marginal returns might not be as common in fields where it's not obvious what the next good thing to do is (because there are poor feedback mechanisms).