A more charitable interpretation of the authors’ point might be something like the following:
(1) Since EAs look at quantitative factors like the expected number of lives saved by an intervention, they need to be able to quantify their uncertainty.
(2) The results of interventions that target large, interconnected systems are harder to quantify than the results of interventions that target individuals. For instance, consider health-improving interventions. The intervention “give medication X to people who have condition Y” is easy to test with an RCT. However, the intervention “change the culture to make outdoor exercise seem more attractive” is much harder to test: it’s harder to confine cultural change to a particular area (and thus harder to run a well-controlled study), and the causal pathways are far more complex (e.g., it’s not just that people get more exercise; the change might also encourage shifts in land-use patterns, which would affect traffic, pollution, etc.), so it would be harder to identify which effects were due to the intervention.
(3) Thus, EA approaches that focus on quantifying uncertainty are likely to miss interventions targeted at systems (see the sketch after this list). Since most of our biggest problems are caused by large systems, EA will miss the highest-impact interventions.
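To make this worry concrete, here is a minimal Monte Carlo sketch (every number in it is invented purely for illustration, and “lives saved” stands in for whatever outcome is being measured). It models the measurement problem in (2) as wider uncertainty around the same mean effect, and shows how any evaluation rule that discounts poorly-pinned-down estimates will rank the systemic intervention lower, even when its true expected impact is identical:

```python
# Toy sketch, not anyone's actual methodology: two hypothetical
# interventions with the SAME mean effect ("lives saved"), but the
# systemic one carries much wider uncertainty because its effects
# are harder to measure (no clean RCT, tangled causal pathways).
import random

random.seed(0)
N = 100_000  # Monte Carlo samples

# Narrow intervention ("give medication X"): effect pinned down by RCTs.
narrow = [random.gauss(100, 10) for _ in range(N)]

# Systemic intervention ("shift culture toward outdoor exercise"):
# same mean, but our estimate of the effect is far more uncertain.
systemic = [random.gauss(100, 80) for _ in range(N)]

def summarize(samples):
    mean = sum(samples) / len(samples)
    sd = (sum((x - mean) ** 2 for x in samples) / len(samples)) ** 0.5
    return mean, sd

for name, samples in [("narrow", narrow), ("systemic", systemic)]:
    mean, sd = summarize(samples)
    # One crude risk-averse scoring rule (mean minus one standard
    # deviation): it penalizes the systemic intervention for its
    # uncertainty despite the identical expected value.
    print(f"{name}: mean={mean:.1f}, sd={sd:.1f}, score={mean - sd:.1f}")
```

A pure expected-value maximizer would be indifferent between the two; the gap only appears once uncertainty itself is penalized, which is exactly the pattern the argument predicts.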
This is certainly a charitable reading of the article, and you are doing the right thing by trying to read it as generously as possible. I think they are indeed making this point:
“the technocratic nature of the approach itself will only very rarely result in more funds going to the type of social justice philanthropy that we support with the Guerrilla Foundation – simply because the effects of such work are less easy to measure and they are less prominent among the Western, educated elites that make up the majority of the EA movement”
This criticism is more than fair. I have to agree with it, while also pointing out that this is, of course, a problem that many in the movement are aware of and are actively working to change. I don’t think the authors are explicitly arguing for the worldview I outlined above; that is my own perception of the motivating worldview, and I find support for it in their explicit rejection of science and objectivity.