A more charitable interpretation of the author’s point might be something like the following:
(1) Since EAs look at quantitative factors like the expected number of lives saved by an intervention, they need to be able to quantify their uncertainty.
(2) The results of interventions that target large, interconnected systems are harder to quantify than those of interventions that target individuals. Consider health-improving interventions, for instance. The intervention “give medication X to people who have condition Y” is easy to test with an RCT. The intervention “change the culture to make outdoor exercise seem more attractive” is much harder to test: cultural change is hard to confine to a particular area (so it’s harder to run a well-controlled study), and the causal pathways are much more complex (it’s not just that people get more exercise; the change might also encourage shifts in land-use patterns, which would affect traffic, pollution, etc.), so it would be harder to identify which effects were due to the intervention.
(3) Thus, EA approaches that focus on quantifying uncertainty are likely to miss interventions targeted at systems. Since most of our biggest problems are caused by large systems, EA will miss the highest-impact interventions.
As for the question of “what do the authors consider to be root causes,” here’s my reading of the article. Consider the case of factory farming. Probably all of us agree that the following are all necessary causes:
(1) There’s lots of demand for meat.
(2) Factory farming is currently the technology that can produce meat most efficiently and cost-effectively.
(3) Producers of meat just care about production efficiency and cost-effectiveness, not animal suffering.
I suspect you and other EAs focus on item (2) when you talk about “root causes.” In that case, you are correct that creating cheap plant-based meat alternatives would solve (2). However, I suspect the authors of this article consider (3) the root cause. They likely think that if meat producers cared more about animal suffering, they would stop factory farming or invest in alternatives on their own, and philanthropists wouldn’t need to fund those alternatives. They write:
Furthermore, they think that because the cause of (3) is a focus on cost-effectiveness (in the sense of minimizing cost per pound of meat produced), a philanthropic focus on cost-effectiveness (in the sense of minimizing cost per life saved, or whatever) promotes more cost-effectiveness-focused thinking, which makes (3) worse. And they think lots of problems have something like (3) as a root cause. This is what they mean when they talk about “values of the old system” in this quote:
As for the other quote you pulled out:
and the following discussion:
To be more concrete, I suspect what they’re talking about is something like the following. Consider a potential philanthropist like Jeff Bezos (the authors likely believe that Amazon has harmed the world through its business practices). Suppose Bezos wanted to spend $10 billion of his wealth on philanthropy. There are two ways he might do that:
(1) Donate $10 billion to worthy causes.
(2) Change Amazon’s business practices such that he makes $10 billion less money, but Amazon has a more positive (or less negative) impact on the world.
My reading is that the authors believe (2) would be of higher value, but that Bezos (and others like him) would be biased toward (1) for self-serving reasons: he would get more direct credit for doing (1) than (2), and he would be biased toward underestimating how bad Amazon’s business practices are for the world.
---
Overall, though, I agree with you: if my interpretation accurately describes the authors’ viewpoint, the article does not do a good job arguing for it. But I’m not really sure about the relevance of your statement:
Do you think that the article reflects a viewpoint that it’s not possible to make decisions under uncertainty? I didn’t get that from the article; one of their main points is that it’s important to try things even if success is uncertain.