The basic premise of this post: It’s better to solve 0.00001% of a $7 billion problem than to solve 100% of a $500 problem. (One could quibble with various oversimplifications that this formulation makes for the sake of pithiness, but the basic point is uncontroversial within EA.)
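(To spell out the arithmetic: 0.00001% of $7 billion is 10⁻⁷ × $7,000,000,000 = $700, which still beats the $500 you'd get from solving the small problem completely.)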
The key question: If this point is both true and obvious, why do so many people outside EA not buy it, and why do so many people within EA harbor inner doubts or feelings of failure when acting in accordance with it?
We should ask ourselves this not only to better inspire and motivate each other, or to better persuade outsiders, but also because it’s possible that this phenomenon is a hint that we’ve missed some important consideration.
I think the point about Aristides de Sousa Mendes is a bit of a red herring.
It seems like more-or-less a historical accident that Sousa Mendes is more obscure than, e.g., Oskar Schindler. Even so, he’s fairly well-known, and has pretty definitively gone down in history as a great hero. I don’t think “but he only solved 0.1% of a six-million-life problem” is an objection that anyone actually has. Saving 10,000 lives impresses people, and it doesn’t seem to impress them less just because six million others were still dying in the background.
(The main counterargument that I can think of is the findings of the heuristics-and-biases literature on scope neglect, e.g., Daniel Kahneman’s experiment asking people to donate to save oil-soaked birds. I think that this kind of situation is a little different; here, you’re not appealing to something that people already care about, you’re producing a new problem out of thin air and asking people to quickly figure out how to fit it into their existing priorities. I think it makes sense that this setup doesn’t elicit careful thought about prioritization, since that’s hard, and instead people fall back on scope-insensitive heuristics. But this is a very rough argument and possibly there’s more literature here that I should be reading.)
When people are skeptical, either vocally or internally in the backs of their minds, of the efficacy of donating $500 to the Rapid Response Fund, I don’t think it’s because they expect effects analogous to what Sousa Mendes did and consider that not good enough. I think it’s because they suspect that the effects won’t be analogous to what Sousa Mendes did.
In a post about a different topic (behavioral economics), Scott Alexander writes:
> 1% of a small number isn’t worth it! 1% of a big number is very worth it, especially if that big number is a number of lives!
> A few caveats. First, a small number only matters if it’s real. It’s very easy to get spurious small effects, so much so that any time you see a small effect you should wonder if it’s real.
I think people are worried about something like this, and I think it’s not unreasonable for them to worry.
I once observed an argument on a work mailing list with someone who was skeptical of EA. The core of this person’s disagreement with us is that they think we’ve underestimated the insidiousness of the winner’s curse. From this perspective, GiveWell’s top charity selection process doesn’t identify the best interventions—it identifies the interventions whose proponents are most willing to engage in p-hacking. Therefore, you should instead support local charities that you have personally volunteered for and whose beneficiaries you have personally met—not because of some moral-philosophical idea of greater obligations to people near you, but because this is the only kind of charity that you can know is doing any good at all.
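To make the worry concrete, here’s a minimal simulation of the winner’s curse, with entirely made-up numbers (this is the skeptic’s toy model, not a claim about GiveWell’s actual process): a hundred interventions that are all equally effective in truth, each evaluated with noise, where we “recommend” whichever one measures best.

```python
import random

random.seed(0)

N_INTERVENTIONS = 100  # hypothetical pool of candidate interventions
TRUE_EFFECT = 1.0      # in truth, every intervention is equally effective
NOISE_SD = 0.5         # noise in each intervention's measured effect

# Each intervention gets one noisy evaluation.
measured = [TRUE_EFFECT + random.gauss(0.0, NOISE_SD) for _ in range(N_INTERVENTIONS)]

# The "top charity" is whichever intervention happened to measure best...
winner = max(measured)

# ...and its measured effect overstates its true effect, purely via selection.
print(f"True effect of every intervention: {TRUE_EFFECT:.2f}")
print(f"Measured effect of the 'winner':   {winner:.2f}")
```

With numbers like these, the winner will typically measure at something like double its true effect, and the gap only grows if evaluators are allowed more analytic flexibility per intervention, which is the p-hacking version of the same selection effect.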
GiveWell in particular is careful enough that I don’t worry too much that they’ve fallen into this trap. But ACE, in its earlier years, infamously recommended interventions whose efficacy turned out to have been massively overestimated. I suspect that this is also true of some interventions that are still getting significant attention and resources from within EA, even if I can’t confidently say which ones.
And then of course there’s the plain fact that big problems are complicated: the argument for why any particular intervention is effective typically has a lot of steps, and/or requires you to trust various institutions whose inner workings you don’t understand that well. All this adds up to a sense that small donations toward a big problem wind up just throwing money into a black hole, with no one really helped.
This, I think, is the real challenge that EA needs to overcome: not the small size of our best efforts relative to the scope of the problem, but skepticism, implicit or explicit, loud or quiet, justified or unjustified, that our best efforts produce real results at all.
FWIW I think that GiveWell selects organisations based on close to the best evidence base we have, which is kind of the opposite of “p-hacking”. It doesn’t make sense to me that anyone can be sure their local charity is doing “any good at all” without knowing the counterfactual.
My classic example to illustrate this is the original microloans in Bangladesh. Everyone could “see” how much they were helping, as most of the women loaned money were growing successful businesses. That is, until they looked at cohorts of women who didn’t get loans and found that many of them were running successful businesses as well. The loans were helping counterfactually, but only in a minor way: most of the women would have done great anyway without the loan.
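(With made-up numbers: if 80% of loan recipients were running successful businesses but so were 70% of comparable women without loans, the counterfactual effect of the loans is 80% - 70% = 10 percentage points, not the 80% you’d credit them with by looking at recipients alone.)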
I think with situations like the ACE charities that ended up not being useful, it’s less of a p-hacking problem and more of an uncertainty problem. They just don’t have the same rigorous evidence base for efficacy as global health interventions, so there are likely to be more failures, and that’s hard to avoid.