Thanks for writing this, I think this is all useful stuff.
Another answer to the Exclusive Focus on Interventions That Are Easy to Study critique is to grant that EAs may, on the whole, be falling victim to the streetlight effect, while pointing out that nothing about EA in principle requires preferring highly measurable interventions and ignoring speculative ones. So even if Pascal-Emmanuel Gobry sincerely believes that effective altruists have so far ignored interventions we should expect to be very high impact (tackling corruption, say), such an intervention could easily be accepted as highly effective by the EA metric, and he could and should make an argument to that effect, even if the argument is highly speculative.
One benefit of this response is that it accepts that EAs may tend towards the streetlight effect in some cases (and I think it's plausible that we sometimes do, which is compatible with us sometimes having the opposite tendency: being too credulous about speculative causes). Another is that the response works even if we never convince our critics that we avoid the streetlight effect: it shifts the onus back onto them to make the case for why their preferred scheme (Catholic work to reduce corruption, for example) should be expected to be so effective. Of course, there are downsides to being too concessive to our opponents.
“according to a 2014 survey, about 71% of effective altruists were interested in supporting poverty-related causes, but almost 78% were interested in global catastrophic risk reduction (including AI risk)”
To be fair, we should note that we can't say on the strength of our survey that XX% of effective altruists at large (outside our sample) were interested in poverty, AI, etc. For all we know, there could be twice as many AI risk EAs that we didn't survey. But I think the finding that more than 300 EAs expressed support for AI, x-risk, and environmentalism, and almost as many for politics, does support your point that EAs are very open to speculative causes and don't simply ignore anything that isn't an easily measurable health intervention.
It seems likely that some individual EAs have fallen prey to the streetlight effect, but I don't see anything of the sort in the general EA population; if anything, I see a slight bias in the opposite direction, though that may be my own risk aversion. What might look like the streetlight effect is this: unproven high risk–high return interventions are countless, only very few of them will be cost-effective at all, and an even tinier portion will have superior cost-effectiveness. So when it comes to donating, as opposed to research, (1) EAs don't know which of the countless high risk–high return interventions to support and so "make do" with streetlight ones, and (2) the capital they do invest in their favorite high risk–high return interventions is spread thinly in the statistics because there is little consensus about them. But that only holds for donating, not for prioritization research or EA startup ideas.
I put “make do” in quotation marks because I actually put a lot of hope into the flow-through effects of deworming, bed nets, cash transfers, etc. for empowering the poor.
Thanks! I’ll add a note about the LW oversampling.
Update: There are a lot of people with a quantitative background in the movement (well, people like me), so they'll probably have more fun studying interventions that are more cleanly quantifiable. But I think GiveWell, ACE, et al. do a good job of warning against placing exaggerated faith in individual cost-effectiveness estimates, and they probably don't fall prey to it themselves.