On the robustness of cost-effectiveness estimates

A good post from Jonah Sinick on the robustness of cost-effectiveness estimates here. In brief: GiveWell have consistently found that the cost-effectiveness estimates of the best interventions are too optimistic. This is a to-be-expected example of regression to the mean. But Jonah found that this effect occurred much more strongly than he would have expected before he worked for GiveWell. So he urges you to favour interventions with more robust evidence of good but lower cost-effectiveness over those with less robust evidence but higher estimated cost-effectiveness, and suggests that overweighting the latter might be a mistake that EAs make more generally.
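The regression-to-the-mean effect Jonah describes can be illustrated with a toy simulation (all numbers here are hypothetical, purely for illustration): if each intervention's measured cost-effectiveness is its true effect plus independent noise, then the intervention with the *highest estimate* will, on average, have a true effect below that estimate.

```python
import random

random.seed(0)

def top_pick_gap(n_interventions=100, trials=10_000):
    """Average gap between the top-ranked estimate and its true effect.

    Each intervention has a true effect drawn from N(0, 1); its estimate is
    the true effect plus independent N(0, 1) measurement noise. We select
    the intervention with the highest estimate and record how much that
    estimate overstates its true effect.
    """
    gaps = []
    for _ in range(trials):
        true_effects = [random.gauss(0, 1) for _ in range(n_interventions)]
        estimates = [t + random.gauss(0, 1) for t in true_effects]
        best = max(range(n_interventions), key=lambda i: estimates[i])
        gaps.append(estimates[best] - true_effects[best])
    return sum(gaps) / len(gaps)

print(top_pick_gap())  # positive on average: the best-looking estimate is optimistic
```

The key point is that the selection step itself creates the bias: no individual estimate is biased, but conditioning on "this one ranked highest" favours interventions whose noise happened to be positive.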

I’m pretty sympathetic to the thrust of the post, but I had a couple of immediate thoughts.

First, Jonah talks about “lives saved per dollar”—but what EAs are ultimately concerned about is “good done per dollar”. And, for me, the amount of good I can do per dollar is far greater than I initially would have thought. This is because, in my initial analysis—and in what I’d presume are most people’s initial analyses—benefits to the long-term future of civilisation weren’t taken into account, or weren’t thought to be morally relevant. (Example: saving a life doesn’t just benefit that person; because the person is economically productive, it also benefits society as a whole. Moreover, these economic benefits continue through the generations and in fact compound at a rate roughly equal to general economic growth.) But those (expected) benefits strike me, and most people I’ve spoken with who agree that they are morally relevant, as far greater than the short-term benefits to the person whose life is saved. So my expectations about how much good I can do in the world have been exceeded by a far greater margin than I’d previously thought likely. And that holds true whether it costs $2,000 or $20,000 to save a life. So the lesson to take from past updates on evidence can look quite different depending on whether you’re talking about “good done per dollar” or “lives saved per dollar”—and the former is what we ultimately care about.
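To make the compounding claim concrete, here is a minimal sketch. The dollar figure, growth rate, and horizon are all hypothetical placeholders, not estimates from the post; the point is only how quickly a benefit grows when it compounds at something like the general economic growth rate.

```python
def compounded_benefit(initial_benefit, growth_rate, years):
    """Value of an initial one-off economic contribution after it
    compounds at `growth_rate` per year for `years` years.
    All inputs are hypothetical, for illustration only."""
    return initial_benefit * (1 + growth_rate) ** years

# A hypothetical $1,000 contribution compounding at 2% per year:
print(round(compounded_benefit(1_000, 0.02, 200)))   # after 200 years
```

Even at a modest 2% rate, two centuries of compounding multiplies the initial contribution roughly fifty-fold, which is why the long-run (expected) benefits can dwarf the short-term benefit to the person whose life is saved.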

Second, something Jonah doesn’t mention is that, when you find out that your evidence is poorer than you’d thought, two general lessons are to pursue activities with high option value and to pay to gain new evidence (though these lessons only follow if you think you can gain a decent amount of new evidence in the future). Building a movement of people who are aiming to do the most good with their marginal resources, and who are trying to work out how best to do that, strikes me as a good way to achieve both of these things. It’s been a major factor in my decision to focus on building the effective altruism movement in general, rather than focusing on any one specific activity.