Here’s a provocative take on your experience. I don’t really endorse it, but I’d be interested to hear your reaction:
Finding unusually cost-effective global health charities isn’t actually a wicked problem. You just look into the existing literature on global health prioritization, apply a bunch of quick heuristics to find the top interventions, find charities implementing them, and then see which ones will get more done with more funding. In fact, Giving What We Can independently started recommending the Against Malaria Foundation through a process that was much faster than the above. Peter Singer also came up with donation recommendations that seem not much worse than current GiveWell top recommendations based on fairly limited research.
In response to such a comment, I might say that GiveWell actually had much more reason than GWWC did to think AMF was indeed one of the most cost-effective charities, that Peter Singer’s recommendations were good but substantially less cost-effective (and that the improvement is clearly worth it), and that the above illustration of the wicked problem experience is useful because it applies more strongly in other areas (e.g. AI forecasting). But I’m curious about your response.
Apologies for chiming in so late!

I believe GWWC’s recommendation of the Against Malaria Foundation was based on GiveWell’s (otherwise they might’ve recommended another bednet charity). And Peter Singer generally did not recommend the charities that GiveWell ranks highly before GiveWell ranked them highly.
I don’t want to deny, though, that for any given research project you might undertake, there’s often a much quicker approach that gets you part of the way there. I think the process you described is a fine way to generate some good initial leads (I think GWWC independently recommended Schistosomiasis Control Initiative before GiveWell did, for example). As the stakes of the research rise, though, I think it becomes more valuable and important to get a lot of the details right—partly because so much money rides on it, partly because quicker approaches seem more vulnerable to adversarial behavior/Goodharting of the process.