I really like the education review; it seems like a great introduction to the literature on effective education interventions. And it’s even better that you’ll be reviewing health interventions soon, given that they generally seem more effective than education interventions, in terms of both certainty and overall impact.
But I would still have strong confidence that GiveWell’s top charities all have significantly higher expected value than any charity this investigation turns up, for two reasons.
First, GiveWell has access to the internal workings of charities, allowing them to recommend charities that do a better job of implementing their interventions. This goes as far as GiveWell making almost a dozen site visits over the past five years to observe these charities in action directly. There’s just no way to replicate this without close, prolonged contact with all the relevant charities.
Second, GiveWell simply has more experience and expertise in development evaluations than someone doing this in their free time. It’s fantastic that you all are working with these donors, and your actions seem likely to have a strong impact. But GiveWell has 25 staff, a decade of experience in the area, and access to relevant experts and insider information. It’s very difficult to replicate the quality of recommendations that come from that process. Doing the research yourself has other benefits: it increases engagement with the cause, it teaches a valuable skill, etc. But when there’s a million dollars to be donated, it might be best to trust GiveWell.
If the donors want an intervention that’s both certain and transformative, GiveDirectly seems like an obvious choice.
I generally agree with the above. I love GiveWell. However, I think doing your own charity evaluation has more benefits than just learning skills and becoming more engaged. A couple of extra benefits, off the top of my head:
- Doing your own charity evaluation means you can challenge GiveWell when you think they’ve gotten something wrong.
- Encouraging people other than GiveWell to do charity evaluation means we’re in a better position if GiveWell ever stops performing to its current standards (e.g., if 2-3 key staff members left at the same time).
- Investigating in depth a particular area that GiveWell hasn’t spent much time on recently, like education, could give the community access to useful information (maybe one of these charities is more effective than we think; maybe this list helps us pick a charity to donate to on behalf of our teacher friend; maybe it provides useful advice for EA Quebec’s donors!)
Good point; I wasn’t fully considering that. I think Michael Plant’s recent investigation into mental health as a cause area is a perfect example of the value of independent research: mental health isn’t something GiveWell has spent much time on. While I still think it’s going to be extremely difficult to beat GiveWell at, e.g., evaluating which deworming charity is most effective, or which health intervention tends to be most effective, I do think independent researchers can make important contributions by identifying GiveWell’s “blind spots”.
Mental health and education could both be good examples. GiveWell doesn’t currently recommend either, but these also aren’t areas where GiveWell has spent years building expertise. So it’s reasonable to expect that, in these areas, a dedicated newcomer can produce research that rivals GiveWell’s in quality.
So I’d revise my stance to: Do your own research if there’s an upstream question (like the moral value of mental suffering, the validity of life satisfaction surveys, or the intrinsic value of education) that you think GiveWell might be wrong about. Often, you’ll conclude that they were right, but the value of uncovering their occasional mistakes is high. Still, trust GiveWell if you agree with their initial assumptions on what matters.
In addition to Khorton’s points in a sibling comment, GiveWell explicitly optimizes not just for expected value by their own lights, but for transparency/replicability of reasoning according to certain standards of evidence. If your donors are willing to be “highly engaged” or trust you a lot, or if they have different epistemics from GiveWell (e.g., if they put relatively more weight on models of root-level causes of poverty/underdevelopment, compared to RCTs), I bet there’s something else out there that they would think is higher expected value.
Of course, finding and vetting that thing is still a problem, so it’s possible that the thoroughness and quality of GW’s research outweighs these points, but it’s worth considering.