Which GiveWell evaluation(s) though? The ones on that spreadsheet range from the evaluations used to justify Top Charity status to decisions to deprioritize a potential program after a shallow review. Two deworming charities were until recently GiveWell Top Charities, and I believe Open Phil still makes significant grants to them (presumably in reliance on GiveWell’s work).
In this post, HLI explicitly compares its evaluation of StrongMinds to GiveWell’s evaluation of AMF, and says:
“At one end, AMF is 1.3x better than StrongMinds. At the other, StrongMinds is 12x better than AMF. Ultimately, AMF is less cost-effective than StrongMinds under almost all assumptions.
Our general recommendation to donors is StrongMinds.”
This seems like an argument for scrutinizing HLI’s evaluation of StrongMinds just as closely as we’d scrutinize GiveWell’s evaluation of AMF (i.e., closely). I apologize for the trite analogy, but: if every year Bob’s blueberry pie wins the prize for best pie at the state fair, and this year Jim, a newcomer, is claiming that his blueberry pie is better than Bob’s, this isn’t an argument for employing a more lax standard of judging for Jim’s pie. Nor do I see how concluding that Jim’s pie isn’t the best pie this year—but here’s a lot of feedback on how Jim can improve his pie for next year—undermines Jim’s ability to win pie competitions going forward.
This isn’t to say that we should expect the claims in HLI’s evaluation to be backed by the same level of evidence as GiveWell’s, but we should be able to take a hard look at HLI’s report and determine that the strong claims made on its basis are (somewhat) justified.
Yes, I agree that the language re: AMF justifies a higher level of scrutiny than would be warranted in its absence. Also, the AMF-related claim makes even moderate changes in the CEA's bottom line material, more so than if the claims had been limited to something like: SM is more cost-effective than other predominantly life-enhancing charities such as GiveDirectly.