So, for clarity, you disagree with @Gregory Lewis[1] here:
Regrettably, it is hard to square this with an unfortunate series of honest mistakes. A better explanation is HLI’s institutional agenda corrupts its ability to conduct fair-minded and even-handed assessment for an intervention where some results were much better for their agenda than others (cf.). I am sceptical this only applies to the SM evaluation, and I am pessimistic this will improve with further financial support.
I disagree with the valence of the comment, but think it reflects legitimate concerns.
I am not worried that “HLI’s institutional agenda corrupts its ability to conduct fair-minded and even-handed assessment.” I agree that there are some ways that HLI’s pro-SWB-measurement stance can bleed into overly optimistic analytic choices, but we are not simply taking analyses by our research partners on faith and I hope no one else is either. Indeed, the very reason HLI’s mistakes are obvious is that they have been transparent and responsive to criticism.
We disagree with HLI about SM’s rating — we use HLI’s work as a starting point and arrive at an undiscounted rating of 5-6x; subjective discounts place it between 1x and 2x, which squares with GiveWell’s analysis. But our analysis was facilitated significantly by HLI’s work, which remains useful despite its flaws.
Individual donors are, however, more likely to take a charity recommender’s analysis largely on faith—because they do not have the time or the specialized knowledge and skills necessary to kick the tires. For those donors, the main point of consulting a charity recommender is to delegate the tire-kicking duties to someone who has the time, knowledge, and skills to do that.