I can also vouch for HLI. Per John Salter’s comment, I may also have been a little sus (sorry Michael) early on, but HLI’s work has been extremely valuable for our own methodology improvements at Founders Pledge. The whole team is great, and I will second John’s comment to the effect that Joel’s expertise is really rare and that HLI seems to be the right home for it.
I appreciate this kind of transparent vouching for orgs. Makes it easier to discuss what’s going on.
How do you think you’ll square this if the forthcoming RCT downgrades StrongMinds’ work by a factor of 4 or more? I’m confused about how HLI could miss this error (if it happens).
That said, as John says their actual produced work could still be very cheap at this price.
I guess I would very slightly adjust my sense of HLI, but I wouldn’t really think of this as an “error.” I don’t significantly adjust my view of GiveWell when they delist a charity based on new information.
I think if the RCT downgrades StrongMinds’ work by a big factor, that won’t really introduce new information about HLI’s methodology/expertise. If you think there are methodological weaknesses that would cause them to overstate StrongMinds’ impact, those weaknesses should be visible now, irrespective of the RCT results.
So, for clarity, you disagree with @Gregory Lewis[1] here:
Regrettably, it is hard to square this with an unfortunate series of honest mistakes. A better explanation is HLI’s institutional agenda corrupts its ability to conduct fair-minded and even-handed assessment for an intervention where some results were much better for their agenda than others (cf.). I am sceptical this only applies to the SM evaluation, and I am pessimistic this will improve with further financial support.
I disagree with the valence of the comment, but think it reflects legitimate concerns.
I am not worried that “HLI’s institutional agenda corrupts its ability to conduct fair-minded and even-handed assessment.” I agree that there are some ways that HLI’s pro-SWB-measurement stance can bleed into overly optimistic analytic choices, but we are not simply taking analyses by our research partners on faith and I hope no one else is either. Indeed, the very reason HLI’s mistakes are obvious is that they have been transparent and responsive to criticism.
We disagree with HLI about SM’s rating — we use HLI’s work as a starting point and arrive at an undiscounted rating of 5-6x; subjective discounts place it between 1-2x, which squares with GiveWell’s analysis. But our analysis was facilitated significantly by HLI’s work, which remains useful despite its flaws.
I agree that there are some ways that HLI’s pro-SWB-measurement stance can bleed into overly optimistic analytic choices, but we are not simply taking analyses by our research partners on faith and I hope no one else is either.
Individual donors are, however, more likely to take a charity recommender’s analysis largely on faith—because they do not have the time or the specialized knowledge and skills necessary to kick the tires. For those donors, the main point of consulting a charity recommender is to delegate the tire-kicking duties to someone who has the time, knowledge, and skills to do that.
How do I do the @ search?
Hello Matt, and thanks for your overall vote of confidence, including your comments below to Nathan.
Could you expand on what you said here?
I’m curious to know why you were originally suspicious and what changed your mind. Sorry if you’ve already stated that below.