+1 Regarding extending the principle of charity towards HLI. Anecdotally, it seems very common for initial CEA estimates to be revised down as the analysis is critiqued. I think HLI has done an exceptional job of being transparent and open about their methodology and the sources of disagreement; e.g. see Joel’s comment outlining the sources of disagreement between HLI and GiveWell, which I thought was really excellent (https://forum.effectivealtruism.org/posts/h5sJepiwGZLbK476N/assessment-of-happier-lives-institute-s-cost-effectiveness?commentId=LqFS5yHdRcfYmX9jw). Obviously I haven’t spent as much time digging into the results as Gregory has, but the mistakes he points to don’t seem like the kind that should be treated too harshly.
As a separate point, I think it’s generally a lot easier to critique and build upon an analysis after the initial work has been done. E.g. even if SimonM’s assessment of Strong Minds is more reliable than HLI’s (HLI seem to dispute that the critiques he levies are all that important, since they only assign a 13% weight to that RCT), this isn’t necessarily evidence that SimonM is more competent than the HLI team. Once the heavy lifting has been done, it’s easier to focus on particular mistakes (and of course it’s valuable to do so!).