I agreed with Elliott’s comment, but for a somewhat different reason that I thought might be worth sharing. The “Don’t just give well, give WELLBYs” post gave me a clear feeling that HLI was trying to position itself as the Happiness/Well-Being GiveWell, including by promoting StrongMinds as more effective than programs run by classic GW top charities. A skim of HLI’s website gives me the same impression, although somewhat less strongly than that post.
The problem as I see it is that when you set GiveWell up as your comparison point, people are likely to expect a GiveWell-type balance in your presentation (and I think that expectation is generally reasonable). For instance, when GiveWell had deworming programs as a top charity option, it was pretty clear to me within a few minutes of reading their material that the evidence base for this intervention had some issues and its top-charity status rested on a huge potential upside relative to cost. When GiveWell had standout charities, it was very clear that the depth of research and investigation behind those programs was roughly an order of magnitude less than for the top charities. Although I didn’t read everything on HLI’s website, I did not walk away with the impression that the methodological weaknesses discussed in this and other threads were disclosed and discussed very much (or nearly as much as I would expect GiveWell to have done in analogous circumstances).
The fact that HLI seems to be consciously positioning itself in the GiveWellian tradition yet lacks this balance in its presentations is, I think, what gives off the “advocacy organisation” vibes to me. (Of course, it’s not reasonable for anyone to expect HLI to have done the level of vetting that GiveWell has done for its top charities—so I don’t mean to suggest the lesser degree of vetting at this point is the issue.)
“Happiness/Wellbeing GiveWell” is a fair description of HLI in my opinion. However, I want to push back on your claim that GiveWell is more open and balanced.
As far as I can tell, there is nothing new in Simon’s post or subsequent comments that we haven’t already discussed in our psychotherapy and StrongMinds cost-effectiveness analyses. I’m looking forward to reading his future blog post on our analysis and I’m glad it’s being subjected to external scrutiny.
GiveWell, by contrast, acknowledges that it needs to improve its reasoning transparency:
Where we’d like to improve on reasoning transparency
We also agree with HLI that we have room for improvement on explaining our cost-effectiveness models. The decision about how to model whether benefits decline is an example of that—the reasoning I outlined above isn’t on our website. We only wrote, “the KLPS 4 results are smaller in magnitude (on a percentage increase basis) and higher variance than earlier survey rounds.”
We plan to update our website to make it clearer what key judgment calls are driving our cost-effectiveness estimates, why we’ve chosen specific parameters or made key assumptions, and how we’ve prioritized research questions that could potentially change our bottom line.
That’s just my opinion though and I don’t want to get into a debate about it here. Instead, I think we should all wait for GWWC to complete their independent evaluation of evaluators before drawing any strong conclusions about the relative strengths and weaknesses of the GiveWell and HLI methodologies.
To clarify, the bar I am suggesting here is something like: “After engaging with the recommender’s donor-facing materials about the recommended charity for 7-10 minutes, most potential donors should have a solid understanding of the quality of evidence and degree of uncertainty behind the recommendation; this will often include at least a brief mention of any major technical issues that might significantly alter the decision of a significant number of donors.”
Information in a CEA does not affect my evaluation of this bar very much. To qualify in my mind as “primarily a research and donor advisory organisation” (to use Elliott’s terminology), the organization should be communicating balanced information about evidence quality and degree of uncertainty fairly early in the donor-communication process. It’s not enough that the underlying information can be found somewhere in 77 pages of the CEAs you linked.
To analogize, if I were looking for information about a prescription drug, and visited a website I thought was patient-advisory rather than advocacy, I would expect to see a fair discussion of major risks and downsides within the first ten minutes of patient-friendly material rather than being only in the prescribing information (which, like the CEA, is a technical document).
I recognize that meeting the bar I suggested above will require HLI to communicate more doubt than GiveWell needs to communicate about its four currently recommended charities; that is an unavoidable effect of the fact that GiveWell has had many years and millions of dollars to target the major sources of doubt about those interventions as applied to its effectiveness metrics, and HLI has not.
I want to close by affirming that HLI is asking important questions, and that there is real value in not being too tied to a single evaluator or evaluation methodology. That’s why I (and I assume others) took the time to write what I think is actionable feedback on how HLI can better present itself as a donor-advisory organization and give off fewer “advocacy group” vibes. So none of this is intended as a broad criticism of HLI’s existence. Rather, it is specifically about my perception that HLI is not adequately communicating information about evidence quality and degree of uncertainty in medium-form communications to donors.
I read this comment as implying that HLI’s reasoning transparency is currently better than GiveWell’s, and think that this is both:
False.
Not the sort of thing it is reasonable to bring up before immediately hiding behind “that’s just my opinion and I don’t want to get into a debate about it here”.
I therefore downvoted, as well as disagree voting. I don’t think downvotes always need comments, but this one seemed worth explaining as the comment contains several statements people might reasonably disagree with.
Thanks for explaining your reasoning for the downvote.
I don’t expect everyone to agree with my comment but if you think it is false then you should explain why you think that. I value all feedback on how HLI can improve our reasoning transparency.
However, like I said, I’m going to wait for GWWC’s evaluation before expressing any further personal opinions on this matter.