(I no longer work at GWWC, but wrote the reports on the LTFF/ECF, and was involved in the first round of evaluations more generally.)
In general, I think GWWC's goal here is "to support donors in having the highest expected impact given their worldview", which can come apart from supporting donors to give to the most well-researched/vetted funding opportunities. For instance, if you have a longtermist worldview, or perhaps take AI x-risk very seriously, then I'd guess you'd still want to give to the LTFF/ECF even if you thought the quality of their evaluations was lower than GiveWell's.

Some of this is discussed in "Why and how GWWC evaluates evaluators" in the limitations section:
Finally, the quality of our recommendations is highly dependent on the quality of the charity evaluation field in a cause area, and hence inconsistent across cause areas. For example, the state of charity evaluation in animal welfare is less advanced than that in global health and wellbeing, so our evaluations and the resulting recommendations in animal welfare are necessarily lower-confidence than those in global health and wellbeing.
And also in each of the individual reports, e.g. from the ACE MG report:
As such, our bar for relying on an evaluator depends on the existence and quality of other donation options we have evaluated in the same cause area.
In cause areas where we currently rely on one or more evaluators that have passed our bar in a previous evaluation, any new evaluations we do will attempt to compare the quality of the evaluator's marginal grantmaking and/or charity recommendations to those of the evaluator(s) we already rely on in that cause area.
For worldviews and associated cause areas where we don't have existing evaluators we rely on, we expect evaluators to meet the bar of plausibly recommending giving opportunities that are among the best options for their stated worldview, compared to any other opportunity easily accessible to donors.
Mmm, so maybe the crux is at (3) or (4)? I think that GWWC may be assuming too much about how viewers are interpreting the messaging and presentation around the evaluations. I think there is probably a way to signal the differences in evaluation strength while still maintaining the BYO worldview approach?
Just speaking for myself, I'd guess those would be the cruxes, though I don't personally see easy fixes. I also worry that you could err on the side of being too cautious, potentially adding warning labels that give people an overly negative impression compared to the underlying reality. I'm curious if there are examples where you think GWWC could strike a better balance.
I think this might be symptomatic of a broader challenge for effective giving for GCR, which is that most of the canonical arguments for focusing on cost-effectiveness involve GHW-specific examples that don't clearly generalize to the GCR space. But I don't think that indicates you shouldn't give to GCR, or care about cost-effectiveness in the GCR space: from a very plausible worldview (or at least, the worldview I have!) the GCR-focused funding opportunities are the most impactful funding opportunities available. It's just that the kind of reasoning underlying those recommendations/evaluations is quite different.
canonical arguments for focusing on cost-effectiveness involve GHW-specific examples that don't clearly generalize to the GCR space.
I am not sure I understand the claim being made here. Do you believe this to be the case because of a tension between hits-based and cost-effective giving?
If so, I may disagree with the point. Fundamentally, if you're a hits-based grantmaker, you still care about: (1) the amount of impact resulting from a hit; (2) the odds of getting a hit; (3) indicators that may lead up to getting a hit; (4) the marginal impact of your grant.

(1) and (2) require a solid theory of change and back-of-the-envelope (BOTEC) expected-value calculations; (3) requires good monitoring and evaluation (M&E).
Ultimately, I wouldn't see much of a tension between hits-based and cost-effective giving, other than a much higher tolerance for risk.
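To make that concrete, here is a minimal sketch of the BOTEC arithmetic I have in mind (all numbers are hypothetical, not drawn from any real grant or evaluation): a hits-based grant can dominate on expected impact per dollar even when it usually produces nothing.

```python
# Toy BOTEC comparing a "sure thing" grant to a hits-based grant.
# All numbers are invented for illustration; only the structure matters.

def impact_per_dollar(p_hit: float, impact_if_hit: float, cost: float) -> float:
    """Expected impact per dollar: probability-weighted impact divided by cost."""
    return p_hit * impact_if_hit / cost

# A reliable GHW-style grant: near-certain, modest payoff.
sure_thing = impact_per_dollar(p_hit=0.95, impact_if_hit=1_000, cost=10_000)

# A hits-based GCR-style grant: small chance of a very large payoff.
long_shot = impact_per_dollar(p_hit=0.02, impact_if_hit=100_000, cost=10_000)

print(f"Sure thing: {sure_thing:.3f} impact units per dollar")  # 0.095
print(f"Long shot:  {long_shot:.3f} impact units per dollar")   # 0.200

# The long shot has roughly twice the expected impact per dollar despite a
# 98% chance of producing nothing: the same cost-effectiveness logic, just
# with a much higher tolerance for variance.
```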
I suppose to tack onto Elliot's answer, I'm curious about what you see the differences in reasoning to be. If it is merely that GCR giving opportunities are more hits-based / high-variance, I could see, for example, a small label being applied on the GWWC website next to higher-risk opportunities with a link to something like the explanations you've written above (and the evaluation reports).
That kind of labelling feels like only a quantitative difference from the current binary evaluations (as in, currently GWWC signals inclusion/exclusion, but could extend that to signal strength of evaluation or risk of opportunity).