I think it’s fair to characterise our evaluations as looking for the “best” charity recommendations, rather than the best charity evaluators, or recommendations that reach a particular standard but are not the best. Though we’re looking to recommend the best charities, we don’t think this means there’s no value in looking into “great-charity evaluators”, as you called them. We don’t take an all-or-nothing approach when looking into an evaluator’s work and recommendations, and can choose to include only the recommendations from that evaluator that meet our potentially higher standard. This means that, so long as it’s possible some of a “great-charity evaluator’s” recommendations are the best by a particular worldview, we’d see value in looking into them.
In one sense, this raises the bar for our evaluations, but in another it also means an evaluator’s recommendations might be the best even if we weren’t particularly impressed by the quality of their work. For example, suppose there was a cause area with only one evaluator. The threshold for that evaluator’s recommendations being the best may well be: they are doing a sufficiently good job that there is a sufficiently plausible worldview by which donating via their recommendations is still a donor’s best option (i.e., compared to donating via the best evaluator in another cause area).
It’s too early to commit to how we will approach future evaluations; however, we currently lean towards sticking with the core idea of helping donors “maximise” expected cost-effectiveness, rather than “maximising” the number of donors giving cost-effectively or providing a variety of “great-but-not-best” options.
> You might also explicitly state that you don’t intend to evaluate great-charity recommenders at least at this time.
As above, we would see value in looking at charity evaluators who take an approach of recommending everything above a minimum standard, but we would only look to follow the recommendations we thought were the best (...by some sufficiently plausible worldview).
> but would recommend making it clear in posts and webpages that you are evaluating best-charity evaluators under standards appropriate for best-charity evaluators
I’d be interested in where you think we could improve our communications here. Part of the challenge we’ve faced is that we want to be careful not to overstate our work. For example, “we only provide recommendations from the best evaluators we know of and have looked into” is accurate, but “we only provide recommendations from the best evaluators” is not (because there are evaluators we haven’t looked into yet). Another challenge is not to overly qualify everything we say, to the point of being confusing and inaccessible to regular donors. Still, after scrolling through some of our content, I think we could find a way to thread this needle better, as it is an important distinction to emphasise. We also don’t want to understate our work!