This is a really insightful question!

I think it's fair to characterise our evaluations as looking for the "best" charity recommendations, rather than the best charity evaluators, or recommendations that reach a particular standard but are not the best. Though we're looking to recommend the best charities, we don't think this means there's no value in looking into "great-charity evaluators", as you called them. We don't take an all-or-nothing approach when looking into an evaluator's work and recommendations: we can choose to include only the recommendations from that evaluator that meet our potentially higher standard. This means that, so long as it's possible some of the recommendations of a "great-charity evaluator" are the best by a particular worldview, we'd see value in looking into them.
In one sense, this raises the bar for our evaluations, but in another it also means an evaluator's recommendations might be the best even if we weren't particularly impressed by the quality of the work. For example, suppose there were a cause area with only one evaluator. The threshold for this evaluator being the best may well be: they are doing a sufficiently good job that there is a sufficiently plausible worldview under which donating via their recommendations is a donor's best option (i.e., compared to donating via the best evaluator in another cause area).
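To make this threshold slightly more concrete, here is one way it could be formalised; this is a minimal sketch using illustrative notation of our own ($W$, $p$, $\theta$, and $\mathrm{CE}$ are not terms from our methodology). Let $W$ be the set of worldviews under consideration, $p(w)$ the plausibility assigned to worldview $w$, $\theta$ a plausibility threshold, and $\mathrm{CE}_w(e)$ the expected cost-effectiveness, under $w$, of donating via evaluator $e$'s recommendations. The criterion is then roughly:

$$\text{include } e \iff \exists\, w \in W : \ p(w) \ge \theta \ \text{ and } \ \mathrm{CE}_w(e) \ge \max_{e'} \mathrm{CE}_w(e').$$

In words: an evaluator's recommendations remain in contention so long as at least one sufficiently plausible worldview ranks donating via them at least as highly as donating via any other evaluator.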
It's too early to commit to how we will approach future evaluations; however, we currently lean towards sticking with the core idea of helping donors "maximise" expected cost-effectiveness, rather than "maximising" the number of donors giving cost-effectively / providing a variety of "great-but-not-best" options.
"You might also explicitly state that you don't intend to evaluate great-charity recommenders at least at this time."
As above, we would see value in looking at charity evaluators who take an approach of recommending everything above a minimum standard, but we would only look to follow the recommendations we thought were the best (...by some sufficiently plausible worldview).
"…but would recommend making it clear in posts and webpages that you are evaluating best-charity evaluators under standards appropriate for best-charity evaluators"
I'd be interested in where you think we could improve our communications here. Part of the challenge we've faced is that we want to be careful not to overstate our work. For example, "we only provide recommendations from the best evaluators we know of and have looked into" is accurate, but "we only provide recommendations from the best evaluators" is not (because there are evaluators we haven't looked into yet). Another challenge is not to overly qualify everything we say, to the point of being confusing and inaccessible to regular donors. Still, after scrolling through some of our content, I think we could find a way to thread this needle better, as it is an important distinction to emphasise; we also don't want to understate our work!