Love to see these reports!
I have two suggestions/requests for "crosstabs" on this info (which is naturally organised by evaluator, because that's what the project is!):

1. As-of-today, which evaluators/charities sit where on the recommendation scale. The info for that is mostly on GWWC's website but not quite organised as such. I'm thinking of rows for cause areas, columns for buckets, e.g. "Recommended" at one end and "Maybe not cost-effective" at the other (though maybe you'd drop things off altogether). Just something to help visualise what's moved and by how much, and broadly why things are sitting where they are (e.g. THL corporate campaigns sliding off the recommended list for "procedural" reasons, so not in the Recommended column but now in a "Nearly" column or something).
2. I'd love a clear checklist of what you think needs improvement per evaluated program, to help with making the list a little more evergreen. I think all that info is in your reporting, but if you called it out, I think it would:
   - help evaluated programs, and
   - help donors to:
     - get a sense of how up-to-date that recommendation is (given the rotating/rolling nature of the evaluation program), and
     - possibly do their own assessment of whether the charity "should" be recommended "now".
Thanks for the comment – we appreciate the suggestions!
With respect to your first suggestion, I want to clarify that our goal with this project is to identify evaluators that recommend among the most cost-effective opportunities in each cause area according to a sufficiently plausible worldview. This means we don't take a view on which of our recommendations is more cost-effective, and we don't try to rank the evaluators that we don't choose to rely on. That said, I can think of two resources that might somewhat address your suggestion:

- This section on our 2024 evaluating evaluators page explains which programs have changed status following our 2024 evaluations, and why.
- In the other supported programs section of our donation platform, we roughly order the programs based on our preliminary impression of which might be most interesting to impact-focused donors in each cause area. To do this, we take into account factors such as whether we've previously recommended them and whether they are currently recommended by an impact-focused evaluator.
With respect to your second suggestion, while we don't include a checklist as such, we try to include the major areas for improvement in the conclusion section of each report. In future, we might consider organising these more clearly and making them more prominent.