Love to see these reports!
I have two suggestions/requests for "crosstabs" on this info (which is naturally organised by evaluator, because that's what the project is!):
1. As of today, which evaluators/charities sit where on the recommendation scale. The info for that is mostly on GWWC's website but not quite organised as such. I'm thinking of rows for cause areas, columns for buckets, e.g. "Recommended" at one end and "Maybe not cost-effective" at the other (though maybe you'd drop things off altogether). Just something to help visualise what's moved and by how much, and broadly why things are sitting where they are (e.g. THL corporate campaigns sliding off the recommended list for "procedural" reasons, so not in the Recommended column but now in a "Nearly" column or something).
2. I'd love a clear checklist of what you think needs improvement per evaluated program, to help make the list a little more evergreen. I think all that info is in your reporting, but if you called it out, I think it would:
   - help evaluated programs, and
   - help donors to:
     - get a sense of how up to date that recommendation is (given the rotating/rolling nature of the evaluation program), and
     - possibly do their own assessment of whether the charity "should" be recommended "now".
Thanks for the comment; we appreciate the suggestions!
With respect to your first suggestion, I want to clarify that our goal with this project is to identify evaluators that recommend among the most cost-effective opportunities in each cause area according to a sufficiently plausible worldview. This means that, among our recommendations, we don't have a view about which is more cost-effective, and we don't try to rank the evaluators that we don't choose to rely on. That said, I can think of two resources that might somewhat address your suggestion:
- This section on our 2024 evaluating evaluators page explains which programs have changed status following our 2024 evaluations and why.
- In the other supported programs section of our donation platform, we roughly order the programs based on our preliminary impression of which might be most interesting to impact-focused donors in each cause area. To do this, we take into account factors like whether we've previously recommended them and whether they are currently recommended by an impact-focused evaluator.
With respect to your second suggestion, while we don't include a checklist as such, we try to include the major areas for improvement in the conclusion section of each report. In future, we might consider organising these more clearly and making them more prominent.