Thanks for your comment, Hendrik!
To address this, I think it's important to look at the value each additional layer of evaluation provides. It seems (with the multitude of evaluators and fundraisers) we are now at a point where at least some work in the second layer is necessary/useful, but I don't think a third layer would currently be justified (with 0-1 organisations active in the second layer).
Another way to see this is that the "turtles all the way down" concern already applies to the first layer of evaluators (why do we need one if charities are already evaluating themselves and reporting on their impact? Who is evaluating these evaluators?). The relevant question is whether a layer adds enough value. The first layer clearly does (given how many charities and donors there are, and the lack of public, independent information on how they compare), and I argue above that the second does as well.
FWIW I don't think this second layer should be fully or forever centralised in GWWC, and I see some value in more fundraising organisations having at least some research capacity to determine their recommendations, but we need to start somewhere and there are diminishing returns to adding more. Relatedly, I should say that I don't expect fundraising organisations to just "listen to whatever GWWC says": we provide recommendations and guidance, and these organisations may use that to inform their choices (which is a significant improvement on having no guidance at all when choosing among evaluators).