Giving What We Can does not conduct primary research into charities. Instead, we rely on several other organisations that do. We call these our “trusted evaluators,” and their work helps us ensure that our community members can have the biggest possible impact with their donations.
We, the Giving What We Can research team, have chosen these experts because of our subjective impression that they meet a strong standard, according to the criteria above. In 2023, we intend to do a thorough reevaluation of all our current trusted evaluators, in addition to evaluating new potential trusted evaluators.
[Speaking in a personal capacity]
It doesn’t answer all your questions, but you might find this interesting: https://www.givingwhatwecan.org/trusted-evaluators
On that page, you can find the current “trusted evaluators” according to GWWC, and at the bottom “a tentative list of additional charitable giving experts we are considering investigating in 2023”.
As for your questions, to the best of my understanding:
1) Here’s a list from @Sjir Hoeijmakers: https://docs.google.com/spreadsheets/d/1OSv9vkW0UkTyOuwOnYZeFfiZT8hrF8DSvUIwKKfh95A/edit#gid=0 (you can look at the ones marked as “Funding opportunity supplier”). I don’t know which ones you would consider “EA-aligned”; I don’t think there’s a strong consensus on what “EA-aligned” means.
2) This is a very deep topic that I’m not an expert in, but you might find this post useful: Measuring Good Better. Here’s the video version:
As I personally see it, there are two kinds of differences between evaluators:
epistemic disagreements (which hopefully are falsifiable and could in theory be resolved), for example:
What’s the actual effect of deworming programs on income years later (see the famous worm wars)
What’s the counterfactual value of subsidized cataract surgery (see your previous question)
value disagreements (where it seems unlikely we’ll ever reach a consensus), for example:
How do you weigh the suffering of humans vs the suffering of other animals
How do you weigh extending a life vs improving a life
How much do you value freedom, self-determination, wellbeing, happiness
How do you weigh future generations vs present individuals
3) I don’t know much about them, but until recently I think they were focusing less on impact (the results the charities achieve) and more on things like the organization’s transparency, overheads, and culture. From a quick skim of their website, it seems that they don’t recommend the most impactful donation opportunities, but rather rate charities across a range of metrics.
Thank you very, very much for your input, Lorenzo! Very helpful as always. Keep up the good work!