In addition to Khorton’s points in a sibling comment, GiveWell explicitly optimizes not just for expected value by their own lights, but also for transparency and replicability of reasoning according to certain standards of evidence. If your donors are willing to be “highly engaged” or trust you a lot, or if their epistemics differ from GiveWell’s (e.g., if they put relatively more weight on models of the root causes of poverty/underdevelopment than on RCTs), I bet there’s something else out there that they would consider higher in expected value.
Of course, finding and vetting that alternative is still a problem, so it’s possible that the thoroughness and quality of GiveWell’s research outweighs these points, but it’s worth considering.