Indeed. I can speak to Founders Pledge, which is another of the orgs listed here:
Founders Pledge focusing on the amount of money pledged and the amount of money donated, rather than on the impact those donations have had out in the world.
While these are the metrics we are reporting most prominently, we do of course evaluate the impact these grants are having.
Thanks – does Founders Pledge publish these impact evaluations? Could you point me to an index of them, if so?
https://founderspledge.com/stories/2020-research-review-our-latest-findings-and-future-plans
Thanks… I don’t see impact evaluations of past FP money moved discussed on that page.
Are you pointing to the link out to Lewis’ animal welfare newsletter? That seems like the closest thing to an evaluation of past impact.
Impact = money moved * average charity effectiveness. FP tracks money moved to their recommended charities, and the linked page is their published research on the effectiveness of those charities and why they recommended them.
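For concreteness, here is a minimal sketch of that decomposition with made-up numbers; the dollar figure and the effectiveness-per-dollar rate below are hypothetical illustrations, not FP's actual estimates:

```python
# Hypothetical illustration of: Impact = money moved * average charity effectiveness.
# All figures are invented for the example; they are not Founders Pledge numbers.

money_moved_usd = 5_000_000        # donations attributed to a recommendation (hypothetical)
effectiveness_per_usd = 0.0002     # outcome units (e.g. "lives improved") per dollar (hypothetical)

estimated_impact = money_moved_usd * effectiveness_per_usd
print(f"Estimated impact: {estimated_impact:.0f} outcome units")
# -> Estimated impact: 1000 outcome units
```

The point of the decomposition is just that the headline "money moved" figure is one factor; the charity-effectiveness research supplies the other.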
Forward-looking estimation of a charity’s effectiveness is different from retrospective analysis of that charity’s track record / use of FP money moved.
I agree—but my impression is that they consider track record when making the forward-looking estimates, and they also update their recommendations over time, in part drawing on track record. I think “doesn’t consider track record” is a straw man, though there could be an interesting argument about whether more weight should be put on track record as opposed to other factors (e.g. intervention selection, cause selection, team quality).
I feel like I’m asking about something pretty simple. Here’s a sketch:
FP recommends Charity Z
In the first year after recommending Charity Z, FP attributes $5m in donations to Charity Z because of their recommendation
The next time FP follows up with Charity Z, they ask “What did you guys use that $5m for?”
Charity Z tells them what they used the $5m for
FP thinks about this use of funds, forms an opinion about its effectiveness, and writes about this opinion in their next update of Charity Z
GiveWell basically does this for its top charities.
I asked someone from our impact analytics team to reply here re FP, as he will be better calibrated to share what is public and what is not.
But in principle what Ben describes is correct: we have assessments of charities from our published reports (incl. judgments of partners, such as GiveWell) and we relate those to money moved. We also regularly update our assessments; charities get comprehensively re-evaluated every 2 years or so, with many adjustments in between when circumstances change (e.g. funding gaps, political circumstances).
So, this critique seems to incorrectly equate headline figure reporting with all metrics we and others are optimizing for.