(Epistemic status: speculative)

ETA a TL;DR: it may lie in using relatively small amounts of EA funding to counterfactually multiply the positive effect of non-EA resources, or to counterfactually move substantial non-EA funding toward much more effective charities (even if not GiveWell's best).
> What is the ToC for meta Global Health work?
It could lie in a few places. As an example, one could provide modest operational funding to student volunteer-led organizations. Even a small external budget can be a real force multiplier for a student organization, making existing resources (e.g., student volunteer time, access to campus facilities, access to a population that is reflecting on its values and has time to hear a good speaker) significantly more effective.
Drawing on my own life: I attended something like an Oxfam Hunger Banquet as one option for fulfilling a requirement of my freshman seminar class in college. I think that event had a meaningful effect on my own views about effectiveness and global priorities. If one could counterfactually give a similar, even mildly EA-flavored experience to college freshmen for a few dollars each, I speculate that the ROI would be quite good (e.g., in promoting effective giving). That only works if the funding acts as a force multiplier: you'd need many of the inputs to be provided for "free" by non-EA sources. But as in my Hunger Banquet example, I don't think that is implausible.
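To put rough numbers on that speculation, here is a minimal back-of-the-envelope sketch; every figure is a placeholder assumption of mine, not something from this thread:

```python
# Rough sketch of the "few dollars per freshman" ROI claim.
# Every figure below is a placeholder assumption, not a number from this thread.

cost_per_attendee = 5        # $ of EA money per freshman reached
p_shifts_giving = 0.01       # assume 1 in 100 attendees later gives effectively
redirected_giving = 10_000   # $ of lifetime giving redirected per such person

expected_value = p_shifts_giving * redirected_giving
print(f"~${expected_value:,.0f} redirected per ${cost_per_attendee} spent "
      f"(~{expected_value / cost_per_attendee:.0f}x)")
# -> ~$100 redirected per $5 spent (~20x)
```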
> **Find people who can start new direct charities?** ... To the best of my knowledge, I don't think that new GHW charities have had much luck beating the best GiveWell charities (by a GiveWell-type view's lights).
I don't think we should assume that the new charities will only receive donations from EA sources. If a GHW meta grantmaker provides startup funding to a new charity, and as a result that charity ends up diverting $1MM a year from ~ineffective charities to ~0.5X GiveWell work, the value is equivalent to donating ~$500K/year to a GiveWell top charity. Many potential donors are pre-committed to a specific subfield (e.g., mental health), or find diffuse interventions like bednets unappealing for whatever reason. So their dollars were never in play for GiveWell top charities anyway.
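The arithmetic behind that equivalence, as a minimal sketch (the dollar figures are the illustrative ones from this paragraph, and the zero baseline is an assumption):

```python
# GiveWell-equivalent value of the diversion described above.
# Inputs are the illustrative figures from the comment.

diverted_per_year = 1_000_000    # $ moved away from ~ineffective charities
effectiveness = 0.5              # new charity at ~0.5x a GiveWell top charity
baseline = 0.0                   # assume the displaced giving did ~no good

equivalent = diverted_per_year * (effectiveness - baseline)
print(f"GiveWell-equivalent value: ${equivalent:,.0f}/year")
# -> GiveWell-equivalent value: $500,000/year
```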
In addition to providing startup funds, one could argue for funding a meta organization that, e.g., helps carefully selected 98th-percentile-effectiveness organizations write convincing grant pitches to governments and non-EA foundations. I guess that comes back to force multipliers too: it's not very effective to fund these organizations' operating expenses on a long-term basis, but the right strategic investments might help them leverage enough non-EA money to create a really good ROI.
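A toy version of that leverage argument, with made-up numbers (none of these figures come from the thread):

```python
# Leverage sketch: a small EA grant buys pitch-writing support that
# unlocks much larger non-EA funding. All figures are assumptions.

ea_cost = 50_000            # $ of EA money for strategic fundraising support
non_ea_raised = 2_000_000   # $ of government/foundation money unlocked
effectiveness = 0.5         # recipient runs at ~0.5x a GiveWell top charity

givewell_equivalent = non_ea_raised * effectiveness
print(f"${givewell_equivalent:,.0f} GiveWell-equivalent "
      f"for ${ea_cost:,.0f} of EA money ({givewell_equivalent / ea_cost:.0f}x)")
# -> $1,000,000 GiveWell-equivalent for $50,000 of EA money (20x)
```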
I haven't come across any good non-EA GHD student groups. Remember that they need to beat the bar of current uni EA groups (which can get funding from Open Phil) from a GHD perspective, and I think that's a fairly high bar.
> If a GHW meta grantmaker provides startup funding to a new charity, and as a result that charity ends up diverting $1MM a year from ~ineffective charities to ~0.5X GiveWell work, the value is equivalent to donating ~$500K/year to a GiveWell top charity.
I don't think this reasoning checks out. GiveWell interventions also get lots of money from non-EA sources (e.g., AMF). It might be the case that top GiveWell charities are unusually hard to fundraise for from non-EA sources relative to 98th-percentile charities, but I'm not sure why that would be, and a 98th-percentile intervention could end up being much less cost-effective in real terms.
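One way to make this objection concrete is to extend the earlier sketch so the displaced donations get some counterfactual credit rather than ~zero; the 20% share below is purely my own illustrative assumption:

```python
# The same diversion calculation, but crediting the displaced donations
# with some counterfactual effectiveness instead of ~zero.
# The 20% figure is an illustrative assumption, not from the thread.

diverted_per_year = 1_000_000
effectiveness = 0.5

# Suppose 20% of the diverted money would have reached GiveWell-level
# work (1.0x) anyway, and the rest would have gone to ~0x charities.
counterfactual = 0.2 * 1.0 + 0.8 * 0.0

net = diverted_per_year * (effectiveness - counterfactual)
print(f"Net GiveWell-equivalent value: ${net:,.0f}/year")
# -> Net GiveWell-equivalent value: $300,000/year
```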
I'm a grant writer and fundraiser by trade, but in the past I haven't provided services to any charities that were affiliated with EA or met GiveWell's effectiveness standards. They're mostly the typical single-cause, single-location organizations run by people who really mean well but are operating on emotion or "faith" alone. These are good people who just aren't used to looking through an effectiveness lens, even one based on much more conventional program evaluation methods.
There's only so much I can do as an independent worker in this field, but I do like the idea of selecting those 98th-percentile orgs you mentioned, and I'm intrigued by the approach of applying a small amount of EA money to them (epistemic status: uncertain, ~40%).
My concern is that such organizations would be only tangentially aligned with EA values, so EA Infrastructure would essentially be funding organizations with very different values, which I don't think matches EA's core vision.
Of course, I’m still new to the movement, so I don’t really feel all that comfortable speaking definitively about this.