I agree with the main point here, and I think it's a good one, but the headline's use of the present tense is confusing: it implies to me that they are currently doing a good job in their capacity as a donor.
There was a period around 2016-18 when I took this idea very seriously.
This led to roughly a year's worth of effort spent seeking funds from sources who didn't understand why tackling EA issues was so important. This was mostly a waste of my time and theirs.
The formula isn’t just:
Impact of taking money from a high-impact funder = impact you achieve minus impact achieved by what they would have funded otherwise
Instead it's:
Impact of taking money from a high-impact funder = impact you achieve minus impact achieved by what they would have funded otherwise plus the amount of extra work you get done by not having to spend time seeking funding
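To make the comparison concrete, here's a minimal sketch with entirely made-up numbers (none of these figures come from the post; they're only there to show how the extra term changes the answer):

```python
# Hypothetical figures, purely for illustration.
impact_you_achieve = 100      # impact of your project
funder_counterfactual = 80    # impact of the funder's next-best grant
time_saved_value = 30         # extra work done because you skipped a hard fundraising round

# Naive formula: ignores the fundraising time you save by taking "easy" EA money.
naive_net_impact = impact_you_achieve - funder_counterfactual              # 20

# Fuller formula from the comment above: add back the work you get done
# by not having to spend time seeking funding elsewhere.
net_impact = impact_you_achieve - funder_counterfactual + time_saved_value  # 50

print(naive_net_impact, net_impact)
```

With these (invented) numbers, the funder's counterfactual eats most of the naive value, but the fundraising time saved more than doubles the net impact of taking the money.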
Between the Scylla of having a misleading headline and the Charybdis of typically bland EA-speak, I might want to steer the title to something like “EA dollars are far more expensive than non-EA dollars.” Also, I suspect that accepting donations from ISIS in real life would not go down well because all your bank accounts would instantly be frozen and their assets confiscated. :P
But I think this is a strong point that can be applied in many ways, and your post does a really great job of clarifying what’s previously been a vague idea for me. Some excited thoughts:
This really is a big effect! Money going to Givewell top charities is hitting the “1000x bar”; the average random dollar of spending is, almost by definition, around 1x. Like you say, the source of funding & manpower matters just as much.
As a new organization matures, perhaps ideally it should try to pivot away from EA funding over time. Maybe Charity Entrepreneurship’s incubation program should help equip their newly-launched charities with an aspirational “escape plan” for slowly shifting their funding model towards other sources.
I’m an aerospace engineer. When I get around to publishing various thoughts about the (little-discussed) potential intersections of EA and engineering fields, I should try to write them with an audience of engineers in mind and promote them in engineering communities, rather than writing primarily for EAs. It would be great if I could nudge some engineers to use EA-style thinking and choose marginally more socially beneficial engineering projects when making career decisions. But I definitely don’t want to peel people off of crucial AI safety research to work on even relatively-high-impact space industry projects.
Potential counterpoints:
Staying connected to the EA movement is a good way to keep organizations on track, rather than getting pulled in the directions of their alternative donors and ambient incentives. (This might relate to the many discussions about whether it’s good for EA to continue being a centralized, big-tent movement with lots of cause areas, or whether the cause areas would be better off going their separate ways.)
EA is currently a fast-growing movement, and much of that growth has happened thanks to the movement’s unique ideas summoning new funding and new workers who wouldn’t have been motivated by other, less rigorous forms of charity. This positive-sum nature might complicate attempts to analyze EA from a perspective of substitutability and crowding-out effects. Better for the EA movement to be out there growing and converting people to do the most total good, rather than taking its current size as fixed and focusing on optimizing the ratio of good done per resource spent.
This idea might interact with the “talent vs funding constraint” debate in a complex way. If we are flush with cash but short on people, we could correct for that by bankrolling external EA-adjacent organizations (places like the progress studies movement or the Union of Concerned Scientists), thereby conserving EA-manhours. If the situation were reversed, we should focus more on 80K-style career advice, hoping to place people in careers where they can do EA work on somebody else’s dime.
Psychologically, perhaps it is not good to freak people out even more about potentially “wasting EA dollars/manhours” in what is already such an uncertain and hits-based business where we are trying to encourage people to be ambitious and take shots-on-goal. So it would be good to work out the consequences of the ideas here and give people a full accounting, lest a little knowledge lead people astray. (For comparison, consider the discussions on “replaceability” in career searches—early on, people thought that replaceability meant that your counterfactual impact was much lower than the naïve estimate of taking 100% credit for all the work you get done at your job. But subsequent more complex thought argued that actually, when you add up all the secondary effects, you get back to being able to take ~100% credit.)
Do we have a name for this effect, where interventions that demand more value-aligned people/dollars/etc are more expensive to the movement? To some extent this overlaps with how people talk about “leverage” and “effectiveness”, but it’s definitely a distinct idea. Perhaps “EA intensiveness” would work?
Downvoted for clickbait.
Likewise, but only weakly. The discussion in the comments seems good and the point (less provocatively expressed) is worthwhile.
An under-appreciated benefit of OpenPhil as a funder: they are much less likely than ISIS to use the information thus gained to murder me.
This is a nice idea. There’ll be a tradeoff because the less EA-aligned a source of funds is, the harder it will likely be to convince them to change. For example, the probability of getting ISIS to donate to Givewell is practically zero, so it’s likely better to target philanthropists who mean well but haven’t heard of EA. So the measure to pay attention to is [(marginal impact of EA charity) - (marginal impact of alternative use of funds)] * [probability of success for given fundraising effort]. This measure, or some more sophisticated version, should be equalised across potential funding sources, to maximise impact.
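A rough sketch of what that measure could look like, with invented source names and numbers (all of the values below are assumptions for illustration, not estimates from the thread):

```python
# Hypothetical fundraising targets: impact gap per dollar moved and probability
# that a given fundraising effort succeeds. All values are made up.
sources = {
    "philanthropist who means well but hasn't heard of EA": {"impact_gap": 900, "p_success": 0.10},
    "foundation with an existing giving programme": {"impact_gap": 500, "p_success": 0.25},
    "ISIS": {"impact_gap": 1000, "p_success": 0.0},  # practically zero chance of success
}

# Expected value of a unit of fundraising effort aimed at each source:
# (marginal impact of EA charity - marginal impact of alternative use of funds)
# * probability of success for that fundraising effort.
for name, s in sources.items():
    expected_value = s["impact_gap"] * s["p_success"]
    print(f"{name}: {expected_value:.0f}")

# To maximise impact you'd keep shifting effort toward whichever source has the
# highest expected value, until the marginal expected values are roughly
# equalised across sources.
```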