I agree with the main point here, and I think it's a good one, but the headline's use of present tense is confusing, and implies to me that they are currently doing a good job in their capacity as a donor.
There was a period around 2016-18 when I took this idea very seriously.
This led to probably around 1 year's worth of effort spent on seeking funds from sources who didn't understand why tackling EA issues was so important. This was mostly a waste of my time and theirs.
The formula isn't just:
Impact of taking money from a high-impact funder = impact you achieve minus impact achieved by what they would have funded otherwise
Instead it's:
Impact of taking money from a high-impact funder = impact you achieve minus impact achieved by what they would have funded otherwise plus the amount of extra work you get done by not having to spend time seeking funding
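To see how much that extra term can matter, here's a minimal sketch in Python. Every number in it is hypothetical, picked only to show that the time saved can flip the naive comparison:

```python
# Toy version of the corrected formula above. All figures are hypothetical
# "impact units", chosen only to show how the time term can flip the answer.

def impact_of_taking_money(your_impact, funder_counterfactual, time_saved):
    """Impact you achieve, minus what the funder's money would have achieved
    otherwise, plus the extra work done by not having to spend time fundraising."""
    return your_impact - funder_counterfactual + time_saved

# EA funder: their money has a high counterfactual impact, but applying is
# quick, so you keep roughly a year of your own work (valued here at 90 units).
ea_funder = impact_of_taking_money(your_impact=100, funder_counterfactual=80,
                                   time_saved=90)

# Outside funder: their money would otherwise do little, but pitching them
# eats the fundraising time, so there is no time bonus.
outside_funder = impact_of_taking_money(your_impact=100, funder_counterfactual=5,
                                        time_saved=0)

print(ea_funder, outside_funder)  # 110 vs 95: the naive comparison flips
```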
Between the Scylla of having a misleading headline and the Charybdis of typically bland EA-speak, I might want to steer the title to something like "EA dollars are far more expensive than non-EA dollars." Also, I suspect that accepting donations from ISIS in real life would not go down well because all your bank accounts would instantly be frozen and their assets confiscated. :P
But I think this is a strong point that can be applied in many ways, and your post does a really great job of clarifying what's previously been a vague idea for me. Some excited thoughts:
This really is a big effect! Money going to Givewell top charities is hitting the "1000x bar"; the average random dollar of spending is by definition probably around 1x. Like you say, the source of funding & manpower matters just as much.
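As a rough back-of-envelope sketch of that point (the 1000x and 1x multipliers are the figures above; the counterfactual assumptions are mine):

```python
# Net good of moving one dollar to a ~1000x intervention, depending on where
# the dollar came from. Counterfactuals below are illustrative assumptions.
INTERVENTION_MULTIPLIER = 1000  # Givewell-top-charity level

SOURCE_COUNTERFACTUAL = {
    "EA-aligned dollar (would fund ~1000x work anyway)": 1000,
    "average random dollar (~1x baseline)": 1,
}

for source, would_have_done in SOURCE_COUNTERFACTUAL.items():
    net_good = INTERVENTION_MULTIPLIER - would_have_done
    print(f"{source}: net good = {net_good}")
# EA-aligned dollar: 0   -> redirecting it gains nothing
# average dollar:    999 -> almost the full multiplier is counterfactual gain
```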
As a new organization matures, perhaps ideally it should try to pivot away from EA funding over time. Maybe Charity Entrepreneurship's incubation program should help equip their newly-launched charities with an aspirational "escape plan" for slowly shifting their funding model towards other sources.
I'm an aerospace engineer. When I get around to publishing various thoughts about the (little-discussed) potential intersections of EA and engineering fields, I should try to write them with an audience of engineers in mind and promote them in engineering communities, rather than writing primarily for EAs. It would be great if I could nudge some engineers to use EA-style thinking and choose marginally more socially beneficial engineering projects when making career decisions. But I definitely don't want to peel people off of crucial AI safety research to work on even relatively-high-impact space industry projects.
Potential counterpoints:
Staying connected to the EA movement is a good way to keep organizations on track, rather than getting pulled in the directions of their alternative donors and ambient incentives. (This might relate to the many discussions about whether it's good for EA to continue being a centralized, big-tent movement with lots of cause areas, or whether the cause areas would be better off going their separate ways.)
EA is currently a fast-growing movement, and much of that growth has happened thanks to the movement's unique ideas summoning new funding and new workers who wouldn't have been motivated by other, less rigorous forms of charity. This positive-sum nature might complicate attempts to analyze EA from a perspective of substitutability and crowding-out effects. Better for the EA movement to be out there growing and converting people to do the most total good, rather than taking its current size as fixed and focusing on optimizing the ratio of good done per resource spent.
This idea might interact with the "talent vs funding constraint" debate in a complex way. If we are flush with cash but short on people, we could correct for that by bankrolling external EA-adjacent organizations (places like the progress studies movement or the Union of Concerned Scientists), thereby conserving EA-manhours. If the situation were reversed, we should focus more on 80K-style career advice, hoping to place people in careers where they can do EA work on somebody else's dime.
Psychologically, perhaps it is not good to freak people out even more about potentially "wasting EA dollars/manhours" in what is already such an uncertain and hits-based business where we are trying to encourage people to be ambitious and take shots-on-goal. So it would be good to work out the consequences of the ideas here and give people a full accounting, lest a little knowledge lead people astray. (For comparison, consider the discussions on "replaceability" in career searches: early on, people thought that replaceability meant that your counterfactual impact was much lower than the naïve estimate of taking 100% credit for all the work you get done at your job. But subsequent, more detailed analysis argued that actually, when you add up all the secondary effects, you get back to being able to take ~100% credit.)
Do we have a name for this effect, where interventions that demand more value-aligned people/dollars/etc. are more expensive to the movement? To some extent this overlaps with how people talk about "leverage" and "effectiveness", but it's definitely a distinct idea. Perhaps "EA intensiveness" would work?
This is a nice idea. There'll be a tradeoff because the less EA-aligned a source of funds is, the harder it is likely to be to convince them to change. For example, the probability of getting ISIS to donate to Givewell is practically zero, so it's likely better to target philanthropists who mean well but haven't heard of EA. So the measure to pay attention to is [(marginal impact of EA charity) - (marginal impact of alternative use of funds)] * [probability of success for given fundraising effort]. This measure, or some more sophisticated version, should be equalised across potential funding sources, to maximise impact.
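A minimal sketch of how that measure would rank fundraising targets, assuming some made-up numbers (including a negative "alternative use" for ISIS, since their spending is actively harmful):

```python
# Expected value of a fundraising effort, per the measure above:
# (marginal impact of EA charity - marginal impact of alternative use)
# * probability of success. All numbers are made up for illustration.

def fundraising_value(ea_impact, alternative_impact, p_success):
    return (ea_impact - alternative_impact) * p_success

candidates = {
    "well-meaning non-EA philanthropist": fundraising_value(1000, 10, 0.05),
    "already-EA-aligned funder": fundraising_value(1000, 950, 0.50),
    "ISIS": fundraising_value(1000, -500, 1e-9),
}

for name, value in sorted(candidates.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {value:.2f}")
# philanthropist ~49.5, aligned funder 25.0, ISIS ~0: effort should flow to
# whichever source currently scores highest, until the measure equalises.
```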
Downvoted for clickbait.
Likewise, but only weakly. The discussion in the comments seems good and the point (less provocatively expressed) is worthwhile.
An under-appreciated benefit of OpenPhil as a funder: they are much less likely than ISIS to use the information thus gained to murder me.