But if you were to donate $1,000 to CHAI, then either:
1. You expand CHAI’s available funding by $1,000. The cost-effectiveness of this grant should be basically the same as the final $1,000 that Open Philanthropy donated.
2. Or Open Philanthropy donates $1,000 less to CHAI in their next funding round. In this case you’ve been ‘funged’ by Open Philanthropy. But then that means that Open Philanthropy has an additional $1,000 which they can grant somewhere else within their longtermist worldview bucket.
In reality, some combination of the two probably happens. But either way, the effectiveness of your donation is about the same as marginal donations made by Open Philanthropy.
I am much more pessimistic about both cases.
In case 1, room-for-more-funding estimates likely mark thresholds beyond which returns diminish fairly steeply, or even barriers to expansion, e.g. the org doesn’t expect to find another individual worth hiring or nearly as good as their last hire, or to have the capacity to manage them. If Open Phil is aiming to make sure this threshold is always met or slightly overshot (which they should do, but might not be doing; I don’t know; they could also follow up with orgs earlier to fill in missing gaps), then additional funding will have much worse returns, or will just roll into future expenses which could have been paid for with future grants or donations, but those will just displace more future grants/donations, and so on. In the case where Open Phil overshot, that would speak poorly for the cost-effectiveness of the last dollar, so we shouldn’t be too happy about matching it.
In case 2, Open Phil might not find anywhere good enough to grant that extra $1,000, or it won’t otherwise be used soon and will just offset their own or others’ future donations/grants. There is likely a reasonably large gap in marginal cost-effectiveness between the organizations they grant to and those they looked into but didn’t grant to (likely smallest in global health and development, at around 5x-10x, given the size of the field and the fact that they aren’t filling GiveDirectly’s funding gap), since otherwise they would have made more grants. The fact that Open Phil has been granting <2% of its endowment yearly, despite aiming to spend it all within the founders’ lifetimes, is a sign that they would not find additional similarly cost-effective opportunities. It’s not for lack of trying, and it’s not that they don’t have the money to hire more researchers, either. The reason Open Phil isn’t granting to more organizations is likely that there’s a big gap in expected cost-effectiveness between its last grants and the next best ones it decided not to make.
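As a rough back-of-the-envelope check on the spend-down point (the investment return is my own illustrative assumption, not Open Phil’s actual figure):

```python
# Toy model: an endowment granting ~2% per year while also earning
# investment returns. All numbers are illustrative, not Open Phil's.
endowment = 1.0          # normalized starting endowment
grant_rate = 0.02        # ~2% granted per year (figure from the comment above)
return_rate = 0.05       # assumed 5% annual investment return (my assumption)

for year in range(40):   # roughly a founder's remaining lifetime
    endowment *= (1 + return_rate) * (1 - grant_rate)

# After 40 years the endowment has roughly tripled rather than been spent
# down: (1.05 * 0.98)^40 is about 3.1x the starting value.
```

So a ~2% grant rate is nowhere near a deliberate spend-down pace, which is consistent with similarly cost-effective opportunities being scarce at the margin.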
At some point, if an organization is sufficiently funded without Open Phil, Open Phil might spend less time evaluating it and more time evaluating others, but this seems unlikely, given that Open Phil’s grants usually make up most of EA charities’ budgets.
There are no sharp cutoffs, just gradually diminishing returns.
An org can pretty much always find a way to spend 1% more money and have a bit more impact. And even if an individual org appears to have a sharp cutoff, we should really be thinking about the margin across the whole community, which will be smooth. Since the total donated per year is ~$400m, adding $1,000 to that will be about as effective as the last $1,000 donated.
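As a toy sketch of this smooth-margin claim (the logarithmic curve is an arbitrary illustrative choice, not an estimate of the community’s actual returns):

```python
import math

def total_impact(funding):
    # Toy diminishing-returns curve for the whole community:
    # impact grows with the log of total funding (illustrative only).
    return math.log(funding)

def marginal_effectiveness(funding, delta=1_000):
    # Impact per dollar of the next `delta` dollars at this funding level.
    return (total_impact(funding + delta) - total_impact(funding)) / delta

total = 400_000_000                           # ~$400m donated per year
last = marginal_effectiveness(total - 1_000)  # the last $1,000 donated
extra = marginal_effectiveness(total)         # an additional $1,000 on top

# On a smooth curve the two differ by roughly delta/total, about 2.5e-6,
# i.e. negligibly.
```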
You seem to be suggesting that Open Phil might be overfunding orgs so that their marginal dollars are not actually effective.
But Open Phil believes it can spend marginal dollars at ~7x GiveDirectly.
I think what’s happening is that Open Phil is taking up opportunities down to ~7x GiveDirectly, and so if small donors top up those orgs, those extra donations will be basically as effective as 7x GiveDirectly (in practice negligibly lower).
There are no sharp cutoffs, just gradually diminishing returns.
An org can pretty much always find a way to spend 1% more money and have a bit more impact.
The marginal impact can be much smaller, but this depends on the particulars. I think hiring is the most important example, especially where salaries make up almost all of an organization’s costs. Suppose a research organization has already hired everyone they thought was worth hiring at all (whether limited by current management capacity, by whether a hire produces more value than they cost in salary and managers’ time, by whether they would push the org in a worse direction, etc.). The difference in value between their last hire and their next potential hire could also be large. How would they spend an extra 1% similarly cost-effectively? I think you should expect a big drop in marginal cost-effectiveness here.
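A toy illustration of that kind of drop (the ratios and the hiring bar are all made-up numbers):

```python
# Hypothetical org whose budget is almost all salaries. It has hired every
# candidate above its bar, so an extra 1% of budget can only go to the
# next-best candidate, whose impact per dollar is far lower.
salary = 100_000
candidate_ratios = [10.0, 8.0, 7.5, 7.0, 1.5, 1.2]  # best first, made up

hiring_bar = 7.0  # hire everyone at or above ~7x (echoing the thread's figure)
hired = [r for r in candidate_ratios if r >= hiring_bar]
budget = salary * len(hired)

avg = sum(hired) / len(hired)            # average over the funded budget: 8.125
marginal = candidate_ratios[len(hired)]  # next-best candidate: 1.5, a steep drop
```

The point of the sketch: the *average* cost-effectiveness of the funded budget can stay high while the *marginal* dollar, which can only buy the next-best hire, drops off a cliff rather than declining smoothly.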
Maybe in many cases there are part-time workers from whom you can get more hours by paying them more.
And even if an individual org appears to have a sharp cutoff, we should really be thinking about the margin across the whole community, which will be smooth. Since the total donated per year is ~$400m, adding $1,000 to that will be about as effective as the last $1,000 donated.
I think my hiring example could generalize to cause areas where the output is primarily research and the costs are primarily salaries. E.g., everyone we’d identify as doing more good than harm in expectation in AI safety research could already be funded (although maybe they could continue to use more compute cost-effectively?). The same could be true for grantmakers. Or maybe we can always hire more people who aren’t counterproductive in expectation, the drop in cost-effectiveness is just steep, and that’s fine since the stakes are astronomical.
You seem to be suggesting that Open Phil might be overfunding orgs so that their marginal dollars are not actually effective.
But Open Phil believes it can spend marginal dollars at ~7x GiveDirectly.
I think what’s happening is that Open Phil is taking up opportunities down to ~7x GiveDirectly, and so if small donors top up those orgs, those extra donations will be basically as effective as 7x GiveDirectly (in practice negligibly lower).
I agree with this for global health and poverty, but I expect the drop in cost-effectiveness to be much worse in the other big EA cause areas and especially in organizations where the vast majority of spending is on salaries.