Thanks for writing this up! One problem with this proposal that I didn't see flagged (but may have missed) is that if the ETG donors defer to the megadonors, you don't actually get a diversified donor base. I earn enough to be a mid-sized donor, but I would be somewhat hesitant about funding an org that I know OpenPhil has passed up on or decided to stop funding, unless I understood the reasons why and felt comfortable disagreeing with them. This is both because of fear of the unilateralist's curse and downside risks, and because I broadly expect them to have spent more time than me and thought harder about the problem. I think there are a bunch of ways this is bad reasoning (grantmaker time is scarce, and they may pass up on a bunch of good grants due to lack of time or information, or just noise), but it would definitely give me pause.
If I were giving specifically within technical AI Safety (my area of expertise), I’d feel this less strongly, but still feel it a bit, and I imagine most mid-sized donors wouldn’t have expertise in any EA cause area.
OP doesn’t have the capacity to evaluate everything, so there are things they don’t fund that are still quite good.
Also, OP seems to prefer to evaluate things that have a track record, so taking bets on people so they can build more of a track record and then apply to OP would be pretty helpful.
I also think orgs generally should have donor diversity and more independence, so giving more funding to the orgs that OP funds is sometimes good.
Maybe there should be some way for OP to publicize what they don’t evaluate, so others can avoid the adverse selection.
IMO, the capacity and track-record points both seem like reasons for more people to work at OP on technical grantmaking, more than reasons for Neel to work part-time on grantmaking with his money.
Why not both? I assume OP is fixing their capacity issues as fast as they can, but there will still be capacity issues remaining. IMO Neel would still add something here that is worth his marginal time, especially given his significant involvement, expertise, and networks.
The underlying claim is that many people with technical expertise should do part time grant making?
This seems possible to me, but a bit unlikely.
I think it's worth considering. My guess is that doing so would not necessarily be very time consuming. It could also be interesting for them to pool donations to limit the number of people who need to do it, form a giving circle, or donate to a fund (e.g., EA Funds).
I'd be curious to hear more about the donor diversity point. Naively, if I'm funding an org, and then OpenPhil stops funding that org, that's a fairly strong signal to me that I should also stop funding it, knowing nothing more (since it implies OpenPhil put in enough effort to evaluate the org, and decided to deviate from the path of least resistance).
Agreed re funding things without a track record; that seems clearly good for small donors to do, e.g. funding people to do independent research or start a small new research group, if you believe they're promising.
I've found that if a funder or donor asks (and they are known in the community), most funders are happy to privately respond about whether they decided against funding someone, and often why, or at least that they think it is not a good idea and are opposed, rather than just not interested.
Thanks Neel, I get the issue in general, but I'm a bit confused about what exactly the crux is here for you.
I would have thought you would be in one of the best positions of anyone to donate to an AI org: you are fully immersed in the field, and I would have thought in a good position to fund things you think are promising on the margins, perhaps even new and exciting things that AI funds may miss?
Out of interest, why aren't you giving a decent chunk away at the moment? Feel free not to answer if you aren't comfortable with it!
I'm in a relatively similar position to Neel. I think technical AI safety grantmakers typically know way more than me about what is promising to fund. There is a bunch of non-technical info that is very relevant to whether a grant is good (what do current marginal grants look like, what are the downside risks, is there private info on the situation that makes things seem sketchier, etc.), and grantmakers are generally in a better position than I am to evaluate this stuff.
The limiting factor [in technical AI safety funding] is having enough technical grantmakers, not having enough organizational diversity among grantmakers (at least at current margins).
If OpenPhil felt more saturated on technical AI grantmakers, then I would feel like starting new orgs pursuing different funding strategies for technical AI safety could look considerably better than just having more people work on grantmaking at OpenPhil.
That said, note that I tend to agree to a reasonable extent with the technical takes at OpenPhil on AI safety. If I heavily disagreed, I might think starting new orgs looks pretty good.
I disagree with this (except the unilateralist curse point), because I suspect something like the efficient market hypothesis plays out when you have many medium-small donors. I think it's suspect that one wouldn't make the same argument as the above for the for-profit economy.
I disagree, because you can't short a charity, so there's no way for overhyped charity "prices" to go down.
My claim is that your intuitions are the opposite of what they would be if applied to the for-profit economy. Your response (if I understand correctly) is questioning the aptness of the analogy, which seems not to really get at the heart of the efficient market heuristic. I.e., you haven't claimed that bigger donors are more likely to be efficient, you've just claimed that efficiency in charitable markets is generally unlikely?
Besides this, shorting isn't the only way markets regulate (or deflate) prices. "Selling" is the more common pathway. In this context, "selling" would be medium donors changing their donation to a more neglected/effective charity. It could be argued that this is more likely to happen under a dynamic donation "marketplace" with lots of medium donors than in a less dynamic "marketplace" with fewer but bigger donors.
Ah, gotcha. If I understand correctly, you're arguing for more of a "wisdom of the crowds" analogy: that having many donors is better than having a few donors.
If so, I agree with that, but think the major disanalogy is that the big donors are professionals, with more time, experience, and context, while small donors are not: in the efficient market analogy, big donors are more like hedge funds and small donors are more like retail investors.
Thanks for pointing that out, Neel. It is also worth having in mind that GWWC's donations are concentrated in a few dozen donors:
Less than 1% of our [GWWC] donors account for 50% of our recorded donations.
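(As a purely illustrative aside from me, not anything GWWC has published: a heavy-tailed model makes that figure easy to reproduce. Under a Pareto distribution, a hypothetical tail index of roughly 1.18, chosen to fit, already gives the top 1% of donors about half of all donations. The short Python sketch below, with assumed parameters, checks this by simulation and with the closed-form top-share formula.)

```python
# Hypothetical, minimal sketch (my illustration, not GWWC data or a GWWC model):
# a Pareto donor distribution whose top 1% of donors give roughly 50% of the total.
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.18          # assumed tail index, chosen so the top 1% give ~50%
n = 100_000           # hypothetical number of donors

# Pareto(alpha) samples via the inverse CDF, with the minimum donation scaled to 1.
donations = rng.random(n) ** (-1 / alpha)
donations.sort()

top_share = donations[-n // 100:].sum() / donations.sum()
print(f"Simulated share given by the top 1% of donors: {top_share:.0%}")  # roughly 50%, noisy across seeds

# Closed form for a Pareto distribution: the top fraction p gives p**(1 - 1/alpha) of the total.
print(f"Closed-form share for the top 1%: {0.01 ** (1 - 1 / alpha):.0%}")  # ~50%
```

If a model like this is even roughly right, it illustrates why it is so hard to avoid organisations being mostly supported by a few big donors.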
Given the donations per donor are so heavy-tailed, it is very hard to avoid organisations being mostly supported by a few big donors. In addition, GWWC recommends donating to funds for most people:
For most people, we recommend donating through an expert-led fund that is focused on effectiveness.
I agree with this recommendation. Personally, I have engaged with EA-related matters for a significant amount of time, but continue to donate to the Long-Term Future Fund (LTFF) because I do not have a good grasp of which opportunities are best within AI safety, even though I have opinions about which cause areas are more pressing (I also rate animal welfare quite highly).
I am more positive about people working on cause area A deciding which interventions are most effective within A (e.g. you donating to AI safety interventions). However, people earning to give may well not be familiar with any cause area, and it is unclear whether the opportunity cost of getting quite familiar would be worth it, so I think it makes sense to defer.
On the other hand, I believe it is important for donors to push funds to be more transparent about their evaluation process. One way to do this is donating to more transparent funds, but another is donating directly to organisations.
Yeah, I think there is an open question of whether or not this would cause a decline in the impact of what's funded, and this reason is one of the better cases for why it would.
I think one potential middle-ground solution to this is having something like 5x as many EA Funds-type vehicles, with more grantmakers representing more perspectives, approaches, etc., and those funds funded by a more diverse donor base, so that you still have high-quality vetting of opportunities, but also grantmaking bodies that are responsive to the community, and some level of donor diversity possible for organizations.
Yeah, that intermediate world sounds great to me! (though a lot of effort, alas)