Hmm, I don’t think I am super sure what a good answer to this would look like. Here are some common reasons for why I think a grant was not a good idea to recommend:
The plan seemed good, but I had no way of assessing the applicant without investing significant amounts of time that I did not have available (which is likely why you see a skew towards people the granting team had some past interactions with in the grants above)
The mainline outcome of the grant was good, but there were potential negative consequences that the applicant did not consider or properly account for, and I did not feel I could get the applicant to understand the downside risks they would need to account for without investing significant effort and time
The grant was only tenuously EA-related and seemed to have been submitted to a lot of funders relatively indiscriminately
I was unable to understand the goals, implementation or other details of the grant
I simply expected the proposed plan to not work, for a large variety of reasons. Here are some of the most frequent:
The grant was trying to achieve something highly ambitious while seeming to allocate very few resources to achieving that outcome
The grantee had a track record of work that I did not consider to be of sufficient quality to achieve what they set out to do
In some cases the applicant asked for less than our minimum grant amount of $10,000
Thanks for the transparent answers.
This in particular strikes me as understandable but very unfortunate. I’d strongly prefer a fund where happening to live near or otherwise know a grantmaker is not a key part of getting a grant. Are there any plans or any way progress can be made on this issue?
This also strikes me as unfortunate and may lead to inefficiently inflated grant requests in the future, though I guess I can understand why the logistics behind this may require it. It feels intuitively weird though that it is easier to get $10K than it is to get $1K.
I personally have never interacted directly with the grantees of about 6 of the 14 grants that I have written up, so it is not really about knowing the grantmakers in person. What does matter a lot are the second-degree connections I have to those people (and that someone on the team had for the large majority of applications), as well as whether the grantees had participated in some of the public discussions we’ve had over the past years and demonstrated good judgement (e.g. EA Forum & LessWrong discussions).
I don’t think you should model the situation as relying on knowing a grantmaker in-person, but you should think that testimonials and referrals from people that the grantmakers trust matter a good amount. That trust can be built via a variety of indirect ways, some of which are about knowing them in person and having a trust relationship that has been built via personal contact, but a lot of the time that trust comes from the connecting person having made a variety of publicly visible good judgements.
As an example, one applicant came with a referral from Tyler Cowen. I have only interacted directly with Tyler once in an email chain around EA Global 2015, but he has written up a lot of valuable thoughts online and seems to have generally demonstrated broadly good judgement (including in the granting domain with his Emergent Ventures project). This made his endorsement factor positively into my assessment for that application. (Though because I don’t know Tyler that well, I wasn’t sure how easily he would give out referrals like this, which reduced the weight that referral had in my mind)
The word interact above is meant in a very broad way, which includes second degree social connections as well as online interactions and observing the grantee to have demonstrated good judgement in some public setting. In the absence of any of that, it’s often very hard to get a good sense of the competence of an applicant.
A rough Fermi estimate I made a few days ago suggests that each grant we make comes with about $2,000 of overhead from CEA, in terms of labor cost plus some other risks (this is my own number, not CEA’s estimate). So given that overhead, it makes some amount of sense that it’s hard to get $1k grants.
Wow! This is an order of magnitude larger than I expected. What’s the source of the overhead here?
Here is my rough fermi:
My guess is that there is about one full-time person working on the logistics of EA Grants, together with about half of another person’s time lost to overhead, communications, technology (the EA Funds platform), and the need to manage them.
Since people’s competence is generally high, I estimated the counterfactual earnings of that person at around $150k, with an additional salary from CEA of $60k that is presumably taxed at around 30%, resulting in a total loss of money going to EA-aligned people of around ($150k + 0.3 * $60k) * 1.5 = $252k per year [Edit: Updated wrong calculation]. EA Funds has made less than 100 grants a year, so a total of about $2k - $3k per grant in overhead seems reasonable.
To be clear, this is average overhead. Presumably marginal overhead is smaller than average overhead, though I am not sure by how much. I randomly guessed it would be about 50%, resulting in something around $1k to $2k overhead.
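For readers who want to check the arithmetic, here is a minimal sketch of the Fermi estimate above in Python. The figures (counterfactual earnings, CEA salary, tax rate, headcount, grants per year, and the 50% marginal-vs-average guess) are the ones given in this comment; the structure of the calculation is my reading of it, not an official CEA estimate.

```python
# Minimal sketch of the per-grant overhead Fermi estimate described above.
counterfactual_earnings = 150_000   # estimated counterfactual earnings per person
cea_salary = 60_000                 # salary paid by CEA
tax_rate = 0.30                     # assumed tax rate on the CEA salary
fte = 1.5                           # ~1 full-time person + ~0.5 in overhead/management
grants_per_year = 100               # EA Funds makes fewer than ~100 grants a year

# "Total loss of money going to EA-aligned people", as framed in the comment:
# (counterfactual earnings + taxes lost on the CEA salary) scaled by FTE.
annual_labor_cost = (counterfactual_earnings + tax_rate * cea_salary) * fte

average_overhead_per_grant = annual_labor_cost / grants_per_year
marginal_overhead_per_grant = 0.5 * average_overhead_per_grant  # rough 50% guess

print(f"Annual labor cost:           ${annual_labor_cost:,.0f}")            # ~$252,000
print(f"Average overhead per grant:  ${average_overhead_per_grant:,.0f}")   # ~$2,500
print(f"Marginal overhead per grant: ${marginal_overhead_per_grant:,.0f}")  # ~$1,300
```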
If one person-year is 2000 hours, then that implies you’re valuing CEA staff time at about $85/hour. Your marginal cost estimate would then imply that a marginal grant takes about 12-24 person-hours to process, on average, all-in.
This still seems higher than I would expect given the overheads that I know about (going back and forth about bank details, moving money between banks, accounting, auditing the accounting, dealing with disbursement mistakes, managing the people doing all of the above). I’m sure there are other overheads that I don’t know about, but I’m curious if you (or someone from CEA) knows what they are?
[Not trying to imply that CEA is failing to optimize here or anything—I’m mostly curious plus have a professional interest in money transfer logistics—so feel free to ignore]
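As a quick check, the hourly-rate inference in the reply above works out as follows; this reuses the figures from the earlier sketch, and the 2,000-hour person-year is the assumption stated in the comment.

```python
# Back-of-the-envelope check of the hourly-rate inference above.
annual_labor_cost = 252_000          # total labor cost for ~1.5 FTE (from the sketch above)
hours_per_person_year = 2_000
fte = 1.5

hourly_rate = annual_labor_cost / (fte * hours_per_person_year)
print(f"Implied value of staff time: ~${hourly_rate:,.0f}/hour")  # ~$84/hour

for marginal_overhead in (1_000, 2_000):  # the $1k-$2k marginal overhead estimate
    hours = marginal_overhead / hourly_rate
    # ~12 and ~24 person-hours, matching the reply
    print(f"${marginal_overhead:,} of marginal overhead is about {hours:.0f} person-hours")
```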
I actually think the $10k grant threshold doesn’t make a lot of sense even if we assume the details of this “opportunity cost” perspective are correct. Grants should fulfill the following criterion:
“Benefit of making the grant” ≥ “Financial cost of grant” + “CEA’s opportunity cost from distributing a grant”
If we assume that there are large impact differences between different opportunities, as EAs generally do, a $5k grant could easily have a benefit worth $50k to the EA community, and therefore easily be worth the $2k of opportunity cost to CEA. (A potential justification of the $10k threshold could argue in terms of some sort of “market efficiency” of grantmaking opportunities, but I think this would only justify a rigid threshold of ~$2k.)
IMO, a more desirable solution would be to have the EA Fund committees factor in the opportunity cost of making a grant on a case-by-case basis, rather than having a rigid “$10k” rule. Since EA Fund committees generally consist of smart people, I think they’d be able to understand and implement this well.
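As a toy sketch of the criterion above: the function name and the $2k default are illustrative only, and the example numbers are the hypothetical ones from this comment.

```python
# Toy illustration of the grant criterion discussed above:
# fund a grant iff its benefit covers the grant amount plus the grantmaking opportunity cost.

def worth_funding(benefit: float, amount: float, opportunity_cost: float = 2_000) -> bool:
    """Return True if the estimated benefit covers the grant plus overhead."""
    return benefit >= amount + opportunity_cost

# Hypothetical $5k grant whose benefit is valued at $50k to the community:
print(worth_funding(benefit=50_000, amount=5_000))   # True: easily worth ~$2k of overhead
# A $5k grant whose benefit barely exceeds its face value would not clear the bar:
print(worth_funding(benefit=6_000, amount=5_000))    # False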
This sounds pretty sensible to me. On the other hand, if people are worried about it being harder for people who are already less plugged into networks to get funding, you might not want an additional dimension on which these harder-to-evaluate grants could lose out compared to easier-to-evaluate ones (where the latter end up having a lower minimum threshold).
It also might create quite a bit of extra overhead for grantmakers having to decide the opportunity cost case by case, which could reduce the number of grants they can make, or again push towards easier-to-evaluate ones.
I tend to think that the network constraints are better addressed by solutions other than ad-hoc fixes (such as more proactive investigations of grantees), though I agree it’s a concern and it updates me a bit towards this not being a good idea.
I wasn’t suggesting deciding the opportunity cost case by case. Instead, grant evaluators could assume a fixed cost of e.g. $2k. In terms of estimating the benefit of making the grant, I think they do that already to some extent by providing numerical ratings to grants (as Oliver explains here). Also, being aware of the $10k rule already creates a small amount of work. Overall, I think the additional amount of work seems negligibly small.
ETA: Setting a lower threshold would allow us to a) avoid turning down promising grants, and b) remove an incentive to ask for too much money. That seems pretty useful to me.
It’s not at all clear to me why the whole $150k of a counterfactual salary would be counted as a cost. The most reasonable (simple) model I can think of is something like: ($150k * .1 + $60k) * 1.5 = $112.5k where the $150k*.1 term is the amount of salary they might be expected to donate from some counterfactual role. This then gives you the total “EA dollars” that the positions cost whereas your model seems to combine “EA dollars” (CEA costs) and “personal dollars” (their total salary).
Hmm, I guess it depends a bit on how you view this.
If you model this in terms of “total financial resources going to EA-aligned people”, then the correct calculation is ($150k * 1.5) plus whatever CEA loses in taxes for 1.5 employees.
If you want to model it as “money controlled directly by EA institutions” then it’s closer to your number.
I think the first model makes more sense, which does still suggest a lower number than what I gave above, so I will update.
I don’t particularly want to try to resolve the disagreement here, but I’d think value per dollar is pretty different for dollars at EA institutions and for dollars with (many) EA-aligned people [1]. It seems like the whole filtering/selection process of granting is predicated on this assumption. Maybe you believe that people at CEA are the type of people that would make very good use of money regardless of their institutional affiliation?
[1] I’d expect it to vary from person to person depending on their alignment, commitment, competence, etc.
I think you have some math errors:
$150k * 1.5 + $60k = $285k rather than $295k
Presumably, this should be ($150k + $60k) * 1.5 = $315k ?
Ah, yes. The second one. Will update.
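To keep the competing figures in this sub-thread straight, here is a minimal sketch computing each of the totals mentioned above. The labels are mine; the formulas and numbers are the ones quoted in the comments.

```python
# The different cost models and corrections discussed in this sub-thread.
counterfactual = 150_000   # counterfactual earnings
cea_salary = 60_000        # salary paid by CEA
tax_rate = 0.30
donation_rate = 0.10       # share of counterfactual salary assumed donated
fte = 1.5

models = {
    # Original (mis-computed) version: counterfactual * FTE plus one salary
    "original posting": counterfactual * fte + cea_salary,                    # $285k
    # Suggested correction: scale both terms by the 1.5 FTE
    "corrected version": (counterfactual + cea_salary) * fte,                 # $315k
    # 'EA dollars' model: foregone donations plus the CEA salary
    "EA-dollars model": (counterfactual * donation_rate + cea_salary) * fte,  # $112.5k
    # Figure posted after the edit: counterfactual plus taxes lost on the salary
    "post-edit figure": (counterfactual + tax_rate * cea_salary) * fte,       # $252k
}
for name, total in models.items():
    print(f"{name:>17}: ${total:,.0f} per year")
```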
I agree this creates unfortunate incentives for EAs to burn resources living in high cost-of-living areas (perhaps even while doing independent research which could in theory be done from anywhere!). However, if I were a grantmaker, I can see why this arrangement would be preferable: Evaluating grants feels like work and costs emotional energy. Talking to people at parties feels like play and creates emotional energy. For many grantmakers, I imagine getting to know people in a casual environment is effectively costless, and re-using that knowledge in the service of grantmaking allows more grants to be made.
I suspect there’s low-hanging fruit in having the grantmaking team be geographically distributed. To my knowledge, at least 3 of these 4 grantmakers live in the Bay Area, which means they probably have a lot of overlap in their social network. If the goal is to select the minimum number of supernetworkers to cover as much of the EA social network as possible, I think you’d want each person to be located in a different geographic EA hub. (Perhaps you’d want supernetworkers covering disparate online communities devoted to EA as well.)
This also provides an interesting reframing of all the recent EA Hotel discussion: Instead of “Fund the EA Hotel”, maybe the key intervention is “Locate grantmakers in low cost-of-living locations. Where grant money goes, EAs will follow, and everyone can save on living expenses.” (BTW, the EA Hotel is actually a pretty good place to be if you’re an aspiring EA supernetworker. I met many more EAs during the 6 months I spent there than my previous 6 months in the Bay Area. There are always people passing through for brief stays.)
That is incorrect. The current grant team was actually explicitly chosen on the basis of having non-overlapping networks. Besides me, nobody lives in the Bay Area (at least full time). Here is where I think everyone is living:
Matt Fallshaw: Australia (but also travels a lot)
Helen Toner: Georgetown (I think)
Alex Zhu: No current permanent living location, travels a lot, might live in Boulder starting a few weeks from now
Matt Wage: New York
I was also partially chosen because I used to live in Europe and still have pretty strong connections to a lot of European communities (plus my work on online communities makes my network less geographically centralized).
Good to know!
Isn’t Matt in HK?
He sure was on weird timezones during our meetings, so I think he might be both? (as in, flying between the two places)
Update: I was just wrong, Matt is indeed primarily HK
Boy, there are two Matts in that list.
At least for me this doesn’t really resonate with how I am thinking about grantmaking. The broader EA/Rationality/LTF community is in significant chunks a professional network, and so I’ve worked with a lot of people on a lot of projects over the years. I’ve discussed cause prioritization questions on the EA Forum, worked with many people at CEA, tried to develop the art of human rationality on LessWrong, worked with people at CFAR, discussed many important big picture questions with people at FHI, etc.
The vast majority of my interactions with people do not come from parties, but come from settings where people are trying to solve some kind of problem, and seeing how others solve that problem is significant evidence about whether they can solve similar problems.
It’s not that I hang out with lots of people at parties, make lots of friends and then that is my primary source for evaluating grant candidates. I basically don’t really go to any parties (I actually tend to find them emotionally exhausting, and only go to parties if I have some concrete goal to achieve at one). Instead I work with a lot of people and try to solve problems with them and then that obviously gives me significant evidence about who is good at solving what kinds of problems.
I do find grant interviews more exhausting than other kinds of work, but I think that has to do with the directly adversarial setting, in which the applicant is trying their best to seem competent and good while I am trying my best to get an accurate judgement of their competence. I think that dynamic usually makes that kind of interview a much worse source of evidence of someone’s competence than having worked with them on some problem for a few hours (which is also why work-tests tend to be much better predictors of future job-performance than interview-performance).
I’m pretty concerned about this. I appreciate that there will always be reasonable limits to how long someone can spend vetting grant applications, but I think EA funds should not be hiring fund managers who don’t have sufficient time to vet applications from people they don’t already know—being able to do this should be a requirement of the job, IMO. Seconding Peter’s question below, I’d be keen to hear if there are any plans to make progress on this.
If you really don’t have time to vet applicants, then maybe grant decisions should be made blind, purely on the basis of the quality of the proposal. Another option would be to have a more structured/systematic approach to vetting applicants themselves, which could be anonymous-ish: based on past achievements and some answers to questions that seem relevant and important.
To be clear, we did invest time into vetting applications from people we didn’t know, we just obviously have limits to how much time we can invest. I expect this will be a limiting factor for any grant body.
My guess is that if you don’t have any information besides the application info, and the plan requires a significant level of skill (as the vast majority of grants do), you have to invest at least an additional 5, often 10, hours of effort into reaching out to them, performing interviews, getting testimonials, analyzing their case, etc. If you don’t do this, I expect the average grant to be net negative.
Our review period lasted about one month. At 100 applications, assuming that you create an anonymous review process, this would have resulted in around 250-500 hours of additional work, which would have made this the full-time job for 2-3 of the 5 people on the grant board, plus the already existing ~80 hours of overhead this grant round required from the board. You likely would have filtered out about 50 of them at an earlier stage, so you can maybe cut that in half, resulting in ~2 full-time staff for that review period.
I don’t think that level of time-investment is possible for the EA Funds, and if you make it a requirement for being on an EA Fund board, the quality of your grant decisions will go down drastically because there are very few people who have a track record of good judgement in this domain, who are not also holding other full-time jobs. That level of commitment would not be compatible with holding another full-time job, especially not in a leadership position.
I do think that at our current grant volume, we should invest more resources into building infrastructure for vetting grant applications. It might make sense for us to hire a part-time staff member to help with evaluations and do background research as well as interviews for us. It’s currently unclear to me how such a person would be managed and whether their salary would be worth the benefit, but it seems plausibly the correct choice.
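As a rough sketch of the staffing arithmetic in the review-period estimate above: the 250-500 hours and the ~80 hours of existing board overhead are the figures from the comment, while the 160-hour working month is an assumption of mine.

```python
# Rough check of the staffing implied by the review-period estimate above.
additional_vetting_hours = (250, 500)   # extra anonymous-review work estimated in the comment
existing_overhead_hours = 80            # overhead this grant round already required
working_hours_per_month = 160           # assumed: one person, one-month review period

for hours in additional_vetting_hours:
    total = hours + existing_overhead_hours
    print(f"{hours} extra vetting hours -> ~{hours / working_hours_per_month:.1f} "
          f"reviewers full-time for the month "
          f"(~{total / working_hours_per_month:.1f} including the existing overhead)")
```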
Thanks for your detailed response, Ollie. I appreciate there are tradeoffs here, but based on what you’ve said I do think that more time needs to be going into these grant reviews.
I don’t think it’s unreasonable to suggest that it should require 2 people full-time for a month to distribute nearly $1,000,000 in grant funding, especially if the aim is to find the most effective ways of doing good/influencing the long-term future (though I recognise that this decision isn’t your responsibility personally!). Maybe it is very difficult for CEA to find people with the relevant expertise who can do that job. But if that’s the case, then I think there’s a bigger problem (the job isn’t being paid well enough, or being valued highly enough by the community), and maybe we should question the case for EA Funds distributing so much money.
I strongly agree that I would like there to be more people who have the competencies and resources necessary to assess grants like this. With the Open Philanthropy Project having access to ~$10 billion, the case for needing more people with that expertise is pretty clear, and my current sense is that there is a broad consensus in EA that finding more people for those roles is among the top priorities, if not the top one.
I think giving less money to EA Funds would not clearly improve this situation from this perspective at all, since most other granting bodies that exist in the EA space have an even higher funds-distributed/staff ratio than this.
The Open Philanthropy Project has about 15-20 people assessing grants, gives out at least $100 million a year, and likely aims to give closer to $1 billion a year given their reserves.
BERI has maybe 2 people working full-time on grant assessment, and my current guess is that they give out about $5 million in grants a year.
My guess is that GiveWell also has about 10 staff assessing grants full-time, making grants of about $20 million.
Given the current level of team-member involvement, the significant judgement component involved in evaluating grants (which allows the average LTF-Fund team member to act with higher leverage), and the time that anyone involved in the LTF landscape has to invest to build models and keep up to speed with recent developments, I actually think that the LTF-Fund team is able to make more comprehensive grant assessments per dollar granted than almost any other granting body in the space.
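As a rough illustration, here are the funds-distributed-per-assessor ratios implied by the guesses above; all figures are the rough estimates quoted in this comment, not official numbers.

```python
# Rough funds-distributed-per-assessor figures implied by the guesses above.
bodies = {
    # name: (grants distributed per year in $, full-time grant assessors)
    "Open Philanthropy Project": (100_000_000, 17.5),  # "15-20 people", ">= $100M/year"
    "GiveWell":                  (20_000_000, 10),
    "BERI":                      (5_000_000, 2),
}
for name, (granted, staff) in bodies.items():
    print(f"{name:>26}: ~${granted / staff / 1e6:.1f}M distributed per full-time assessor")
```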
I do think that having more people who can assess grants and help distribute resources like this is key, and think that investing in training and recruiting those people should be one of the top priorities for the community at large.
Note that BERI has only existed for a little over 2 years, and their grant-making has been pretty lumpy, so I don’t think they’ve yet reached any equilibrium grant-making rate (one which could be believably expressed in terms of $X dollars / year).
I agree. Though I think I expect the funds-distributed/staff ratio to roughly stay the same, at least for a bit, and probably go up a bit.
I think older and larger organizations will have smaller funds-distributed/staff ratios, but I think that’s mostly because coordinating people is hard and the marginal productivity of a hire goes down a lot after the initial founders, so you need to hire a lot more people to produce the same quality of output.
I would be in favour of this fund using ~5% of its money to pay for staff costs, including a permanent secretariat. The secretariat would probably decrease pressure on grantmakers a little, and improve grant/feedback quality a little, which makes the costs seem worth it. (I know you’ve already considered this and I want to encourage it!)
I imagine the secretariat would:
-Handle the admin of opening and advertising a funding round
-Respond to many questions on the Forum, Facebook, and by email, and direct more difficult questions to the correct person
-Coordinate the writing of Forum posts like this
-Take notes on what additional information grantmakers would like from applicants, contact applicants with follow-up questions, and suggest iterations of the application form
-(potentially) Manage handover to new grantmakers when current members step down
-(potentially) Sift through applications and remove those which are obviously inappropriate for the Long Term Future Fund
-(potentially) Provide a couple of lines of fairly generic but private feedback for applicants
This strikes me as a great, concrete suggestion. As I tell a lot of people, great suggestions in EA only go somewhere if something is done with them. I would strongly encourage you to develop this suggestion into its own article on the EA Forum about how the EA Funds can be improved. Please let me know if you are interested in doing so, and I can help out. If you don’t think you’ll have time to develop this suggestion, please let me know, as I would be interested in doing that myself.
The way the management of the EA Funds is structured makes sense to me within the goals set for the EA Funds. So I think the situation in which it makes sense for 2 people to be paid full-time for one month to evaluate EA Funds applications is one where 2 of the 4 volunteer fund managers take a month off from their other positions to evaluate the applications. Finding 2 people out of the blue to evaluate applications for one month, without continuity with how the LTF Fund has been managed, seems like it’d be too difficult to effectively accomplish in the timeframe of a few months.
In general, one issue the EA Funds face that other granting bodies in EA don’t is that the donations come from many different donors. This consequently means that how much the EA Funds receive and distribute, and how it’s distributed, is much more complicated than what CEA or a similar organization typically faces.
Thanks for the care & attention you’re putting towards all of these replies!
Strong +1.
One issue with this is that the fund managers are unpaid volunteers who have other full-time jobs, so being a fund manager isn’t a “job” in the most typical sense. Of course, a lot of people think it should be treated like one. When this came up in past discussions about how the EA Funds could be structured better, suggestions like hiring a full-time fund manager ran into trade-offs with other priorities for the EA Funds, like not spending too much overhead on them, or having the diversity of perspectives that comes with multiple volunteer fund managers.