The plan seemed good, but I had no way of assessing the applicant without investing significant amounts of time that I did not have available (which is likely why you see a skew towards people the granting team had some past interactions with in the grants above).
This in particular strikes me as understandable but very unfortunate. I’d strongly prefer a fund where happening to live near or otherwise know a grantmaker is not a key part of getting a grant. Are there any plans or any way progress can be made on this issue?
In some cases the applicant asked for less than our minimum grant amount of $10,000
This also strikes me as unfortunate and may lead to inefficiently inflated grant requests in the future, though I guess I can understand why the logistics behind this may require it. It feels intuitively weird though that it is easier to get $10K than it is to get $1K.
This in particular strikes me as understandable but very unfortunate. I’d strongly prefer a fund where happening to live near or otherwise know a grantmaker is not a key part of getting a grant.
I personally have never interacted directly with the grantees of about 6 of the 14 grants that I have written up, so it is not really about knowing the grantmakers in person. What does matter a lot are the second-degree connections I have to those people (and that someone on the team had for the large majority of applications), as well as whether the grantees had participated in some of the public discussions we’ve had over the past years and demonstrated good judgement (e.g. EA Forum & LessWrong discussions).
I don’t think you should model the situation as relying on knowing a grantmaker in-person, but you should think that testimonials and referrals from people that the grantmakers trust matter a good amount. That trust can be built via a variety of indirect ways, some of which are about knowing them in person and having a trust relationship that has been built via personal contact, but a lot of the time that trust comes from the connecting person having made a variety of publicly visible good judgements.
As an example, one applicant came with a referral from Tyler Cowen. I have only interacted directly with Tyler once in an email chain around EA Global 2015, but he has written up a lot of valuable thoughts online and seems to have generally demonstrated broadly good judgement (including in the granting domain with his Emergent Ventures project). This made his endorsement factor positively into my assessment for that application. (Though because I don’t know Tyler that well, I wasn’t sure how easily he would give out referrals like this, which reduced the weight that referral had in my mind.)
The word interact above is meant in a very broad way, which includes second degree social connections as well as online interactions and observing the grantee to have demonstrated good judgement in some public setting. In the absence of any of that, it’s often very hard to get a good sense of the competence of an applicant.
This also strikes me as unfortunate and may lead to inefficiently inflated grant requests in the future, though I guess I can understand why the logistics behind this may require it. It feels intuitively weird though that it is easier to get $10K than it is to get $1K.
A rough Fermi estimate I made a few days ago suggests that each grant we make comes with about $2000 of overhead from CEA in terms of labor cost plus some other risks (this is my own number, not CEA’s estimate). So given that overhead, it makes some amount of sense that it’s hard to get $1k grants.
My guess is that there is about one full-time person working on the logistics of EA Grants, together with about half of another person lost in overhead, communications, technology (EA Funds platform) and needing to manage them.
Since people’s competence is generally high, I estimated the counterfactual earnings of that person at around $150k, with an additional salary from CEA of $60k that is presumably taxed at around 30%, resulting in a total loss of money going to EA-aligned people of around ($150k + 0.3 * $60k) * 1.5 = $252k per year [Edit: Updated wrong calculation]. EA Funds has made less than 100 grants a year, so a total of about $2k - $3k per grant in overhead seems reasonable.
To be clear, this is average overhead. Presumably marginal overhead is smaller than average overhead, though I am not sure by how much. I randomly guessed it would be about 50%, resulting in something around $1k to $2k overhead.
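The arithmetic behind this estimate can be sketched out; a minimal script using the numbers stated above (all of them the comment’s own assumptions, not CEA figures):

```python
# Rough Fermi sketch of the per-grant overhead estimate above.
# All inputs are the comment's stated assumptions, not CEA's numbers.
counterfactual_earnings = 150_000  # assumed counterfactual salary per person
cea_salary = 60_000                # assumed CEA salary per person
tax_rate = 0.30                    # assumed tax rate on the CEA salary
fte = 1.5                          # one full-time person plus half of another
grants_per_year = 100              # "less than 100 grants a year"

# Money no longer flowing to EA-aligned people, per year
annual_overhead = (counterfactual_earnings + tax_rate * cea_salary) * fte

# Average overhead per grant
average_overhead = annual_overhead / grants_per_year

# Marginal overhead, guessed above to be ~50% of average
marginal_overhead = 0.5 * average_overhead

print(annual_overhead)    # 252000.0
print(average_overhead)   # 2520.0
print(marginal_overhead)  # 1260.0
```

This reproduces the $252k/year figure, the "$2k – $3k per grant" average, and the "$1k to $2k" marginal guess.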
If one person-year is 2000 hours, then that implies you’re valuing CEA staff time at about $85/hour. Your marginal cost estimate would then imply that a marginal grant takes about 12-24 person-hours to process, on average, all-in.
This still seems higher than I would expect given the overheads that I know about (going back and forth about bank details, moving money between banks, accounting, auditing the accounting, dealing with disbursement mistakes, managing the people doing all of the above). I’m sure there are other overheads that I don’t know about, but I’m curious if you (or someone from CEA) knows what they are?
[Not trying to imply that CEA is failing to optimize here or anything—I’m mostly curious plus have a professional interest in money transfer logistics—so feel free to ignore]
I actually think the $10k grant threshold doesn’t make a lot of sense even if we assume the details of this “opportunity cost” perspective are correct. Grants should fulfill the following criterion:
“Benefit of making the grant” ≥ “Financial cost of grant” + “CEA’s opportunity cost from distributing a grant”
If we assume that there are large impact differences between different opportunities, as EAs generally do, a $5k grant could easily have a benefit worth $50k to the EA community, and therefore easily be worth the $2k of opportunity cost to CEA. (A potential justification of the $10k threshold could argue in terms of some sort of “market efficiency” of grantmaking opportunities, but I think this would only justify a rigid threshold of ~$2k.)
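The criterion can be turned into a tiny decision rule; a sketch assuming a fixed $2k processing cost (the overhead figure from earlier in the thread) rather than a $10k size threshold:

```python
# Minimal sketch of the grant criterion above, with a fixed (assumed)
# $2k opportunity cost per grant instead of a $10k minimum grant size.
OPPORTUNITY_COST = 2_000  # assumed fixed processing cost per grant

def worth_making(estimated_benefit: float, grant_amount: float) -> bool:
    """Benefit must cover the grant itself plus the processing overhead."""
    return estimated_benefit >= grant_amount + OPPORTUNITY_COST

# A $5k grant with an estimated $50k benefit passes easily:
print(worth_making(50_000, 5_000))  # True
# The same $5k grant with only a $6k estimated benefit does not:
print(worth_making(6_000, 5_000))   # False
```

Under this rule the grant's size no longer matters on its own; only whether the estimated benefit clears the combined cost.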
IMO, a more desirable solution would be to have the EA Fund committees factor in the opportunity cost of making a grant on a case-by-case basis, rather than having a rigid “$10k” rule. Since EA Fund committees generally consist of smart people, I think they’d be able to understand and implement this well.
This sounds pretty sensible to me. On the other hand, if people are worried about it being harder for people who are already less plugged in to networks to get funding, you might not want an additional dimension on which these harder-to-evaluate grants could lose out compared to easier-to-evaluate ones (where the latter end up having a lower minimum threshold).
It also might create quite a bit of extra overhead for granters having to decide the opportunity cost case by case, which could reduce the number of grants they can make, or again push towards easier-to-evaluate ones.
I tend to think that the network constraints are better addressed by solutions other than ad-hoc fixes (such as more proactive investigations of grantees), though I agree it’s a concern and it updates me a bit towards this not being a good idea.
I wasn’t suggesting deciding the opportunity cost case by case. Instead, grant evaluators could assume a fixed cost of e.g. $2k. In terms of estimating the benefit of making the grant, I think they do that already to some extent by providing numerical ratings to grants (as Oliver explains here). Also, being aware of the $10k rule already creates a small amount of work. Overall, I think the additional amount of work seems negligibly small.
ETA: Setting a lower threshold would allow us to a) avoid turning down promising grants, and b) remove an incentive to ask for too much money. That seems pretty useful to me.
It’s not at all clear to me why the whole $150k of a counterfactual salary would be counted as a cost. The most reasonable (simple) model I can think of is something like: ($150k * .1 + $60k) * 1.5 = $112.5k where the $150k*.1 term is the amount of salary they might be expected to donate from some counterfactual role. This then gives you the total “EA dollars” that the positions cost whereas your model seems to combine “EA dollars” (CEA costs) and “personal dollars” (their total salary).
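The two formulas in question (the original overhead calculation and this “EA dollars” alternative) can be laid out side by side; a sketch using the thread’s stated numbers:

```python
# Side-by-side sketch of the two calculations quoted in this exchange.
# All inputs are the thread's own assumptions.
counterfactual = 150_000   # counterfactual salary per person
cea_salary = 60_000        # CEA salary per person
tax_rate = 0.30            # assumed tax rate on the CEA salary
donation_rate = 0.10       # assumed donated share of the counterfactual salary
fte = 1.5                  # staff involved

# Original estimate: "total loss of money going to EA-aligned people"
original = (counterfactual + tax_rate * cea_salary) * fte          # 252000.0

# Alternative: "EA dollars" only (forgone donations + CEA's salary cost)
ea_dollars = (donation_rate * counterfactual + cea_salary) * fte   # 112500.0

print(original, ea_dollars)
```

The disagreement is thus entirely about whose dollars count as a cost, not about the arithmetic.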
Hmm, I guess it depends a bit on how you view this.
If you model this in terms of “total financial resources going to EA-aligned people”, then the correct calculation is ($150k * 1.5) plus whatever CEA loses in taxes for 1.5 employees.
If you want to model it as “money controlled directly by EA institutions” then it’s closer to your number.
I think the first model makes more sense, which does still suggest a lower number than what I gave above, so I will update.
I don’t particularly want to try to resolve the disagreement here, but I’d think value per dollar is pretty different for dollars at EA institutions and for dollars with (many) EA-aligned people [1]. It seems like the whole filtering/selection process of granting is predicated on this assumption. Maybe you believe that people at CEA are the type of people that would make very good use of money regardless of their institutional affiliation?
[1] I’d expect it to vary from person to person depending on their alignment, commitment, competence, etc.
This in particular strikes me as understandable but very unfortunate. I’d strongly prefer a fund where happening to live near or otherwise know a grantmaker is not a key part of getting a grant. Are there any plans or any way progress can be made on this issue?
I agree this creates unfortunate incentives for EAs to burn resources living in high cost-of-living areas (perhaps even while doing independent research which could in theory be done from anywhere!). However, if I were a grantmaker, I can see why this arrangement would be preferable: evaluating grants feels like work and costs emotional energy. Talking to people at parties feels like play and creates emotional energy. For many grantmakers, I imagine getting to know people in a casual environment is effectively costless, and re-using that knowledge in the service of grantmaking allows more grants to be made.
I suspect there’s low-hanging fruit in having the grantmaking team be geographically distributed. To my knowledge, at least 3 of these 4 grantmakers live in the Bay Area, which means they probably have a lot of overlap in their social network. If the goal is to select the minimum number of supernetworkers to cover as much of the EA social network as possible, I think you’d want each person to be located in a different geographic EA hub. (Perhaps you’d want supernetworkers covering disparate online communities devoted to EA as well.)
This also provides an interesting reframing of all the recent EA Hotel discussion: Instead of “Fund the EA Hotel”, maybe the key intervention is “Locate grantmakers in low cost-of-living locations. Where grant money goes, EAs will follow, and everyone can save on living expenses.” (BTW, the EA Hotel is actually a pretty good place to be if you’re an aspiring EA supernetworker. I met many more EAs during the 6 months I spent there than my previous 6 months in the Bay Area. There are always people passing through for brief stays.)
To my knowledge, at least 3 of these 4 grantmakers live in the Bay Area, which means they probably have a lot of overlap in their social network.
That is incorrect. The current grant team was actually explicitly chosen on the basis of having non-overlapping networks. Besides me, nobody lives in the Bay Area (at least not full-time). Here is where I think everyone is living:
Matt Fallshaw: Australia (but also travels a lot)
Helen Toner: Georgetown (I think)
Alex Zhu: No current permanent living location, travels a lot, might live in Boulder starting a few weeks from now
Matt Wage: New York
I was also partially chosen because I used to live in Europe and still have pretty strong connections to a lot of European communities (plus my work on online communities has made my network less geographically centralized).
Evaluating grants feels like work and costs emotional energy. Talking to people at parties feels like play and creates emotional energy. For many grantmakers, I imagine getting to know people in a casual environment is effectively costless, and re-using that knowledge in the service of grantmaking allows more grants to be made.
At least for me this doesn’t really resonate with how I am thinking about grantmaking. The broader EA/Rationality/LTF community is in significant chunks a professional network, and so I’ve worked with a lot of people on a lot of projects over the years. I’ve discussed cause prioritization questions on the EA Forum, worked with many people at CEA, tried to develop the art of human rationality on LessWrong, worked with people at CFAR, discussed many important big picture questions with people at FHI, etc.
The vast majority of my interactions with people do not come from parties, but come from settings where people are trying to solve some kind of problem, and seeing how others solve that problem is significant evidence about whether they can solve similar problems.
It’s not that I hang out with lots of people at parties, make lots of friends and then that is my primary source for evaluating grant candidates. I basically don’t really go to any parties (I actually tend to find them emotionally exhausting, and only go to parties if I have some concrete goal to achieve at one). Instead I work with a lot of people and try to solve problems with them and then that obviously gives me significant evidence about who is good at solving what kinds of problems.
I do find grant interviews more exhausting than other kinds of work, but I think that has to do with the directly adversarial setting, in which the applicant is trying their best to seem competent and good while I am trying my best to reach an accurate judgement of their competence. That dynamic usually makes this kind of interview a much worse source of evidence about someone’s competence than having worked with them on some problem for a few hours (which is also why work-tests tend to be much better predictors of future job performance than interview performance).
Thanks for the transparent answers.
A rough fermi I made a few days ago suggests that each grant we make comes with about $2000 of overhead from CEA for making the grants in terms of labor cost plus some other risks (this is my own number, not CEAs estimate). So given that overhead, it makes some amount of sense that it’s hard to get $1k grants.
Wow! This is an order of magnitude larger than I expected. What’s the source of the overhead here?
Here is my rough fermi:
My guess is that there is about one full-time person working on the logistics of EA Grants, together with about half of another person lost in overhead, communications, technology (EA Funds platform) and needing to manage them.
Since people’s competence is generally high, I estimated the counterfactual earnings of that person at around $150k, with an additional salary from CEA of $60k that is presumably taxed at around 30%, resulting in a total loss of money going to EA-aligned people of around ($150k + 0.3 * $60k) * 1.5 = $252k per year [Edit: Updated wrong calculation]. EA Funds has made less than 100 grants a year, so a total of about $2k - $3k per grant in overhead seems reasonable.
To be clear, this is average overhead. Presumably marginal overhead is smaller than average overhead, though I am not sure by how much. I randomly guessed it would be about 50%, resulting in something around $1k to $2k overhead.
I think you have some math errors:
$150k * 1.5 + $60k = $285k rather than $295k
Presumably, this should be ($150k + $60k) * 1.5 = $315k ?
Ah, yes. The second one. Will update.
(moved this comment here)
Good to know!
Isn’t Matt in HK?
He sure was on weird timezones during our meetings, so I think he might be both? (as in, flying between the two places)
Update: I was just wrong, Matt is indeed primarily HK
Boy, there are two Matts in that list.