Thanks for doing this AMA!
In the recent payout report, Max Daniel wrote:
My most important uncertainty for many decisions was where the "minimum absolute bar" for any grant should be. I found this somewhat surprising.
Put differently, I can imagine a "reasonable" fund strategy based on which we would have made at least a few more grants; and I can imagine a "reasonable" fund strategy based on which we would have made significantly fewer grants this round (perhaps below 5 grants between all fund managers).
This also seems to me like quite an important issue. It seems reminiscent of Open Phil's idea of making grants "when they seem better than our 'last dollar' (more discussion of the 'last dollar' concept here), and [saving] the money instead when they don't".
Could you (any fund managers, including but not limited to Max) say more about how you currently think about this? Subquestions include:
What do you currently see as the "minimum absolute bar"?
How important do you think that bar is to your grantmaking decisions?
What factors affect your thinking on these questions? How do you approach these questions?
I feel very unsure about this. I don't think my position on this question is very well thought through.
Most of the time, the reason I don't want to make a grant doesn't feel like "this isn't worth the money"; it feels like "making this grant would be costly for some other reason". For example, when someone applies for a salary to spend some time researching a question which I don't think they'd be very good at researching, I usually don't want to fund them. But this is mostly because I think it's unhealthy in various ways for EA to fund people to flail around unsuccessfully, rather than because I think that if you multiply the probability of the research panning out by the value of the research, you get an expected amount of good that is worse than longtermism's last dollar.
This question feels less important to me because the grants it affects are marginal anyway. I think that more than half of the impact I have via my EAIF grantmaking is through the top 25% of the grants I make. And I am able to spend more time on making those best grants go better, by working on active grantmaking or by advising grantees in various ways. Coming up with a more consistent answer to "where should the bar be" seems like a worse use of my time than those other activities.
I think I would rather make 30% fewer grants and keep the saved money in a personal account where I could disburse it later.
(To be clear, I am grateful to the people who apply for EAIF funding to do things, including the ones who I don't think we should fund, or only marginally think we should fund; good on all of you for trying to think through how to do lots of good.)
I think that more than half of the impact I have via my EAIF grantmaking is through the top 25% of the grants I make
Am I correct in understanding that this is true for your beliefs about ex ante rather than ex post impact? (In other words, that the 1/4 of grants you pre-identified as top-25% will end up accounting for more than 50% of your positive impact.)
If so, is this a claim about only the positive impact of the grants you make, or also about the absolute value of all grants you make? See related question.
This is indeed my belief about ex ante impact. Thanks for the clarification.
Speaking just for myself: I don't think I could currently define a meaningful "minimum absolute bar". Having said that, the standard most salient to me is often "this money could have gone to anti-malaria bednets to save lives". I think (at least right now) it's not going to be that useful to think of EAIF as a cohesive whole with a specific bar, let alone explicit criteria for funding. A better model is a cluster of people, each with a different and continuously updating understanding of the ways we could be improving the world, trying to figure out where we think money will do the most good and whether we'll find better or worse opportunities in the future.
Here are a few things pushing me to have a low-ish bar for funding:
I think EA currently has substantially more money than it has had in the past, but hasn't progressed as fast in figuring out how to turn that into improving the world. That makes me inclined to fund things and see how they go.
As a new committee, it seems pretty good to fund some things, make predictions, and see how they pan out.
I'd prefer EA to be growing faster than it currently is, so funding projects now rather than saving the money to try to find better projects in future looks good to me.
Here are a couple of things driving up my bar:
EAIF gets donations from a broad range of people. It seems important for all the donations to be at least somewhat explicable to the majority of its donors. This makes me hesitant to fund more speculative things than I would be with my own money, and to stick more closely to "central cases" of infrastructure building than I otherwise would. This seems particularly challenging for this fund, since its remit is a bit esoteric and not yet particularly clearly defined. (As evidenced by comments on the most recent grant report, I didn't fully succeed in this aim this time round.)
Something particularly promising which I don't fund is fairly likely to get funded by others, whereas something harmful I fund can't be cancelled by others, so I want to be fairly cautious while I'm starting out in grantmaking.
Some further things pushing me towards lowering my bar:
It seems to me that it has proven pretty hard to convert money into EA movement growth and infrastructure improvements. This means that when we do encounter such an opportunity, we should most likely take it, even if it seems expensive or unlikely to succeed.
EA has a really large amount of money available (literally billions). Some EAs doing direct work could literally earn >$1,000 per hour if they pursued earning to give, but it's generally agreed that direct work seems more impactful for them. Our common intuitions for spending money don't hold anymore: e.g., a discussion about how to spend $100,000 should probably receive roughly as much time and attention as a discussion about how to spend 2.5 weeks (100 hours) of senior staff time (see the rough arithmetic sketched just after this list). This means that I don't want to think very long about whether to make a grant. Instead, I want to spend more time thinking about how to help ensure that the project will actually be successful.
In cases where a grant might be too weird for a broad range of donors, we can always refer them to a private funder. So I try to think about whether something should be funded or not, and ignore the donor perception issue. At a later point, I can still ask myself "should this be funded by the EAIF or a large aligned donor?"
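To make the time-vs-money equivalence in the second point above concrete, here is a rough back-of-envelope sketch in Python; the $1,000/hour figure is the one cited in that point, and the 40-hour work week is my own assumption rather than an EAIF figure:

```python
# Back-of-envelope: at an opportunity cost of ~$1,000 per senior-staff hour,
# deciding how to spend $100,000 is "worth" about 100 hours of staff time.
SENIOR_HOURLY_VALUE = 1_000   # $/hour, figure cited in the comment above
GRANT_SIZE = 100_000          # $

equivalent_hours = GRANT_SIZE / SENIOR_HOURLY_VALUE
equivalent_weeks = equivalent_hours / 40  # assuming a 40-hour work week

print(f"${GRANT_SIZE:,} is roughly {equivalent_hours:.0f} senior-staff hours")
print(f"(about {equivalent_weeks:.1f} work weeks)")
# -> $100,000 is roughly 100 senior-staff hours (about 2.5 work weeks)
```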
Some further things increasing my bar:
If we routinely fund mediocre work, there's little real incentive for grantseekers to strive to produce truly outstanding work.
Basically everything Jonas and Michelle have said on this sounds right to me as well.
Maybe a minor difference:
I certainly agree that, in general, donor preferences are very important for us to pay attention to.
However, I think the "bar" implied by Michelle's "important for all the donations to be at least somewhat explicable to the majority of its donors" is slightly too high.
I instead think that it's important that a clear majority of donors endorses our overall decision procedure. [Or, if they don't, then I think we should be aware that we're probably going to lose those donations.] I think this would ideally be compatible with only most (rather than all) donations being somewhat explicable, and a decent fraction, probably a majority, being more strongly explicable.
Though I would be interested to learn if EAIF donors disagreed with this.
(It's a bit unclear how to weigh both donors and grants here. I think the right weights to use in this context are somewhere in between uniform weights across grants/donors and weights proportional to grant/donation size, while being closer to the latter.)
This means that when we do encounter such an opportunity, we should most likely take it, even if it seems expensive or unlikely to succeed… Some EAs doing direct work could literally earn >$1,000 per hour if they pursued earning to give, but it's generally agreed that direct work seems more impactful for them
I notice that the listed grants seem substantially below $1,000/hour; e.g. Rethink getting $250,000 for seven FTEs implies ~$35,000/FTE, or roughly $18/hour. *
Is this because you aren't getting those senior people applying? Or are there other constraints?
* (Maybe this is off by a factor of two if you meant that they are FTE but only for half the year, etc.)
I notice that the listed grants seem substantially below $1,000/hour; e.g. Rethink getting $250,000 for seven FTEs implies ~$35,000/FTE, or roughly $18/hour. *
There are two misconceptions here:
(1) We are hiring seven interns, but they each will only be there for three months; I believe it is 1.8 FTE collectively.
(2) The grant is not being entirely allocated to intern compensation.
Interns at Rethink Priorities currently earn $23-25/hr. Researchers hired on a permanent basis earn more than that, currently $63K-85K/yr (prorated for part-time work).
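For readers following the numbers, here is a small sketch of both calculations; the 2,000 hours per FTE-year is a convention I am assuming, and the exact share of the grant allocated to compensation is not stated above:

```python
# Back-of-envelope in the original question: treats the grant as paying
# seven full-year FTEs entirely in compensation.
grant = 250_000              # $ granted to Rethink Priorities
hours_per_fte_year = 2_000   # assumed convention (~40 h/week * 50 weeks)

naive_rate = grant / (7 * hours_per_fte_year)
print(f"Naive implied rate: ${naive_rate:.0f}/hour")   # ~$18/hour

# Corrected staffing from the reply: seven interns for three months each.
fte_years = 7 * (3 / 12)
print(f"Actual staffing: {fte_years:.2f} FTE-years (~1.8)")

# Per the reply, interns are actually paid $23-25/hour, and only part of the
# grant goes to intern compensation, so the naive division above does not
# reflect anyone's hourly pay.
```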
I notice that the listed grants seem substantially below $1,000/hour (…)
Is this because you aren't getting those senior people applying? Or are there other constraints?
The main reason is that people are willing to work for substantially less than what they could make when earning to give. E.g., someone who might be able to make $5 million per year in quant trading or tech entrepreneurship might decide to ask for a salary of $80k/y when working at an EA organization. It would seem really weird for that person to ask for a $5 million/year salary, especially given that they'd most likely want to donate most of that anyway.
Cool. For what it's worth, my experience recruiting for a couple of EA organizations is that labor supply is elastic even above (say) $100k/year, and your comments seem to indicate that you would be happy to fund at least some people at that level.
So I remain kind of confused why the grant amounts are so small.
If you have to pay fairly (i.e., if you pay one employee $200k/y, you have to pay everyone else with a similar skill level a similar amount), the marginal cost of an employee who earns $200k/y can be >$1m/y. That may still be worth it, but less clearly so.
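A hypothetical illustration of that pay-parity point; the team size and current salary here are made up for the sake of the arithmetic, not figures from the EAIF or any grantee:

```python
# Hypothetical: hire one person at $200k/y when nine similarly skilled
# colleagues currently earn $100k/y, and pay-parity norms require raising
# everyone at that level to the new salary.
new_hire_salary = 200_000
current_salary = 100_000
similarly_skilled_colleagues = 9   # assumed team size for illustration

parity_raises = similarly_skilled_colleagues * (new_hire_salary - current_salary)
marginal_cost = new_hire_salary + parity_raises
print(f"Marginal annual cost of the hire: ${marginal_cost:,}")  # $1,100,000
```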
FWIW, I also don't really share the experience that labor supply is elastic above $100k/y, at least when taking into account whether staff have a good attitude, fit into the culture of the organization, etc. I'd be keen to hear more about that.
Because the EAIF is aiming to grow the overall resources and capacity for improving the world, one model is simply "is the growth rate greater than zero?" Some of the projects we don't fund look to me like they have a negative growth rate (i.e., in expectation, they won't achieve much, and the money and time spent on them will be wasted), and these should obviously not be funded. Beyond that, I don't think it's easy to specify a "minimum absolute bar".
Furthermore, one straightforward way to increase the EA community's resources is through financial investments, and any EA project should beat that bar in addition to returning more than it costs. (I don't think this matters much in practice, as we're hoping for growth rates much greater than typical in financial markets.)
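As a toy version of that comparison (all numbers here are illustrative assumptions, not estimates anyone above has endorsed):

```python
# Toy comparison: spend $100k on an infrastructure project now, or invest it
# and grant the proceeds in a year. On this simple model the project clears
# the bar only if it generates more resources than investing would have.
grant = 100_000
market_return = 0.07          # assumed annual return on investments
project_multiplier = 1.5      # assumed $ of new EA resources per $ spent

invested_value = grant * (1 + market_return)   # $107,000 next year
project_value = grant * project_multiplier     # $150,000 of resources

print(f"Invest: ${invested_value:,.0f}  vs.  project: ${project_value:,.0f}")
print("Fund the project" if project_value > invested_value else "Invest instead")
```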