Speaking just for myself: I don’t think I could currently define a meaningful ‘minimum absolute bar’. Having said that, the standard most salient to me is often ‘this money could have gone to anti-malaria bednets to save lives’. I think (at least right now) it’s not going to be that useful to think of EAIF as a cohesive whole with a specific bar, let alone explicit criteria for funding. A better model is a cluster of people, each with a different and continuously updating understanding of how we could be improving the world, trying to figure out where we think money will do the most good and whether we’ll find better or worse opportunities in the future.
Here are a few things pushing me to have a low-ish bar for funding:
I think EA currently has substantially more money than it has had in the past, but hasn’t progressed as fast in figuring out how to turn that into improving the world. That makes me inclined to fund things and see how they go.
As a new committee, it seems pretty good to fund some things, make predictions, and see how they pan out.
I’d prefer EA to be growing faster than it currently is, so funding projects now rather than saving the money to try to find better projects in future looks good to me.
Here are a couple of things driving up my bar:
EAIF gets donations from a broad range of people. It seems important for all of the donations to be at least somewhat explicable to the majority of its donors. This makes me more hesitant to fund speculative things than I would be with my own money, and pushes me to stick more closely to ‘central cases’ of infrastructure building than I otherwise would. This seems particularly challenging for this fund, since its remit is a bit esoteric and not yet particularly clearly defined. (As evidenced by comments on the most recent grant report, I didn’t fully succeed in this aim this time round.)
Something particularly promising which I don’t fund is fairly likely to get funded by others, whereas something harmful that I do fund can’t be cancelled by others, so I want to be fairly cautious while I’m starting out in grantmaking.
Some further things pushing me towards lowering my bar:
It seems to me that it has proven pretty hard to convert money into EA movement growth and infrastructure improvements. This means that when we do encounter such an opportunity, we should most likely take it, even if it seems expensive or unlikely to succeed.
EA has a really large amount of money available (literally billions). Some EAs doing direct work could literally earn >$1,000 per hour if they pursued earning to give, but it’s generally agreed that direct work seems more impactful for them. Our common intuitions for spending money don’t hold anymore – e.g., a discussion about how to spend $100,000 should probably receive roughly as much time and attention as a discussion about how to spend 2.5 weeks (100 hours) of senior staff time (see the quick arithmetic after this list). This means that I don’t want to think very long about whether to make a grant. Instead, I want to spend more time thinking about how to help ensure that the project will actually be successful.
In cases where a grant might be too weird for a broad range of donors, we can always refer them to a private funder. So I try to think about whether something should be funded or not, and ignore the donor perception issue. At a later point, I can still ask myself ‘should this be funded by the EAIF or a large aligned donor?’
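To spell out the arithmetic behind the ‘$100,000 ≈ 2.5 weeks of senior staff time’ equivalence above (assuming senior staff time is valued at roughly the $1,000/hour figure mentioned earlier and a 40-hour work week):

$$\frac{\$100{,}000}{\$1{,}000/\text{hour}} = 100 \text{ hours} \approx 2.5 \text{ weeks of senior staff time}$$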
Some further things increasing my bar:
If we routinely fund mediocre work, there’s little real incentive for grantseekers to strive to produce truly outstanding work.
Basically everything Jonas and Michelle have said on this sounds right to me as well.
Maybe a minor difference:
I certainly agree that, in general, donor preferences are very important for us to pay attention to.
However, I think the “bar” implied by Michelle’s “important for all the donations to be at least somewhat explicable to the majority of its donors” is slightly too high.
I instead think that it’s important that a clear majority of donors endorses our overall decision procedure. [Or, if they don’t, then I think we should be aware that we’re probably going to lose those donations.] I think this would ideally be compatible with only most donations being somewhat explicable (and with a decent fraction, probably a majority, being more strongly explicable).
Though I would be interested to learn if EAIF donors disagreed with this.
(It’s a bit unclear how to weigh both donors and grants here. I think the right weights to use in this context are somewhere in between uniform weights across grants/donors and weights proportional to grant/donation size, while being closer to the latter.)
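One way to make that weighting idea concrete (purely an illustrative interpolation, not a scheme anyone in this thread has committed to): if $s_i$ is the size of grant/donation $i$ and $n$ is the number of grants/donations, the weight on item $i$ could be

$$w_i = \alpha \cdot \frac{1}{n} + (1 - \alpha) \cdot \frac{s_i}{\sum_j s_j}$$

where $\alpha = 1$ gives uniform weights, $\alpha = 0$ gives fully size-proportional weights, and ‘closer to the latter’ corresponds to $\alpha$ somewhere below 0.5.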
This means that when we do encounter such an opportunity, we should most likely take it, even if it seems expensive or unlikely to succeed… Some EAs doing direct work could literally earn >$1,000 per hour if they pursued earning to give, but it’s generally agreed that direct work seems more impactful for them
I notice that the listed grants seem substantially below $1,000/hour; e.g. Rethink getting $250,000 for seven FTEs implies ~$35,000/FTE or roughly $18/hour. *
Is this because you aren’t getting those senior people applying? Or are there other constraints?
* (Maybe this is off by a factor of two if you meant that they are FTEs but only for half the year, etc.)
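Spelling out that back-of-envelope calculation (assuming roughly 2,000 working hours per FTE-year, which is my own assumption):

$$\frac{\$250{,}000}{7 \text{ FTE}} \approx \$35{,}700 \text{ per FTE-year}, \qquad \frac{\$35{,}700}{2{,}000 \text{ hours}} \approx \$18/\text{hour}$$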
I notice that the listed grants seem substantially below $1,000/hour; e.g. Rethink getting $250,000 for seven FTEs implies ~$35,000/FTE or roughly $18/hour. *
There are two misconceptions here:
(1) We are hiring seven interns, but each will only be with us for three months; I believe it works out to roughly 1.8 FTE collectively.
(2) The grant is not being entirely allocated to intern compensation.
Interns at Rethink Priorities currently earn $23-25/hr. Researchers hired on a permanent basis earn more than that, currently $63K-85K/yr (prorated for part-time work).
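For a rough sense of the corrected numbers (treating each internship as a quarter of an FTE-year and assuming ~2,000 working hours per year, both of which are my own approximations):

$$7 \times \tfrac{3}{12} = 1.75 \approx 1.8 \text{ FTE}, \qquad \frac{\$63\text{K to }\$85\text{K per year}}{2{,}000 \text{ hours}} \approx \$32 \text{ to } \$43 \text{ per hour}$$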
I notice that the listed grants seem substantially below $1,000/hour (…)
Is this because you aren’t getting those senior people applying? Or are there other constraints?
The main reason is that people are willing to work for substantially less than what they could make when earning to give. E.g., someone who might be able to make $5 million per year in quant trading or tech entrepreneurship might decide to ask for a salary of $80k/y when working at an EA organization. It would seem really weird for that person to ask for a $5 million/year salary, especially given that they’d most likely want to donate most of it anyway.
Cool – for what it’s worth, my experience recruiting for a couple of EA organizations is that labor supply is elastic even above (say) $100k/year, and your comments seem to indicate that you would be happy to fund at least some people at that level.
So I remain kind of confused why the grant amounts are so small.
If you have to pay fairly (i.e., if you pay one employee $200k/y, you have to pay everyone else with a similar skill level a similar amount), the marginal cost of an employee who earns $200k/y can be >$1m/y. That may still be worth it, but less clearly so.
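To illustrate with made-up numbers: suppose nine existing staff at a similar skill level currently earn $100k/y each, and pay parity means they would all need to move up to $200k/y alongside the new hire. Then the marginal cost of that one $200k/y hire would be roughly

$$\$200\text{k} + 9 \times (\$200\text{k} - \$100\text{k}) = \$1.1\text{m per year}$$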
FWIW, I also don’t really share the experience that labor supply is elastic above $100k/y, at least when taking into account whether staff have a good attitude, fit into the culture of the organization, etc. I’d be keen to hear more about that.