EA Infrastructure Fund Ask Us Anything (January 2024)
The EA Infrastructure Fund (EAIF) is running an Ask Us Anything! EAIF grantmakers have set aside time to answer questions on the Forum. I (Tom) will aim to answer most questions next weekend (~January 20th), so please submit questions by the 19th.
Please note: We believe the next three weeks are an especially good time to donate to EAIF, because:
We continue to face significant funding constraints, leading to many great projects going unfunded or underfunded
Your donation will be matched at a 2:1 ratio until Feb 2. EAIF has ~$2m remaining in available matching funds, meaning that (unlike LTFF) this match is unlikely to be utilised without your support
If you agree, you can donate to us here.
About the Fund
The EA Infrastructure Fund aims to increase the impact of projects that use the principles of effective altruism, by increasing their access to talent, capital, and knowledge.
Over 2022 and H1 2023, we made 347 grants totalling $13.4m in disbursements. You can see our public grants database here.
Related posts
EA Infrastructure Fund’s Plan to Focus on Principles-First EA
EA Funds organizational update: Open Philanthropy matching and distancing
About the Team
Tom Barnes: Tom is currently a Guest Fund Manager at EA Infrastructure Fund (previously an Assistant Fund Manager since ~Oct 2022). He also works as an Applied Researcher at Founders Pledge, currently on secondment to the UK Government to work on AI policy. Previously, he was a visiting fellow at Rethink Priorities, and was involved in EA uni group organizing.
Caleb Parikh: Caleb is the project lead of EA Funds. Caleb has previously worked on global priorities research as a research assistant at GPI, EA community building (as a contractor to the community health team at CEA), and global health policy. Caleb currently leads EAIF as interim chair.
Linchuan Zhang: Linchuan (Linch) Zhang currently works full-time at EA Funds. He was previously a Senior Researcher at Rethink Priorities working on existential security research. Before joining RP, he worked on time-sensitive forecasting projects around COVID-19. Previously, he programmed for Impossible Foods and Google, and has led several EA local groups.
Ask Us Anything
We’re happy to answer any questions – marginal uses of money, how we approach grants, questions/critiques/concerns you have in general, what reservations you have as a potential donor or applicant, etc.
There’s no hard deadline for questions, but I would recommend submitting by 19th January, as I aim to respond from the 20th.
As a reminder, we remain funding-constrained, and your donation will be matched (for every $1 you donate, EAIF will receive $3). Please consider donating!
If you have projects relevant to building up the EA community’s infrastructure, you can also apply for funding here.
CEEALAR provides cost-effective support to EAs working on global catastrophic risks in the form of free or subsidised coliving and coworking spaces, but has so far been unsuccessful in securing EAIF grants. Does EAIF see a role for this type of infrastructure? If so, what would you like it to look like?
(Note that I’m not speaking about CEEALAR or any other specific EAIF applicants/grantees.)
I understand that CEEALAR has created a low-cost hotel/coworking space in the UK where relatively junior people can stay while they work on research projects relevant to GCRs. I think you had some strategic updates recently, so some of my impressions of your work may be out of date. Supporting people early in their impact-focused careers seems really valuable; I’ve seen lots of people go through in-person retreats and quickly start doing valuable work.
At the same time, I think projects that take lots of junior people and put them in the same physical space for an extended period whilst asking them to work on important and thorny questions have various risks (e.g. negative effects on mental health, attracting negative press to EA, trapping people in suboptimal learning environments).
Some features of projects in this reference class that I’d be excited to see (though this is NOT a list of requirements):
* located in an existing hub so that program participants have plenty of people outside the program to interact with
* generally taking people who have good counterfactual options outside of EA areas, both so that people don’t feel “trapped”, and because this is correlated with being able to quickly do very useful work within EA cause areas
* trying to foster an excellent intellectual environment—ideally, there would be a critical mass of thoughtful people and truth-seeking epistemic norms
* having a strong track record, with a high proportion of participants leaving the program and entering high-impact roles
* taking community health seriously: incidents should be handled in a professional manner, and projects should generally adhere to sensible best practices (e.g. amongst full-time staff, there shouldn’t be romantic relationships between managers and their direct reports)
I recently spent some time in the Meridian Office, a co-working space in Cambridge UK for people working on pressing problems, which seems to be doing a good job on all of the points above (though I haven’t evaluated them properly).
(Note that I don’t mean to imply that CEEALAR is or isn’t doing well on the above points, as I don’t want to talk about specific EAIF grantees.)
I don’t understand this consideration. It seems to me that people located in a place with a more robust existing community are the people who would counterfactually benefit the least from a place to interact with other EAs, because they already have plenty of opportunities to do so.
I’m assuming by “hub” you mean “EA hub”, but if by “hub” you mean “a place with high population density/otherwise a lot of people to talk to”, then this makes sense.
(Full disclosure: I was a grantee of CEEALAR last year; but I’m thinking about this in the context of non-residential office/co-working spaces like Meridian Office).
I agree that people in existing EA hubs are more likely to come across others doing high value work than people located outside of hubs.
That said, on the current margin, I still think many counterfactual connections happen at office spaces in existing EA hubs. In the context of non-residential spaces, I’m not really sure who would use an EA office space outside existing EA hubs, so I find the comparison between an office in a hub and an office outside a hub a little confusing (whereas with CEEALAR I understand who would use it).
I imagine there could be a useful office in a city with ~20 people using it regularly and ~100 people interested enough in EA to come to some events, and I wouldn’t think of that city as an “EA hub”.
I also think e.g. a London office has much more value than e.g. an Oxford or Cambridge office (although I understand all three to be hubs), even though Oxford and Cambridge have a higher EA density.
Have there been (m)any grants that—from the ex ante perspective—you really wish you hadn’t made? If so, what did you/EAIF learn from that experience?
I go back and forth on this. Sometimes, I feel like we are funding too many underperforming projects, but then some marginal project surprises me by doing quite well, and I feel better about the hits-based strategy. Over the last three months, we have moved towards funding things that we feel more confident in, mostly due to funding constraints.
I don’t think I have a great list of communicable lessons, but some high-level thoughts/updates that jump to mind:
In general, people will perform worse than they expect when working in areas they have little experience in.
Making grants to people who are either inside the community or have legible credentials is often much cheaper, in terms of evaluation time, than making grants to applicants with no connection to the community. But being too insular in our grantmaking is probably unhelpful for the long-term health of the community, and balancing these factors is hard.
The social skills and professionalism of grantees are probably more important than I used to think; funding people who are extremely ambitious but unreliable or unprofessional seems to have lots of hidden costs.
Sometimes it’s worth encouraging an applicant to pursue a role at an established organisation even if they are above the bar for a grant. There are lots of downsides of grants that the grantee might not be tracking, and overall I think it’s ok to be a bit more paternalistic than I used to think.
What are some kinds of projects you’d like to see more of (in terms of applications)?
Semi-relatedly, I’d also be interested in hearing about some of EAIF’s past grants that you think were particularly exciting/successful (this might have already gotten covered in one of the other posts — apologies if so!).
Good question! We have discussed running RFP(s) to more directly support projects we’d like to see. First, though, we want to do some more strategic thinking about the direction we want EAIF to go in, so at this stage we are fairly unsure about which project types we’d like to see more of.
Caveats aside, I personally[1] would be pretty interested in:
Macrostrategy / cause prioritization research. I think a substantial amount of intellectual progress was made in the 2000s / early 2010s by a constellation of different places (e.g. early FHI, the rationality community, randomistas, GiveWell, etc.), which led to the EA community developing some crucial ideas. Sadly, I think we have seen less of that “raw idea generation process” in recent times. I’d be pretty excited if a project were able to revive this spirit, although I think it would be (very) difficult to pull off.
High-quality communication of EA principles. Many core EA ideas are hard to communicate, especially in low-bandwidth formats. In practice, I think this means that discourse around EA (e.g. on Twitter) is pretty poor (and worsening). Whilst there’s been work to remedy this in specific cause areas (like AISCC), there don’t seem to be many public communications champions of EA as an intellectual project, nor as a community of people earnestly aiming to improve the world. Again, I think this is hard to remedy and easy to get wrong, but I would be pretty excited for someone to try.
Fundraising. Promising projects across all cause areas are going unfunded due to funding constraints (EAIF included). I’m additionally worried that several fundraising organisations, whose principal goal is to fund EA/EA-ish projects, are distancing themselves from the EA label, leaving projects (especially in the EA community) without a source of funding.
[1] Not speaking for EAIF / EA Funds / EV.
Can you provide a rationale/cost-effectiveness estimate for why a dollar donated to EAIF goes further in saving lives / reducing suffering than a dollar donated to a GiveWell top charity?
Hey, good question!
Here’s a crude rationale:
Suppose that by donating $1k to an EAIF project, they get 1 new person to consider donating more effectively.
This 1 new person pledges to give 1% of their salary to GiveWell’s top charities, and they do this for the next 10 years.
If they make (say) $50k / year, then over 10 years they will donate $5k to GiveWell charities.
The net result is that a $1k donation to EAIF led to $5k donated to top GiveWell charities—i.e. a dollar donated to EAIF goes 5x as far as a dollar given directly to a GiveWell top charity.
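To make the arithmetic explicit, here is a minimal sketch of the calculation in Python, using only the illustrative numbers from the hypothetical above (none of these figures are real EAIF data):

```python
# Illustrative multiplier calculation for the hypothetical above.
# Every number here is an assumption from the example, not real grant data.

grant = 1_000           # $ donated to an EAIF project
new_donors = 1          # people who start donating effectively (assumed)
salary = 50_000         # each new donor's annual salary in $ (assumed)
pledge_fraction = 0.01  # share of salary pledged to GiveWell charities (assumed)
years = 10              # duration of the pledge (assumed)

donations_generated = new_donors * salary * pledge_fraction * years  # $5,000
multiplier = donations_generated / grant                             # 5.0

print(f"${donations_generated:,.0f} to GiveWell top charities "
      f"-> {multiplier:.0f}x multiplier")
```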
Of course, a bunch of important considerations and nuances have been ignored in this hypothetical—indeed, I think it’s pretty important to be cautious / suspicious about calculations like the above, so we should often discount the “multiplier” factor significantly. Nonetheless, I think (some version of) the above argument goes through for a number of projects EAIF supports.
I’m pretty excited about a vision of principles-first EAIF that helps drive forward better cause prioritization within EA, and resonated with this paragraph:
But I’m not sure how many exciting grants you expect to find within bucket 2 — scanning through the EAIF grants database, I can’t really identify anything within the last year that seems like it would push the frontier of “helping EA notice what it is important to work on”.
How successful do you expect to be at soliciting grants within this area / do you have a plan on how to go about with this? (Feel free to object to the premise too.)
I think the premise of your question is roughly correct: I do think it’s pretty hard to “help EA notice what it is important to work on”, for a bunch of reasons:
It could lead to new, unexpected directions which might be counterintuitive / controversial.
It requires the community to have the psychological, financial and intellectual safety to identify / work on causes which may not turn out to be promising.
It needs a non-trivial number of people to engage with the results of exploration and act upon them (including people who can direct substantial resources).
It has a very long feedback loop, which a) is demoralising, and b) makes it difficult to predict whether the work will ever have an impact.
Given those challenges, it’s not surprising to me if we struggle to find many projects in this area. To overcome that, I think we would need to take a more active approach (e.g. RFPs). But we are still in the early days of thinking about these kinds of questions.
What seem to be the most cost effective ways to explain and possibly spread the ideas and principles of effective altruism, according to your experience as fund managers? Are there some models that seem to work in many contexts/at scale or do these projects depend heavily on the performance/talent of the individual grantees and the context?
I think the performance/talent of grantees and context is extremely important.
That said, some programs that I am excited about that I believe many EAs are a good fit for:
University EA groups, particularly ones at top universities
Field-specific retreats/workshops/boot camps etc.
Career advising calls and other career-focused content
Writing high-quality blog posts
Some projects I have seen work well in the past, but I think they are a bad fit for most people:
YouTube channels
Mass media comms (like writing an op-ed in a popular newspaper)
Most of my views on this topic are informed by this survey.
What advice would you give to someone applying for the first time for the grant? Also, can you share some non-obvious mistakes applicants make?
My boring answer would be to see the details on our website. In terms of submission style, we say:
You can find details on the scope of grants that EAIF will consider funding here (although this is subject to change—details here).
For non-obvious mistakes, some examples that come to mind are:
Unclear theory of change—I think good applications often have a clear sense of what they’re trying to achieve, and how they plan to achieve it. This may seem relatively obvious, but I think it still often goes underappreciated. Put another way: it’s very rare for me to think “this applicant has thought about their path to impact too much”.
Providing too little information—whilst we do recommend that applicants don’t take too long to write applications, it can be hard to make well-evidenced decisions without much information to go on. For projects that are clearly great / terrible this is less of an issue, but projects close to the bar do benefit from at least some basic info.
Providing too much (irrelevant) information—on the flip side, a large amount of irrelevant information can distract from the core case for the project. E.g. if an applicant does not have a track record in the area they’re looking to move towards, I much prefer that they state this directly rather than include highly irrelevant info to fill the page.
Not providing any references—we often reach out to references, who can give a more detailed opinion on the applicant and/or their project plan. Without any third party to contact, it can be difficult to verify claims made in an application.
Optimising for p(receive grant) rather than impact—this is a tricky one, since people apply for projects they believe are highly impactful, and getting funded is an obvious instrumental goal. But it’s worth being upfront and honest about weaknesses: our common goal is ultimately to do the most good, and persuasion / deception undermine that (even if they increase p(receive grant)).
Interpreting rejection (or success) too strongly—the grant application process (like job applications) is extremely noisy, and a single decision gives limited evidence about an application. Of course, this advice goes both ways—it is not literally zero evidence, and some projects shouldn’t be funded—but I do worry that people over-update on a rejection from EAIF, especially when they were pretty close to the bar.
Do you evaluate the impact of EA Infrastructure grants? If yes, how? And do you plan to publish these impact evaluations? If not, what are the challenges?
Currently we don’t have a process for retroactively evaluating EAIF grants. However, there are a couple of informal channels which can help to improve decision-making:
We request that grantees fill out a short form detailing the impact of their grant after six months. These reports are both directly helpful for evaluating a future application from the same grantee, and indirectly helpful for calibrating the “bang-for-your-buck” we should expect from different grant sizes for different projects.
When evaluating a grant renewal, we can compare the initial application’s plans with the track record listed in the later application, to see if the grant was a success on the grantee’s own terms.
One technique I’ve picked up is evaluating grants in reverse—reading the details of the project, then giving a rough estimate of my willingness to pay for a project of that nature. Looking at the actual cost of the project can then quickly determine whether it meets a funding bar that I’ve pre-registered (see the sketch below).
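As an illustration, here is a minimal sketch of that reverse-evaluation step in Python; the function name and figures are hypothetical, not part of any actual EAIF process:

```python
# Hypothetical sketch of "evaluating grants in reverse".
# Step 1: read the project details and pre-register a rough willingness to pay,
#         before looking at the requested budget.
# Step 2: compare the pre-registered estimate against the actual cost.

def meets_funding_bar(pre_registered_wtp: float, requested_budget: float) -> bool:
    """Fund only if the blinded value estimate covers the requested budget."""
    return pre_registered_wtp >= requested_budget

# E.g. I judge a project to be worth ~$40k before seeing its budget;
# the applicant then requests $55k, so it falls below the bar.
print(meets_funding_bar(pre_registered_wtp=40_000, requested_budget=55_000))  # False
```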
I think the lack of a proper M&E function is a problem, and one that I would be keen to address in the longer term.
In your view, how many of EAIF’s past grants were a “win”—similarly or more cost-effective than a typical grant to a GiveWell top charity? What would be a rough win/loss ratio, in terms of both the number of grants and their $ value?
Hey—I think it’s important to clarify that EAIF is optimising for something fairly different from GiveWell (although we share the same broad aim):
Specifically, GiveWell is optimising for lives saved in the next few years, under the constraint of health projects in LMICs, with a high probability of impact and fairly immediate / verifiable results.
Meanwhile, EAIF is focused on a hits-based, low-certainty area, where the evidence base is weaker, grants have longer paths to impact, and the overarching goal is often unclear.
As such, a direct/equivalent comparison is fairly challenging, as our “bar for funding” is fairly different to GiveWell’s. The other caveat is that we don’t have a systematic process for retroactively classifying grants as “wins” or “losses”—our current M&E process is much fuzzier.
Given this, any answer about the cost-effectiveness of GiveWell vs EAIF will be pretty subjective and prone to error.
Nonetheless, my personal opinion is that the mean EAIF grant is likely more impactful than the typical GiveWell grant. Very briefly, this is because:
I think many of our grants have / would have a >1x multiplier on donations to GiveWell top charities, if we evaluated them under this framework (as outlined here).
Further, I think there are more impactful ways to save / improve the lives of people alive today than donating to GiveWell’s top charities; and I think there are even greater opportunities for impact (via improving animal welfare, or the long-term future). Many of EAIF’s grantees do more than just fundraise for effective global health charities, and thus I expect them to (on average) have a higher impact.
But this is just my personal view, contingent on a very large number of assumptions, which people very reasonably disagree on.
Would you consider making retroactive grants? I saw that the LTFF did a few. If you did, how would you evaluate them differently from the usual grants for future work?
I’m personally interested in retroactive grants for cause prioritization research.