This is the July 2020 payout report for the Effective Altruism Meta Fund, one of the Effective Altruism Funds.
Fund: Effective Altruism Meta Fund
Payout date: August 7, 2020
Payout amount: $838,000.00
Grant author(s): Luke Ding, Alex Foster, Denise Melchin, Matt Wage, Peter McIntyre
Grant recipients:
Grant rationale:
The EA Meta Fund made the following grant recommendations in the July 2020 round:
80,000 Hours - $300k
Founders Pledge - $200k
The Future of Humanity Foundation - $200k
WANBAM (Women and Non-Binary Altruism Mentorship) - $80k
gieffektivt.no - $30k
RC Forward - $15k
EA Netherlands—workshops for professionals involved in policymaking - $13k
In this grant round, we focused on both well-established and early-stage organizations. While there is higher uncertainty in funding early-stage projects, we think there is also significant value. Much of this value comes in the form of new information on what works and what doesn’t, which can be used to inform future efforts to maximize impact.
Below are some of the key considerations behind our grant decisions. As with the previous rounds, these summaries are by no means meant to be read as complete or exhaustive cases for each grant. They are based on a series of internal conversations between the fund managers, as well as with the grantees, incorporating our past experience, knowledge, and judgment. While risks and reservations for these organizations have been taken into account, we do not discuss them below in most cases.
Applications
If there is a meta initiative that you would like us to consider for a future grant, please complete this form.
Questions
Please send any questions about the Fund to jonas.vollmer at centreforeffectivealtruism.org.
80,000 Hours - $300k
80,000 Hours aims to resolve skill bottlenecks in fields relevant to addressing the world’s most pressing problems. To do this, they carry out research into how talented individuals can maximize the impact of their careers, produce online advice, identify readers who might be able to enter priority areas, and provide these readers with free in-person advice and connections to mentors, job openings, and funding.
Categories: Talent-leverage, scale-stage
We have made grants to 80,000 Hours (80k) in a number of past grant rounds (see our previous payout reports here), and we continue to view 80k as one of the most impactful meta opportunities available.
As 80k has become one of the largest cumulative recipients of EA Meta Fund grants, we spent more time in the last two rounds making our evaluation more thorough.
We reviewed 80k by focusing on four main areas:
Content: 80k’s online content is, in our opinion, consistently very high quality. The core aim of their content is to increase reach and generate initial interest among their target audience. There may be a small number of people who change their career path based on content alone, but most seem to benefit from direct in-person advice.
Podcast: performs a similar function to their text-based content. The podcast's reach is roughly a tenth that of their other content, but it seems to generate much higher engagement per person.
Job board: drives engagement and supports 80k’s end goal of connecting talented people to high-priority jobs. The job board consistently receives a high number of views, generating over 60,000 click-throughs last year.
One-to-one advising and headhunting: direct in-person advice with individuals who could enter 80k’s priority career paths.
80k seems to have continued to make strong progress on their metrics during 2019. Some key updates include:
Replacing the earlier careers guide with their key ideas series.
Releasing 20 podcast episodes, gaining ~370 subscribers per episode.
Carrying out ~40 headhunting searches for top-priority organizations, making ~100 qualified submissions, and tracking 11 placements.
Providing career advice to 244 people. 64% of those who provided feedback rated the advice as a 6 or 7 (out of 7) for usefulness.
Advertising 1,224 positions on their job board and sending over 60,000 clicks through to vacancy pages.
Releasing 33 new pieces of content, short of their target of 50.
We believe that the impact of recruiting and community-building efforts often follows a heavy-tailed distribution, and we agree with the 80k team that a large portion of their long-term impact may come from an unexpected source. While 80k seem to have had some significant measurable impact, we expect that much of the value of their work will come from hard-to-measure qualitative sources, such as growing the number of people interested in effective altruism. Over the years, 80k has been one of the largest sources of people first learning about EA. We see their role here as having especially high potential upside.
Having reviewed their metrics and tangible outcomes, we found ourselves regularly coming back to one qualitative argument in particular:
Taking a macro view, if skill bottlenecks are genuinely holding back the growth of high-priority cause areas, we think that we should be willing to spend significant resources to overcome them. We see the potential downside of underinvesting in mitigating these bottlenecks as far outweighing that of overinvesting.
80k remains the primary group appearing to make progress in this clearly challenging domain. This argument does not supersede the need for cost-effectiveness, but does encourage us to put higher strategic priority on qualitative arguments for additional potential upsides.
Note: Peter McIntyre recused himself from this grant evaluation.
Founders Pledge - $200k
Founders Pledge encourages startup founders and investors to sign a legally binding pledge to donate a percentage of their personal exit proceeds to charity. Once the pledge is realized, Founders Pledge supports pledgers to decide where to give in order to have the most positive impact.
Categories: Capital-leverage, scale-stage
We have made grants to Founders Pledge (FP) in two previous grant rounds (see the reports here and here).
FP has continued to achieve impressive top-line metrics. They currently have over 1,400 pledgers and over $2.4bn in pledge value. Every year to date, their pledge value has increased by 100-150% year-on-year, based on venture capitalists’ valuations of their pledgers’ businesses (% pledged * % equity held by founders * $ post-money valuation at last investment round).
Alongside raising pledges, FP also focuses on supporting their pledgers to give to high-impact areas. Since 2015, over $19m has been given to high-impact charities recommended by FP. FP estimates that their research and advice played a significant role in $8m out of this total. Over a third of all donations made by FP pledgers to date have gone to high-impact charities. FP’s research and advisory team, which focuses on supporting pledgers to give to high-impact charities, has grown from 2 to 8 people since 2018.
It is worth noting that the time lag from pledge to exit to donation is generally between five and ten years. Given FP’s fast-growing pledge value, we expect their work to date to result in a total donation volume orders of magnitude higher than the numbers stated in the previous paragraph. So long as FP maintains a high growth rate, their costs should be expected to run at least several years ahead of their outcomes.
FP has made significant cuts to their operating budget in light of COVID-19. We are keen to see their remaining funding gap filled to avoid further cuts where possible.
As we noted when we last made a grant to FP, we also wish to highlight that their less quantifiable outcomes seem particularly promising. One of their goals is to have a long-lasting positive effect on the culture of smart major philanthropy and, given the continued exponential growth of the value of their network, the prospect of them achieving this goal seems important to take into consideration, despite being highly uncertain.
Note: Luke Ding recused himself from this grant evaluation.
The Future of Humanity Foundation - $200k
The Future of Humanity Foundation is a charitable entity that is being set up to increase the operational capacity of (and relieve operational bottlenecks for) the Future of Humanity Institute (FHI), a research centre based at Oxford University. FHI works on big-picture questions for human civilization and explores what can be done now to ensure a flourishing long-term future. Their research covers macrostrategy, technical AI alignment, AI strategy and governance, and emerging biotechnologies.
Categories: Capacity-building, early-stage
This dual-entity setup has been tried and tested by the Forethought Foundation, which plays a similar role for Oxford’s Global Priorities Institute (GPI). Given the apparent success of the Forethought Foundation using a similar methodology, we are keen to see strategies of operational leverage further explored and experimented with. We see this as potentially highly valuable information for the wider community, as well as being a form of leverage for FHI’s work in general.
Research from FHI, and particularly from its director Nick Bostrom, has been influential in shaping the field of research and policy work focused on safeguarding future generations. Nick is the author of over 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), and the New York Times bestseller Superintelligence (2014).
We see this as a promising meta initiative because The Future of Humanity Foundation is aiming to leverage FHI’s operations and increase its overall impact. (FHI itself also acts as a meta initiative to some degree, because it provides scholarships, promotes important ideas through popular science books, and trains early-career researchers through its Research Scholars Programme.) The goal of the foundation is to maximize the impact of the work done by FHI by providing financial, operational, and administrative support to further FHI’s research activities, enable research collaborations, and support the hiring of research talent. We expect that The Future of Humanity Foundation would give FHI the ability to execute more quickly, while still maintaining the benefits of being a research centre at a world-leading university.
Nick Beckstead recused himself from advising on this grant.
WANBAM (Women and Non-Binary Altruism Mentorship) - $80k
WANBAM aims to increase retention and improve diversity within the wider effective altruism community. The project is experimenting with achieving this via connecting and supporting a global mentorship network of women and non-binary members of the effective altruism community.
Categories: Talent-leverage, early-stage
This is a second grant to an early-stage project that has shown promising initial results. Much of our reasoning behind the previous grant still stands; see the relevant payout report. We continue to believe that mentorship, when done well, has the potential to be effective in a number of ways, despite being inherently challenging to measure. In our view, two major potential upsides of this project are that it could (a) make the EA community more diverse by increasing retention of its women and non-binary members, and (b) improve the community’s welcomingness.
As with any field, there is a community built around effective altruism, which affects talent pipelines, culture, and other hard-to-evaluate metrics. Our view is that diversity in a community is valuable in and of itself and seems very likely to have notable effects on the success of communities in general. Given that a lack of diversity is also intuitively self-reinforcing and harder to correct the longer it is left unchecked, we are pleased to see tractable and sensible projects attempting to address this.
This project seems to have done well with initial funding. Both mentors and mentees have given positive feedback, and WANBAM seems to have succeeded in enlisting senior talent as mentors and board directors. Some of the project’s achievements:
There have been ~60 mentorship pairings so far.
62% of participants gave a score of 10/10 when asked if they would recommend WANBAM to a friend.
Demand appears to be increasing: in November 2019, WANBAM received 72 applications, while in June 2020, they received 97.
To date, Kathryn Mecrow-Flynn, the project lead, has been spending ~50% of her time on WANBAM. Given the positive results so far, we felt willing to take a larger bet on this project, funding Kathryn to go full-time.
Beyond the immediate benefits to the mentees involved in this project, we think there may be longer-term benefits for the wider EA community should this initiative be successful: a more diverse and retentive community could lead to a stronger talent pool and greater long-term impact from the movement overall. As with our initial grant, we hope to gain information on what does and does not work and whether mentorship schemes, in general, could be an effective tool for movement building.
gieffektivt.no - $30k
gieffektivt.no is a donation portal based in Norway that fundraises for GiveWell’s top charities (“gi effektivt” means “give effectively”). The project promotes the idea of donating effectively and makes donations easier by lowering transaction costs and offering tax refunds to Norwegian donors.
Categories: Capital-leverage, early-stage
This is an early-stage grant to a project that has achieved promising initial results. As with all early-stage grants, we expect there to be high value of information in testing this opportunity and potentially high upside if this project works out. In general, we expect the experimental value of the early-stage grants we make to be greater than the direct impact of these grants.
gieffektivt.no has moved ~$1m to GiveWell-recommended charities to date, while being run entirely by volunteers (with some administrative support from EA Norway). ~$400,000 of this total was raised in 2019, representing 62% growth year-on-year. Our grant will partially fund gieffektivt.no to hire a project lead to work on the platform full-time.
gieffektivt.no estimates that ~30% of the funds they have raised to date would not have been raised without the platform, and that this number is increasing as they grow and reach new donor groups. This is based on donor surveys, which found that a significant proportion of donors were not already browsing EA sources of information and guidance. We think that these initial results seem promising enough to fund the project through to the next stage.
If their current growth continues, we might expect them to direct around $1m of counterfactually adjusted funds towards GiveWell top charities; their costs for that period are likely to be on the order of $100,000.
With a full-time project lead in place, gieffektivt.no plans to:
Test and improve growth and outreach methods, including campaign improvements, paid advertising, business outreach through talks and events, targeted outreach towards high-net-worth individuals, and improved relationship management with their most valuable donors.
Optimize their work through donor analysis to better understand their donor base and motivations, improvements to the platform system and website, and improved coordination and recruitment of their volunteers.
gieffektivt.no has a few candidates lined up for the full-time role who have been involved with the project previously. They also plan to have an open application process to review other candidates. The grant will go towards funding part of the successful candidate’s first year of salary (the remainder is expected to be made up with other grants and private donations), as well as some technical development and general expenses.
RC Forward - $15k
RC Forward is a donation platform through which Canadians can make tax-advantaged donations to high-impact charities located in and outside of Canada. RC Forward is a project of Rethink Charity. This grant is to fund a new website and improved customer relationship management infrastructure.
Categories: Capital-leverage, early-stage
We made a grant to RC Forward in July 2019, and the core reasoning behind this grant still stands (see the payout report here).
RC Forward appears to fill a valuable niche for Canadian donors. The platform has two main benefits: (1) their recommended charities receive more money from the same size of donation due to tax advantages, and (2) some donors may choose to donate when they otherwise wouldn’t have, or would have donated less.
We expect that the majority of RC Forward’s impact comes from the latter benefit. While the counterfactual impact here is challenging to measure, we are encouraged by two observations:
RC Forward has told us of one donor who made a significantly higher donation than originally planned due to the tax advantages offered by the platform. Given RC Forward’s small budget (~$200,000 for 2020), even 1-2 donations like this would likely be sufficient to give RC Forward a good multiplier.
In an independent evaluation, Rethink Priorities estimated that every $1 donated to RC Forward’s operations in 2019 resulted in $6 donated to high-impact charities that would otherwise not have been given. The underlying assumptions used to reach this estimate were based on existing literature on donation fees, tax incentives, and platforms’ ease of use (2019 analysis here and the assumptions used here).
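Combining the two figures quoted above gives a simple back-of-the-envelope check. This is only a sketch using the stated ~$200,000 budget and Rethink Priorities' estimated 6:1 ratio, not an estimate from RC Forward itself:

```python
# Back-of-the-envelope multiplier check using figures quoted above:
# ~$200,000 operating budget and an estimated $6 of counterfactual
# donations per $1 of operating costs (Rethink Priorities' 2019 estimate).

budget = 200_000   # approximate 2020 operating budget (USD)
multiplier = 6     # counterfactual dollars moved per dollar of costs

implied_counterfactual_donations = budget * multiplier
print(f"${implied_counterfactual_donations:,}")  # $1,200,000
```

If the 2019 ratio held, a budget of that size would imply on the order of $1.2m in counterfactual donations, which is broadly consistent with the money-moved figures reported below.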
In 2019, RC Forward moved ~$1.5m to their recommended charities. Since 2017, they have moved a total of ~$5m. Money moved in 2019 was lower than in 2018 due to a large one-off donation made in 2018. In general, we do expect much of RC Forward’s impact to come from donations of this type, so we are keen to see if they can continue to achieve strong counterfactual donations year-on-year.
We think that RC Forward provides a valuable service for Canadian donors, and we would like to ensure they have sufficient budget to continue their core work. In early 2020, RC Forward began charging a 4% fee on donations in order to cover most of their operating costs. This grant will top up RC Forward’s budget; the expected use is to improve the website and develop an internal CRM system.
EA Netherlands (Lisa Gotoh and Jan-Willem van Putten) - workshops for professionals involved in policymaking - $13k
This is a one-off grant to develop a series of workshops on implementing EA principles in policymaking for policy professionals in the Netherlands (initially focusing on policy officers, with the option to expand the target audience to politicians and to organizations that work with the government).
If the initial workshops are successful, EA Netherlands intends to start charging attendees for the workshops in order to fund the project on an ongoing basis. If successful, the workshops might also be translated into English and shared with other effective altruism organizations.
Categories: Policy, early-stage
As with all early-stage grants, we expect there to be high value of information in testing this opportunity and potentially high upside if this project works out. In general, we expect the experimental value of the early-stage grants we make to be greater than the direct impact of these grants.
The project was initiated by a former civil servant at the Dutch Ministry of Foreign Affairs and is now supported by EA Netherlands. The project was launched following positive feedback from ministry employees who attended an EA workshop run by the project initiator – there were requests from the participants for a follow-up workshop on how to implement EA principles in policymaking.
The initial workshops will cover three main themes: investments in foreign development, safe development and use of artificial intelligence, and improving decision-making under deep uncertainty. The workshops will focus on raising awareness and offering tools for long-term policy planning, data-driven approaches, and policy research and evaluation.
We anticipate that the outcomes of this project will be challenging to measure, but we were willing to take a bet given the small grant size and the potential upside if the project is successful. We expect that the main potential upside from this project is the opportunity to build stronger networks in the policy space.
Thanks for the work and the concise summaries! I’m really happy with the EA funds.
Reading this, I thought it’s unfortunate that this really valuable information isn’t communicated as well. Or does this stem from the private and/or person-affecting nature of those reservations? For example, I think I have little intuition about why some projects are considered to have some downside risk and are therefore better not funded/undertaken. Reading more about these kinds of thoughts could be useful.
(I recently joined CEA as Head of EA Funds. Responding from my own perspective, rather than the Meta Fund’s.)
As you said, it’s hard to publish critiques of organizations or the work of particular people without harming someone’s reputation or otherwise posing a risk to the careers of the people involved.
I also agree with you that it’s useful to find ways to talk about risks and reservations.
One potential solution is to talk about the issues in an anonymized, aggregate manner. I have been thinking about whether we could publish sufficiently anonymized examples of risks and reservations to give the community some examples of things we don’t fund – we expect this will make it easier to understand why we reject some applicants. And I’ve given a talk about downside risks and 80,000 Hours have published an article about them.
+1. A major factor is also that writing tastefully and responsibly about the things we are concerned about with an organisation would probably more than double or triple the size of all our write-ups. I’d expect the amount of time it took us to carefully think through those write-ups would be much higher than for the main writeup and we would be more likely to make mistakes which resulted in impact destruction.
Where a concern is necessarily part of the narrative for the decision, or it feels very important and can easily be shared with confidence, I think we have done so. But generally it’s not necessary for the argument, and we stick to the default policy.
I really appreciate your recognition of this—really positive!
“it’s hard to publish critiques of organizations or the work of particular people without harming someone’s reputation or otherwise posing a risk to the careers of the people involved. I also agree with you that it’s useful to find ways to talk about risks and reservations. One potential solution is to talk about the issues in an anonymized, aggregate manner.”
Anonymized, aggregate thoughts sound like the perfect solution, and thanks for the pointers!
Update: The current LTFF AMA elaborates on common reasons for rejecting applications to some degree.
This comment is not intended to detract from the work that WANBAM has done (it’s partly premised on the assumption that their mentoring work has likely been valuable to the individual people receiving mentorship): what would your views be about funding an EA mentoring program that was open to male EAs?
The case for such an initiative being extremely valuable seems very strong on the face of it. This is based on the assumption that mentoring is very valuable to individuals (which may have collective benefits if it makes them more impactful) and that there are many men who would benefit from mentoring but can’t currently access it. Both of those assumptions seem uncontroversially true. I would not be surprised if extending mentoring to more EAs paid for itself several times over, and the Meta Fund does not seem particularly funding constrained. Adjudicating how this compares to other initiatives would depend on more controversial questions, especially if mentoring time is a scarce resource which can only be allocated to a somewhat fixed number of individuals, but it seems worth reflecting about.
One reason why this seems worth discussing explicitly is that I think that many people would be afraid to pitch a mentoring scheme that was open to men given that WANBAM exists (as I am somewhat afraid to make this comment) in case anyone infers any nefarious motivation behind it.
FWIW, I find this very surprising, and like Denise personally have the opposite intuition.
(What I would be hesitant to do—but not because I’m afraid but because I think it’s a bad idea—is to pitch a mentoring scheme that explicitly emphasizes or discusses at length that it’s open to men, or any other audience which is normally included and would be odd to single out.)
In general, the default for most things is that they’re open to men, and I struggle to think of examples where the mere existence of an opportunity with this property has been controversial. (This is different from suggesting that specific existing opportunities should be open to men, or bringing up this topic in contexts where people are trying to discuss issues specific to other audiences. These can be controversial, but I think often for good and very mundane reasons.)
It’s also worth noting that a lot of mentoring happens outside of explicitly designed mentorship schemes. For example, as part of my job, I’m currently mentoring or advising six people, five of whom happen to be male. And personally I’ve e.g. benefitted from countless informal conversations about my career.
In fact, hopefully any kind of work together with more experienced people includes aspects of mentorship. Mentoring seems such a ubiquitous aspect of work relationships that it makes a lot of sense to me that specific mentorship schemes will tend to focus on gaps in the existing landscapes, e.g. aspects of mentorship not usually provided in the workplace or mentorship on issues specific to certain audiences.
Specific mentorship schemes thus only represent a tiny fraction of the total mentoring that’s happening. As a consequence, I think the fact that some or all of them are only open to specific audiences or focus on specific kinds of mentorship is poor evidence for imbalances in the mentorship landscape at large. (Except perhaps indirectly, i.e. the fact that someone thought it’s a good idea to start a scheme focused on X suggests there was a gap in X.)
It is true that most things are open to men, in the sense that (at least in the west) most careers, associations and organisations are open to both men and women. But it seems definitely the case that it is much more common to exclude men from something than to exclude women. So if your principle against emphasizing openness to widely-included groups were commonly held, it would actually oppose the existence of an EA mentoring group specifically for women.
Consider some examples from high status organizations:
* At Harvard there are 21 non-sport clubs dedicated to women—they get a special section on the website. In contrast, the only such club I can think of for men was the Black Man Forum.
* Goldman has a women’s network, but no men’s network.
* McKinsey has a women’s network, but no men’s network.
* The Democratic Party Platform has multiple sections dedicated to women, but none to men. The Republican Party Platform is not really organized into sections, but a similar principle applies at the content level, to a lesser degree.
* The Department of Labor has a women’s bureau, but no men’s bureau.
* Girls are allowed to join Scouts now, but boys are not allowed to join Guides.
This is, I think, basically because advocating for men in general is viewed as very low status, whereas advocating for women in general is high status. Consider the differing levels of respect in which Men’s Rights Advocates are held vs Feminists. Indeed, Robin Hanson, who has been very influential on many EA topics, was recently deplatformed from an EA group, after consultation with CEA, because of a smear campaign resulting from his advocacy with regard to male-affecting suffering. Even if this was the right decision, I think it is clear that he would not have been treated so had he instead been raising awareness of female suffering.
In light of this, I think the grandparent’s caution makes perfect sense: given there is already a women’s group, pitching a group that was open to men would only benefit men, and this sort of advocacy is viewed at best as cringe-worthy and low status, and at worst as a cancelable offense.
It is also quite possible that a more inclusive mentoring group might undermine the women’s mentoring one. Consider the case of women-only colleges. In the old days, Bryn Mawr had extremely high-quality students, because the top women had few alternatives; but since they gained the option to go to Harvard, Bryn Mawr has declined dramatically. A similar thing might happen here: if there were a universal mentoring group that gave women access to both male and female mentors, why would they choose the segregated group that restricted them to a subset of mentors?
Thanks for the pushback. I think my above comment was in parts quite terse, and in particular the “odd” in “would be odd to single out” does a lot of work.
So yes, this agrees with my impression that, in a reference class of explicit formalized groups similar to those you mentioned, it’s more common for men to be excluded than for women. The landscape is too diverse to make confident claims about all of it, but I think in most cases I’d basically think it isn’t odd to explicitly single out women as a target audience, while it would be odd to explicitly single out men.
I suspect it would require a longer conversation to hash out what determines my assessments of ‘oddness’ and how appropriate they are relative to various goals one might have. Very briefly, some inputs are whether there was a history of different treatment of some audience, whether that audience still faces specific obstacles, has specific experiences or specific needs, and whether there are imbalances in existing informal groups (e.g. similar to the above point on mentoring being ubiquitous surely a lot of informal networking happens at McKinsey).
I think this kind of reasoning is fairly standard and also explains many instances of target audience restriction and specialization other than the ones we’ve been discussing here. For example, consider the Veterans Administration in the US or Alcoholics Anonymous.
I think I don’t want to go into much more depth here, partly because it would be a lot of work, partly because I think it would be a quite wide-ranging discussion that would be off-topic here (and possibly the EA Forum in general). I appreciate this may be frustrating, and if you think it would be important or very helpful to you to understand my views in more detail I’d be happy to have a conversation elsewhere (e.g. send me a PM and we can find a time to call).
FWIW, while I suspect we have a lot of underlying disagreements in this area, I’ve appreciated your pushback against orthodox liberal views in other discussions on this forum, and I’m sorry that your comment here was downvoted.
Small point that’s not central to your argument:
I had actually also asked WANBAM at some point whether they considered adding male mentors as well but for different reasons.
I think at least some women would still prefer female mentors. Anecdotally, I have often found that it’s easier for other women to relate to some of my work-related struggles and that it’s generally easier for me to discuss those struggles with women. This is definitely not true in every case, but the hit rate (of connections where talking about work struggles works really well) among women is much higher than among men, and I expect this to be true for many other women as well.
That makes perfect sense to me. But a co-ed mentoring group would presumably be able to offer female mentors to those who wanted them, leaving it equally good for those who preferred women and superior for those who were open-minded or preferred men. I guess some women might be too shy to specify “and I would like a woman” in a mixed group, so having WANBAM allows them to satisfy their preference more discreetly.
In short: yes, we are open to funding other mentorship programmes, including ones open to men.
I would be pretty sad if people felt less motivated to start a mentorship programme because we already funded another. I am hoping for the opposite effect. I agree that mentorship is very valuable.
My intuition is that people take our willingness to fund a project with one target audience as positive evidence that we would fund a similar project with a different target audience, since it provides proof of concept that we are willing to fund such projects in principle. For example, we have funded fundraising and tax-deductibility initiatives in several different countries so far, and we keep seeing applications for them.
If someone wants to start a mentorship programme with a different target audience to WANBAM, I am keen for them to apply to the Meta Fund.
I agree with Denise. Although it’s worth noting that our bar for a mentorship program worth funding does have to be quite high.
I am not the Meta Fund, but I’d be excited to see a variety of quality mentoring schemes in EA with different goals (just as I’d be excited to see a variety of career coaching organisations and charity evaluators).
Hi, I am the CEO of WANBAM. I would be delighted to welcome and support emerging EA mentorship programs with lessons learnt and advice. You can reach me at eamentorshipprogram@gmail.com. One of the things I love about WANBAM is we are experimenting with what works (and doesn’t)! I hope as we dial it in we will add value to emerging projects with lots of different communities and purposes. Be in touch! :)
Thank you for the excellent write up. And thank you for all the good work you do.
My gut reaction to this post is that the Future of Humanity Foundation feels like the kind of project I’d expect to come under the Long Term Future Fund rather than the EA Meta Fund.
I would be curious to hear more about how the meta fund decides on projects that are meta in scope but cause area specific. (How do such grants align with donors’ expectations? Do the grantmakers have expertise in domain specific issues? Is there an attempt to balance across cause areas? Etc)
I think FHF can be argued to fall within the scope of either fund. I’m sure you saw this part of the above report:
I perceive this grant to be worldview-specific rather than cause-area-specific: there are several longtermist cause areas (AI safety, pandemic prevention, etc.) that FHI contributes to. Other grants (e.g., Happier Lives Institute, Charity Entrepreneurship) are also based on particular worldviews or even cause areas, so this is not unprecedented.
In general, I think it makes sense for the EA Infrastructure Fund (EAIF) to support both cause-neutral and cause-specific projects, as long as they have a meta component and the EAIF fund managers are well-placed to evaluate the projects.
I personally think it’s actually pretty unclear what the EAIF’s funding threshold and benchmark should be. The GHDF aims to beat GiveWell top charities, the AWF should match or beat OP’s animal welfare grantmaking, and the LTFF aims to beat OP’s last longtermist dollar, but there’s no straightforward benchmark for the EAIF given that it’s kind of cause-agnostic. I plan to work with the fund managers to define this more clearly going forward. Let me know if you have any ideas.
Thanks for this write-up! Sounds like a bunch of cool projects.
Do you mean that over $19m has been given to high-impact charities FP recommends by people FP talked to, but $11m might have been given to the same places anyway? That would seem to suggest a surprisingly high proportion of these people would’ve given anyway, and to the same places.
Or do you mean that the total amount given by anyone to all charities FP recommends is (presumably) around $19-20m? That would seem a surprisingly small total, given the number of charities FP recommends and that this is over a span of 5 years. And then that’d imply FP influenced about 40% of that total, which seems a surprisingly high proportion.
Also, do you actually just mean “high-impact charities”, or high-impact organisations/recipients more broadly? I ask because I believe some of the orgs FP recommends aren’t actually charities (e.g., in the existential risk area), though I could be wrong about that.
FP aren’t a straightforward advisory group; they have a pledge and a community, so the $19m is the total given to high-impact charities within their pledger community. FP’s research team have attempted to estimate which of those donations happened as a result of FP’s advisory and marketing work. That is hard and, as with any self-reporting, open to becoming a KPI that drifts and ends up misreported. My current view of the FP individuals who did this estimation work, though, is that they have high intellectual honesty and thoroughness, and that they are aware of their own misincentives; when I spot-checked a number of their figures in 2018-19, they were good estimates, perhaps even on the conservative side.
Ok, so it’s that the people who’ve taken FP’s pledge have given an estimated >$19m over 5 years to high-impact charities (which includes e.g. charities that GiveWell recommends but FP doesn’t recommend in its cause area reports), and FP estimates it influenced whether or where ~$8m of that was donated?
That makes more sense than either of the things I guessed the sentence meant. Thanks for clarifying :)
“High-impact,” for simplicity (they have a very large total number of grants), is defined roughly as the status quo set of groups recommended by GiveWell, funded by Open Phil, ACE charities, etc. FP manage their own list, and we agree with more than 90% of what is on it. None of the largest grants on the list are to groups we feel conflicted about.
In an ideal world we would of course evaluate every group their pledgers have counterfactually funded, but that’s not really tractable. And we try to use their quantitative outcomes as only one of several signals of how well they’re doing (it’s very tempting to fall into a rabbit hole of data analysis for a group with such clear and measurable first-order outcomes).
I think it’s the former of the two. Regarding the last paragraph, I think this refers to high-impact recipients (I think mostly or exclusively charities). But someone from the Meta Fund could answer these questions in more detail.