LTFF and EAIF are unusually funding-constrained right now
Including only money that has already landed in our bank account and extremely credible donor promises of funding, LTFF has raised ~$1.1M and EAIF has raised ~$500K. After Open Phil matching, this means LTFF now has ~$3.3M in additional funding and EAIF has ~$1.5M in additional funding.
From my (Linch’s) perspective, this means that neither LTFF nor EAIF is still heavily funding-constrained for the time period we wanted to raise money for (the next ~6 months). However, both funds remain somewhat funding-constrained and can productively make good grants with additional funding.
See this comment for more details.
EA Funds aims to empower thoughtful individuals and small groups to carry out altruistically impactful projects—in particular, enabling and accelerating small/medium-sized projects (with grants <$300K). We are looking to increase our level of independence from other actors within the EA and longtermist funding landscape and are seeking to raise ~$2.7M for the Long-Term Future Fund and ~$1.7M for the EA Infrastructure Fund (~$4.4M total) over the next six months.
Why donate to EA Funds? EA Funds is the largest funder of small projects in the longtermist and EA infrastructure spaces, and has had a solid operational track record of giving out hundreds of high-quality grants a year to individuals and small projects. We believe that we’re well-placed to fill the role of a significant independent grantmaker, because of a combination of our track record, our historical role in this position, and the quality of our fund managers.
Why now? We think now is an unusually good time to donate to us, as a) we have an unexpectedly large funding shortage, b) there are great projects on the margin that we can’t currently fund, and c) more stabilized funding now can give us time to try to find large individual and institutional donors to cover future funding needs.
Importantly, Open Philanthropy is no longer providing a guaranteed amount of funding to us, and will instead move to a (temporary) model of matching our funds 2:1 ($2 from them for every $1 from you, up to $3.5M from them per fund).
Some relevant quotes from fund managers:
I think the next $1.3M in donations to the LTFF (430k pre-matching) are among the best historical grant opportunities in the time that I have been active as a grantmaker. If you are undecided between donating to us right now vs. December, my sense is now is substantially better, since I expect more and larger funders to step in by then, while we have a substantial number of time-sensitive opportunities right now that will likely go unfunded.
I myself have a bunch of reservations about the LTFF and am unsure about its future trajectory, and so haven’t been fundraising publicly, and I am honestly unsure about the value of more than ~$2M, but my sense is that we have a bunch of grants in the pipeline right now that are blocked on lack of funding that I can evaluate pretty directly, and that those seem like quite solid funding opportunities to me (some of this is caused by a large number of participants of the SERI MATS program applying for funding to continue the research they started during the program, and those applications are both highly time-sensitive and of higher-than-usual quality).
“My main takeaway from [evaluating a batch of AI safety applications on LTFF] is [LTFF] could sure use an extra $2-3m in funding, I want to fund like, 1/3-1/2 of the projects I looked at.” (At the current level of funding, we’re on track to fund a much lower proportion).
Asya Bergal’s Reflections on my time on the Long-Term Future Fund
We think there is a significant shortage of independent funders in the current longtermist and EA infrastructure landscape, resulting in fewer outstanding projects receiving funding than is good for the world. Currently, the primary source of funding for these projects is Open Philanthropy, and whilst we share a lot of common ground, we think we add value in the following ways:
Increasing the total grantmaking capacity within key cause areas.
Causing great projects to counterfactually happen in the world, or saving time and effort for people doing great projects who would otherwise spend significant time fundraising or waiting for grants to come in.
Supporting a set of worldviews that we find plausible and that are not currently well represented among grantmakers (though we have substantial overlap with Open Philanthropy’s worldview and there is a range of views on how much we should be directly optimizing for diversification away from their perspectives).
Emphasizing contact with reality: most of our grantmakers spend most of their time trying to directly solve problems of importance within their cause area, rather than engaging in “meta” activities like grantmaking. We think this is important as grantmaking often has very poor feedback loops (particularly longtermist grantmaking).
Providing early-stage funding to allow applicants to test their fit for the work and “get ready” to seek funding from other funders that specialize in larger grant sizes.
Improving the epistemic environment within EA by making it easier for smaller projects to disagree with Open Philanthropy without worrying that this will significantly reduce their chance of being funded in the future.
Helping to identify harmful projects whilst being aware of factors such as the unilateralist curse and information cascades.
Increasing the resilience, robustness and diversity of funders within EA and longtermism.
Alongside the above, EA Funds has ambitions to pursue new ways of generating value by:
Creating an expert-led active grant-making program to create counterfactual impactful projects (starting with longtermist information security).
Modeling and shaping community norms of transparency, integrity, and criticism to improve the epistemic environment within EA and associated communities.
We are looking to raise ~$4.4M from the general public to support our work over the next 6 months:
~$2.7M for the Long-Term Future Fund.
This is ~$2M above our expected ~$720K in donations over the next 6 months.
~$1.7m for the EA Infrastructure Fund.
This is ~$1.3M above our expected ~$400K in donations over the next 6 months.
This will be matched by Open Phil at a 2:1 rate ($2 from Open Phil per $1 donated to a fund) with a ceiling of a $3.5m contribution from Open Phil (per fund). You can read more about the matching here.
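To make the matching mechanics concrete, here is a minimal sketch in Python of how a donation translates into total funding under 2:1 matching with a per-fund cap (the function name and structure are ours, for illustration only — not an official EA Funds tool):

```python
def total_funding(donations: float, rate: float = 2.0, cap: float = 3_500_000) -> float:
    """Total funds a fund receives: the donations themselves plus
    Open Phil's match of `rate` * donations, capped at `cap` per fund."""
    match = min(rate * donations, cap)
    return donations + match

# $1M in donations draws a $2M match, for $3M total.
assert total_funding(1_000_000) == 3_000_000
# At $1.75M in donations the match hits the $3.5M cap; beyond that,
# each extra dollar of donations adds only one dollar of total funding.
assert total_funding(2_000_000) == 5_500_000
```

The cap is what makes donating sooner more valuable at the margin: dollars donated before the cap is reached are effectively tripled.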
The EAIF and LTFF have received very generous donations from many individuals in the EA community. However, donations to the EAIF and LTFF have recently been quite low, especially relative to the quality and quantity of applications we’ve had in the last year. While much of this is likely due to the FTX crash and subsequently increased funding gaps of other longtermist organizations, our guess is that this is partially due to tech stocks and crypto doing poorly in the last year (though we hope that recent market trends will bring back some donors).
Calculation for LTFF funding gap
The LTFF has an estimated ideal disbursement rate of $1M/month, based on our post-November 2022 funding bar, which Asya estimated by looking at the funding gaps and marginal resources within the longtermist ecosystem overall. This is $6M over the next 6 months.
I also think LTFF donors should pay $200k over the next 6 months ($400k annualized) as their “fair share” of EA Funds operational costs. So in total, LTFF would like to spend $6.2M over the next 6 months.
Caleb estimated ~$700k in expected donations from individuals by default in the next 6 months, based solely on extrapolation from past trends. With Open Phil donation matching, this comes out to a total of $2.1M in expected incoming funds, or a shortfall of $4.1M.
To cover the remaining $4.1M, we would like individual donors to contribute an additional $2M, with Open Phil providing $2.1M of matching for the first $1.05M donated (at which point Open Phil’s $3.5M matching cap is reached).
To get a sense of what projects your marginal dollars can buy, you might find it helpful to look at the $5M tier of the LTFF Funding Thresholds Post.
Calculation for EAIF funding gap
The EAIF has an estimated ideal disbursement rate of $800k/month, based on the proportion of our historical spend rate that we believe is above Open Phil’s bar for EA community building projects (though note that this was based on fairly brief input from Open Phil, and I didn’t check with them about whether they agree with this claim). This is $4.8M over the next 6 months.
I also think EAIF donors should pay $200k over the next 6 months ($400k annualized) as their “fair share” of EA Funds operational costs. So in total, EAIF would like to spend $5M over the next 6 months.
Caleb estimated $400k in expected donations from individuals by default in the next 6 months, based solely on extrapolation from past trends. With Open Phil donation matching, this comes out to a total of $1.2M in expected incoming funds, or a shortfall of $3.8M.
To cover the remaining $3.8M, we would like individual donors to contribute an additional ~$1.3M, with Open Phil providing ~$2.5M in donation matching.
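The LTFF and EAIF gap calculations above follow the same arithmetic, which can be sketched as follows (again an illustrative sketch with figures taken from the calculations above; the function is ours, not an official tool):

```python
def funding_gap(ideal_monthly: float, ops_6mo: float, expected_donations: float,
                rate: float = 2.0, cap: float = 3_500_000) -> float:
    """Six-month shortfall: desired spend minus expected incoming funds
    (expected donations plus Open Phil's capped 2:1 match on them)."""
    desired_spend = 6 * ideal_monthly + ops_6mo
    expected_match = min(rate * expected_donations, cap)
    return desired_spend - (expected_donations + expected_match)

# LTFF: $1M/month ideal disbursement + $200k ops share, ~$700k expected donations.
assert funding_gap(1_000_000, 200_000, 700_000) == 4_100_000
# EAIF: $800k/month ideal disbursement + $200k ops share, ~$400k expected donations.
assert funding_gap(800_000, 200_000, 400_000) == 3_800_000
```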
Potential change for operational expenses payment
Going forwards, we would also like to move towards a model where donors directly pay for our operational expenses (currently we fundraise for operational expenses separately, so 100% of donations from public donors goes to our grantees). We believe that the newer model is more transparent, as it lets all donors more clearly see the true costs and cost-benefit ratio for their donations. However, making the change is still pending internal discussions, community feedback, and logistical details. We will make a separate announcement if and when we switch to a model where a percentage of public donations go to cover our operational expenses. See Appendix A for a calculation of operational expenses.
Why give to EA Funds?
We think EA Funds is well-positioned to be a significant independent grantmaker for the following reasons.
We have knowledgeable part-time fund managers who do direct work in their day jobs: we have built several grantmaking teams with a broad range of expertise. These managers usually dedicate the majority of their time to hands-on efforts addressing critical issues. We believe this direct experience improves their judgment as grantmakers, enabling them to identify important projects with high accuracy.
Specialization in early-stage grants: we made over 300 grants of under $300k in 2022. To our knowledge, that’s more grants of this size than any other EA-associated funder.
We are the largest open application funding source (that we are aware of) within our cause areas. Our application form is always open, anyone can apply, and grantees can apply for a wide variety of projects relevant to our funds’ purposes (as opposed to e.g. needing to cater to narrow requests for proposals). We believe this is critical to us having access to grant opportunities that other funders do not have access to, allowing us to rely on formal channels rather than informal networks.
Our operational track record. In 2022, EA Funds paid out ~$35M across its four Funds, with $12M to the Long-Term Future Fund, $13M to the EA Infrastructure Fund, $6.4M to the Animal Welfare Fund, and $4.8M to the Global Health and Development Fund. This requires (among other things) clearing nontrivial logistical hurdles in following nonprofit law across multiple countries, consistent operational capacity, and a careful eye towards downside risk mitigation.
We believe our grants are highly cost-effective. Our current best guess is that we have successfully identified and given out grants of similar ex-ante quality to (e.g.) Open Phil’s AI safety and community building grants, some of which Open Phil would counterfactually not have funded. This gives donors an opportunity to provide considerable value.
We are investigating new value streams. We would like to pursue ‘DARPA-style’ active grantmaking in priority areas (starting with information security). We are also actively considering setting up an AI Safety-specific fund, encouraging donors interested in AI safety (but not EA or longtermism) to donate to projects that mitigate large-scale globally catastrophic AI risks.
We are one of the main public longtermist donation options available for individual donors to support. We believe that we are a relatively transparent funder, and we are currently thinking about how we can increase our transparency further whilst moving more quickly and maintaining our current standard of decision-making.
We are primarily looking for funding to support the Long-Term Future Fund and the EA Infrastructure Fund’s grantmaking.
The Long-Term Future Fund is primarily focused on reducing catastrophic risks from advanced artificial intelligence and biotechnology, as well as building and equipping a community of people focused on safeguarding humanity’s future potential. The EA Infrastructure Fund is focused on increasing the impact of projects that use the principles of effective altruism, in particular amplifying the efforts of people who aim to do an ambitious amount of good from an impartial welfarist and scope-sensitive perspective. We have included some examples of grants each fund has made in the highlighted grants section.
Our Fund Managers
We lean heavily on the experience and judgement of our fund managers. We have around five fund managers on each fund at any given time. Our current fund managers include:
Linchuan Zhang (LTFF): Linchuan (Linch) Zhang is a Senior Researcher at Rethink Priorities working on existential security research. Before joining RP, he worked on time-sensitive forecasting projects around COVID-19. Previously, he programmed for Impossible Foods and Google and has led several EA local groups.
Oliver Habryka (LTFF): Oliver runs Lightcone Infrastructure, whose main product is LessWrong. LessWrong has significantly influenced conversations around rationality and AGI risk, and its community is often credited with having realized the importance of topics such as AGI (and AGI risk), COVID-19, existential risk, and crypto much earlier than other comparable communities.
Peter Wildeford (EAIF): co-executive director and co-founder of Rethink Priorities, a think tank dedicated to figuring out the best ways to make the world a better place.
Guest Fund Managers
Daniel Eth (LTFF): Daniel’s research has spanned several areas relevant to longtermism, and he’s currently focused primarily on AI governance. He was previously a Senior Research Scholar at the Future of Humanity Institute, and he has a PhD in Materials Science and Engineering from UCLA. He is currently self-employed.
Lauro Langosco (LTFF): Lauro is a PhD student with David Krueger at the University of Cambridge. His work focuses broadly on AI safety, in particular on demonstrations of alignment failures, forecasting AI capabilities, and scalable AI oversight.
Lawrence Chan (LTFF): Lawrence is a researcher at ARC Evals, working on safety standards for AI companies. Before joining ARC Evals, he worked at Redwood Research and as a PhD Student at the Center for Human Compatible AI at UC Berkeley.
Thomas Larsen (LTFF): Thomas was an alignment research contractor at MIRI, and he is currently running the Center for AI Policy, where he works on AI governance research and advocacy.
Clara Collier (LTFF): Clara is the managing editor of Asterisk, a quarterly journal focused on communicating insights on important issues. Before that, she worked as an independent researcher on existential risks. She has a Master’s in Modern Languages from Oxford.
Michael Aird (EAIF): Michael Aird is a Senior Research Manager in Rethink Priorities’ AI Governance and Strategy team. He also serves as an advisor to organizations such as Training for Good and is an affiliate of the Centre for the Governance of AI. His prior work includes positions at the Center on Long-Term Risk and the Future of Humanity Institute.
Huw Thomas (EAIF): Huw is currently working part-time on various projects (including a contractor role at 80,000 Hours). Prior to this, he worked as a media associate at Longview Philanthropy and a groups associate at the Centre for Effective Altruism, and was a recipient of a CEA Community Building Grant for his work at Effective Altruism Oxford.
If you have more questions, feel free to leave a comment here. Caleb Parikh and the fund managers are also happy to talk to donors potentially willing to give >$30k. Linch Zhang, in particular, has volunteered himself to talk about the LTFF.
EA Funds has identified a variety of high-impact projects, at least some of which we think are unlikely to have been funded elsewhere. (However, for any specific grant listed below, we think there’s a fairly high probability they’d otherwise be funded in some form or another; figuring out counterfactuals is often hard).
From the Long-Term Future Fund:
David Krueger - $200,000
Computing resources and researcher stipends at a new deep learning + AI alignment research group at the University of Cambridge.
Alignment Research Center - $72,000
A research & networking retreat for winners of the Eliciting Latent Knowledge contest with the aim of fostering promising research collaborations between junior researchers.
SERI MATS program - $316,000
8-week scholars program to pair promising alignment researchers with renowned mentors. This program has now grown into a more established program producing multiple people working full-time on alignment in established research organizations (with a smaller number of people pursuing independent research or starting new organizations).
Manifold Markets - $200,000
Stipend and expenses for 4 months for 3 FTE to build a forecasting platform made available to the public based on user-created play-money prediction markets.
Daniel Filan - $23,544
We recommended a grant of $23,544 to pay Daniel Filan for his time making 12 additional episodes of the AI X-risk Research Podcast (AXRP), as well as the costs of hosting, editing, and transcription.
From the EA Infrastructure Fund:
Shauna Kravec & Nova DasSarma - $50,000
Compute infrastructure and dedicated support for AI safety researchers to run technical AI experiments. This later became Hofvarpnir Studios, which provided compute for Jacob Steinhardt’s lab at UC Berkeley and the Center for Human-Compatible Artificial Intelligence (CHAI).
Finlay Moorhouse and Luca Righetti - $38,200
Ongoing support for “Hear This Idea”, a podcast showcasing new thinking in effective altruism.
Laura Gonzalez Salmerón, Sandra Malagón - $43,308
12-month stipend to coordinate and grow the EA Spanish speakers community and its projects.
Czech Association for Effective Altruism - $8,300
Expenses and stipend to create a short Czech book (~130 pgs) and brochure (~20 pgs) with a good introduction to EA in digital and print formats.
Planned actions over the next six months
To achieve our goals of empowering thoughtful people to pursue impactful projects, we’ll attempt to do the following:
Asya Bergal will step down as chair of LTFF (Max Daniel has already stepped down as chair of the EAIF). Max and Asya both work for Open Phil, and we want to increase our separation from Open Phil. 
Open Phil also wanted to reduce entanglements between the two organizations, in part to mitigate downside reputational risks.
We are looking to find new fund chairs for both LTFF and EAIF.
We plan to onboard more fund managers to grow each fund substantially (aiming to double the staffing of each fund).
In recent months, LTFF has onboarded Lauro Langosco and Lawrence Chan who will primarily focus on technical alignment grantmaking, as well as Clara Collier for her expertise in communications and general longtermism. The EAIF is in the process of onboarding new fund managers.
Open Phil has agreed to give us a 2:1 match for up to $7M total (up to $3.5M to each of EAIF and LTFF) for a 6-month period. While our ultimate goal is to develop our own robust funding base, in 2022, Open Philanthropy provided 40% of the funding for the Long-Term Future Fund and 84% for the EA Infrastructure Fund. We see donation matching as a realistic intermediate step that enables us to pursue more intellectual independence.
This model replaces fixed grants from Open Philanthropy, which reduces the fungibility of your donations: previously, an extra $1 raised by EA Funds could result in a $1 reduction in Open Philanthropy’s grants to us, diverting those funds to their other projects. The newer approach allows donors to give to EA Funds and support the specific value proposition that we, as opposed to Open Philanthropy, present.
We are considering hiring or contracting out more non-grantmaking duties (e.g. website, project management, fundraising, communications) at EA Funds. Right now, Caleb is the only full-time employee of EA Funds, and plausibly having 0.5-1.5 more FTEs would both help existing projects go more smoothly and unlock ambitious new opportunities.
We are working with external investigators to do retroactive evaluations of past EAIF and LTFF grants, with the hopes that we can then have a clearer picture of a) how well the impact of our past grants compares to e.g., Open Phil’s, b) which of our broader categories of historical grants have been the most impactful, and c) other qualitative insights to help us improve further.
We aim to improve the operations of our passive grantmaking (funding of open grant applications) program, with a focus on improving the grantee experience by providing more support to grantees and getting back to grantees much more quickly.
We are trying to reconceptualize and reframe the value proposition and strategic direction of EAIF in the coming months. While much of this will be contingent on the vision of the incoming fund chair, we’d like EAIF to have a more coherent and targeted vision, strategy, and value proposition for donors going forwards.
We plan to create a new AI Safety specific program, for donors outside of EA/Longtermism who want to decrease catastrophic risks from AI. We hope that such a program can inspire new donors to give to AI safety projects.
EA Funds is pursuing active grant-making programs, where we’ll actively seek out promising projects to fund. We’ll initially focus on Information Security field building. The current plan is for this program to initially be funded by Open Philanthropy, though if you are interested in contributing to this program in particular, please let us know.
Potential negatives to be aware of
Here are some reasons you might not want to donate to EA Funds:
Potential downside risks of LTFF or EAIF
Inability to fully screen for or prevent unilateral downside risks: EA Funds has much less control over and offers less guidance to our grantees than, e.g., the executive directors of a moderately-sized EA organization. So compared to larger organizations, we may be less able to prevent unilateral downside risks like the sharing of information hazards, or actions that pose reputational risks to effective altruism at large, or to specific EA subfields.
Centralization of funds: On the other hand, we are also implicitly asking private donors to centralize their funds in a single grantmaking entity. To the extent that you believe your counterfactual donation target is better and/or that more centralization is bad, you may wish to donate directly rather than pool your funds with other LTFF or EAIF donors.
Waste/Inefficient usage of human capital: Giving money to EA Funds rather than larger organizations implicitly subsidizes a culture and community of grantseekers who are supported by small grants. To the extent that you believe this is a less efficient usage of human capital than plausible counterfactuals for talented people (e.g. getting a job in tech, policy, or academia), you might want to shift away from EA grantmakers that give relatively small individual grants.
Note that we consider these issues to be structural and do not realistically expect resolutions to these downside risks going forwards.
Areas of improvement for the LTFF and EAIF
Historically, we’ve had the following (hopefully fixable) problems:
Slower than ideal response times: in the past year, our median response time has been around 4 weeks, with high variance; we’d like to get this down to closer to 2 weeks, with 95% of applications responded to within 4 weeks.
Limited feedback/advice given to grantees: we generally don’t give feedback to rejected applicants. We currently give some feedback to promising grantees but much less than we’d give if we had more grantmaking capacity.
Insufficient active grantmaking: We spend some time trying to improve our grantees’ projects, but we have invested fairly little in active grantmaking (actively identifying promising projects and creating/supporting them).
Missing areas of subject matter expertise: The scopes of both funds are quite expansive. This means sometimes all of the existing grantmakers lack sufficient direct technical subject matter expertise to evaluate grants in certain areas, and thus have to rely on external experts. For example, the LTFF does not currently have a technical expert in biosecurity.
For more, you can read Asya’s reflections on her time as chair of LTFF.
EAIF vs LTFF
Some donors are interested in giving to both the EAIF and LTFF and would like advice on which fund is a better fit for them.
We think that the EAIF is a better fit for donors who:
Are interested in supporting a portfolio of meta projects covering a range of plausible worldviews (both longtermist and non-longtermist).
Are interested in building EA and adjacent communities.
Believe that EA (and EA community building) has historically been very good for the world.
Believe in multiplier-effect arguments (donating $100 to an EA group could plausibly create far more than $100 in donations to high-impact charities by encouraging more people to donate).
Expect the EAIF and LTFF to have similar diminishing marginal returns curves and want to donate to the fund with lower funding. (EAIF and LTFF each receive about 1000 grant applications per year, but EAIF has less funding currently committed)
We think that the LTFF is a better fit for donors who:
Are more compelled by longtermist cause areas than by other EA cause areas.
Are particularly interested in AI safety.
Are more interested in direct work than in “meta” work with a longer chain of impact/reasoning.
Are more excited about the $5M tier of marginal LTFF grants than about what they consider to be the marginal EAIF grant.
This post was written by Caleb Parikh and Linch Zhang. Feel free to ask questions or give us feedback in the comments below.
If you are interested in donating to either LTFF or EAIF, you can do so here.
Appendix A: Operational expenses calculations and transparency.
In the last year, EA Funds has disbursed $35M and spent ~$700k in operational expenses. The vast majority of the operational expenses were spent on LTFF and EAIF, as the Global Health and Development Fund and the Animal Welfare Fund are operationally much simpler.
Historically, ~60-80% of the operational expenses are paid to EV Ops, for grant disbursement, tech, legal, other ops, etc.
The remaining 20-40% is used for:
The salary of Caleb, who leads EA Funds (~$100k/year plus benefits).
Payments for grantmakers at $60/hour, though many volunteer for free.
Contractors who work on different projects, earning between $35 and $100/hour.
I (Linch) ballparked the expected annual expenditures going forwards (assuming no cutbacks) at ~$800k. I estimated the increase due to a) inflation and b) us wanting to take on more projects, with some savings from slowing down the rate of disbursement a little. But this estimate is not exact.
Since LTFF and EAIF incur the highest expenses, I suggest donors to each fund contribute around $400k yearly, or $200k every six months.
As for where we might cut or increase spending:
Reducing EV Ops costs would be challenging and may require moving EA Funds out of EV and building our own grant ops team.
Reducing Caleb’s working hours would be challenging.
I think my own hours at EA Funds are somewhat contingent on operational funding. In the last month, I’ve been spending more than half of my working hours on EA Funds (EA Funds is buying out my time at RP), mostly helping Caleb with communications and strategic direction. I would like to continue doing this until I believe EA Funds is in a good state (or we decide to discontinue or sunset the projects I’m involved in). Obviously, whether there is enough budget to pay for my time is a crux for whether I should continue here.
Assuming we can pay for my time, other plausible uses of marginal operational funding include: a) paying external investigators for extensive rather than just shallow retroactive evaluations, b) attempting to launch new programs, and c) hiring professional designers for the new infosec, AI safety, and other project websites. My personal view is that marginal spending on EA Funds’ operational expenses is quite impactful relative to other possible donations, but I understand if donors do not feel the same way and would prefer a higher percentage of donations to go directly to our grantees (currently it’s 100%, but proposed changes may move this to ~94-97%).
The Long-Term Future Fund and the EA Infrastructure Fund are part of EA Funds, which is a fiscally sponsored project of Effective Ventures Foundation (UK) (“EV UK”) and Effective Ventures Foundation USA Inc. (“EV US”). Donations to LTFF and EAIF are donations to EV US or EV UK. Effective Ventures Foundation (UK) (EV UK) is a charity in England and Wales (with registered charity number 1149828, registered company number 07962181, and is also a Netherlands registered tax-deductible entity ANBI 825776867). Effective Ventures Foundation USA Inc. (EV US) is a section 501(c)(3) organization in the USA (EIN 47-1988398). Please see important state disclosures here.
Note that our current funding bar is higher as a result of anticipated funding/liquidity shortages.
This is a pretty loose statement, partially because impact evaluation is quite hard in the fields we work in, and partially due to insufficient time investment in our evaluations. We are working with external investigators to establish better metrics and to have external retrospective evaluations. Potential cruxes for the value of our work (relative to larger entities like Open Phil) include the value of independent researchers and small projects, and the value of having a wider range of longtermist worldviews.
This is generally a mix of experienced fund managers and less experienced assistant fund managers.
Though note that the current list is out of date.
I think that it’s useful to note that I don’t expect substantive worldview shifts from making this change relative to our previous grantmaking. However, I think we will be a bit less likely to suffer from sources of error correlated with Open Phil’s.
The donations to these funds only totaled $7.4m and $10m respectively, less than the total amount of grants disbursed that year.
Open Philanthropy is also on board with the aim of wanting the funding landscape to be more independent and for funders to be able to more legibly donate in non-fungible ways.
We aim to get back to 90% of grantees within three weeks; currently, our median decision response time is 28 days.