Summary
Background
The FTX Foundation’s Future Fund publicly launched in late February. We’re a philanthropic fund that makes grants and investments to improve humanity’s long-term prospects. For information about some of the areas we’ve been funding, see our Areas of Interest page.
This is our first public update on the Future Fund’s grantmaking. The purpose of this post is to give an update on what we’ve done and what we’re learning about the funding models we’re testing. (It does not cover a range of other FTX Foundation activities.)
We’ve also published a new grants page and regrants page with our public grants so far.
Our focus on testing funding models
We are trying to learn as much as we can about how to deploy funding at scale to improve humanity’s long-term prospects. Our primary objective for 2022 is to perform bold and decisive tests of new funding models. The main funding models we have tested so far are our regranting program and our open call for applications.
In brief, these models worked as follows:
The basic idea of regranting was, “There are a lot of people who share our values and might know of great people or projects we could support that we wouldn’t know about by default. Let’s make it rewarding, simple, and fast for them to make grants. We’ll give them budgets of $100k to a few million to work with, and we’ll presumptively approve their recommendations (after screening for various risks/issues).”
The basic idea of the open call was, “Let’s tell people what we’re trying to do, what kinds of things we might be interested in funding, give them a lot of examples of projects they could launch, have an easy and fast application process, and then get the word out with a Twitter blitz.” We wrote some about the review process here.
Our staff also made grants and investments that were not part of these programs (hereafter “staff-led grantmaking”).
Grantmaking by funding model
So far we have made 262 grants and investments, totaling ~$132M. These break down as follows:
Regranting: We have onboarded >100 regrantors (with discretionary budgets) and >50 grant recommenders (without discretionary budgets). We set aside >$100M for them to use over the course of our 6-month experiment (April–October 2022). So far, regrantors have made 168 grants and investments, totaling ~$31M.
Open call: We received over 1700 applications and funded 69 (4%) of them, totaling ~$26M. (The acceptance rate for proposals focused squarely on our top priorities was much higher.)
Staff-led grantmaking: Separate from these programs, we have made 25 grants and investments otherwise sourced by our staff, totaling ~$73M.
There are also ~$25M of grants we are likely to make soon, but have some relevant aspects TBD.
Some example grants and investments
Below are some grants and investments that we find interesting and/or representative of what we are trying to fund.
Regranting
$1M investment in Manifold Markets to build a play-money prediction market platform. The platform is also experimenting with impact certificates and charity prediction markets.
$490k for ML Safety Scholars Program to fund a summer program for up to 100 students to spend 9 weeks studying machine learning, deep learning, and technical topics in AI safety.
We have funded >30 talent development and career transition grants that range from $1,450 to $175,000 depending on the duration and seniority level of the individual. Some examples include:
$42,600 to Andi Peng to support salary and compute for research on AI alignment.
$175,000 to Braden Leach to support a recent law school graduate to work on biosecurity, researching and writing at the Johns Hopkins Center for Health Security.
$37,500 to Thomas Kwa to support research on AI safety.
Open call
$1.2M for SecureBio to support the hiring of several key staff. The project is working to implement universal DNA synthesis screening, build a reliable early warning system, and coordinate the development of improved personal protective equipment and its delivery to essential workers when needed.
$300,000 to Ray Amjad to create a talent search organization which will help identify top young students around the world, and connect identified students with support and resources to work on issues relevant to improving humanity’s long-run prospects.
$250,000 to launch Apollo Academic Surveys to support Apollo’s work aggregating the views of academic experts in many different fields and making them freely available online.
$140,000 to Justin Mares to support research on the feasibility of inactivating viruses via electromagnetic radiation.
Staff-led
$15M to Longview Philanthropy to support Longview’s independent grantmaking on global priorities research, nuclear weapons policy, and other longtermist issues.
$5M to the Atlas Fellowship to support scholarships for talented and promising high school students to use towards educational opportunities and enrolling in a summer program.
$900,000 to the Effective Ideas Blog Prize, in collaboration with Longview Media, to support prizes for outstanding writing which encourages a broader public conversation around effective altruism and longtermism.
See this grants page and this regrants page for all of our public grants and investments so far, and further example grants in later sections.
Key stats
These numbers are for our grantmaking overall. (The sections below with more detail on regranting / open call / staff-led grants give the corresponding stats for each funding stream.)
Areas of interest
| Area | Count | Volume |
| --- | --- | --- |
| Total | 262 | $132M |
| Artificial Intelligence | 76 | $20M |
| Biorisk and Recovery from Catastrophe | 30 | $30M |
| Economic Growth | 9 | $7M |
| Effective Altruism | 61 | $34M |
| Empowering Exceptional People | 18 | $10M |
| Epistemic Institutions | 21 | $8M |
| Great Power Relations | 6 | $2M |
| Other | 17 | $16M |
| Research That Can Help Us Improve | 4 | $1M |
| Space Governance | 7 | <$1M |
| Values and Reflective Processes | 13 | $3M |
Grant size
| Grant size | Count | Volume |
| --- | --- | --- |
| Total | 262 | $132M |
| <$50k | 119 | $2M |
| $50k–$500k | 102 | $20M |
| ≥$500k | 41 | $109M |
Some takeaways on funding models so far
While trying out these funding models, we’ve been trying to learn how cost-effective they are, how much of our team’s time (and others’ time) is required to operate them per unit benefit, how scalable they are, and whether they produce grants and investments that we otherwise wouldn’t have known about.
Below are some of the main things we’ve learned over the last couple of months as we’ve been trying out these funding models.
Regranting: This is the model we’re currently most excited about. We think regrantors are making pretty reasonable grants, many of the grants are opportunities we wouldn’t otherwise have known about, and it doesn’t take that much of our team’s time to make the grants. We like that regrantors are getting new projects launched and bringing in new people. Some regrantors also report having found the experience empowering, and we like that it focuses them on concrete things they can do/make happen/fund.
Open call: We thought this was fine but we’re less excited about this model right now. It generated some interesting grants that we wouldn’t otherwise have been aware of, and generated some ambitious proposals. But it took a lot of our team’s time and attention and we got fewer founders to launch new projects in our areas of interest than we had hoped. Our sense is we’re able to generate >2x more value per time with our other activities, but we plan to think more in the future about whether there is a way to make this process more efficient.
Staff-led grants: Coming in, our expectation was that there would be some low-hanging fruit to pick here, the grants would be pretty good, the best ones would largely get funded anyway, the funding stream wouldn’t be massively scalable, and that the return on time from these grants would be pretty good. Our experience has generally been pretty consistent with that.
Other smaller tests: Our other decentralized approaches to generating useful ideas—including our project ideas competition, our public “recommend a grant” form, our public “recommend a prize” form, and our “expression of interest” form—have provided less useful information than we hoped. We’ve discontinued these forms for now.
Other activities
Apart from the grantmaking described above, here are some of the other things going on.
We’re very excited that Avital Balwit has recently joined the Future Fund team, and we have also extended an offer to an additional team member.
We’ve drafted early versions of prizes that we might launch.
We’ve started planning and recruiting for a funder-initiated start-up.
Priorities for the rest of the year
We will continue the regranting program’s 6-month trial (until October) and staff-led grantmaking. We currently don’t plan to run another open call in the next couple months. We will revisit this when we have more capacity and find a way to run it more efficiently.
Consistent with our original plan for this year, our additional priorities for bold and decisive tests of new funding models include:
Tests of prizes
Experiments in proactively recruiting founders for projects we’d like to see launched
Separately, we will also more thoroughly estimate the expected cost-effectiveness of our grants and investments. We are working on a standardized process for this that will help us more robustly evaluate our programs.
Regranting program in more detail
Background
We launched a pilot version of the regranting program with 20 people in late February, and then scaled up the program to include >100 regrantors and >60 grant recommenders in early April. Our hope was to empower a range of interesting, ambitious, and altruistic people to drive funding decisions through a rewarding, low-friction process. We have set aside >$100M for this initial test, which will last until the end of September, at which point we will evaluate how it has gone and decide whether to continue it and what changes to make. As of mid-June, regrantors have made 168 grants and investments, totaling ~$31M.
The basic structure is that the regrantors have been given discretionary budgets ranging from $100k to a few million dollars. (A larger number towards the lower end, a smaller number towards the higher end—there is a wide variation in budget sizes.) Regrantors submit grant recommendations to the Future Fund, which we screen primarily for downsides, conflicts of interest, and community effects. We typically review and approve regranting submissions within 72 hours.
Grant recommenders have access to a streamlined grant recommendation form from us where we give them some deference, but they don’t have a discretionary budget. (We wanted to try out multiple versions, and in part randomized participation in the different programs.)
We compensate regrantors for the volume and quality of their grantmaking, including an element based on whether we fund the projects they seeded ourselves in the future. We also unlock additional discretionary funding when we’re ready to see more of what they’ve been doing.
Some example regrants
≥$500k grants
Some of the largest grants made so far include:
$5M for Ought’s work building Elicit, a language-model based research assistant. This work contributes to research on reducing alignment risk through scaling human supervision via process-based systems.
$2M for the launch of the Swift Centre for Applied Forecasting, including salary for a director and a team of expert forecasters. They will forecast trends from Our World in Data charts, as well as other topics related to ensuring the long-term future goes well, with a particular focus on explaining the “why” of forecast estimates.
$1M to the Federation for American Scientists to support a researcher and research assistant to work on high-skill immigration and AI policy for three years.
$1M investment in Manifold Markets to build a play-money prediction market platform. The platform is also experimenting with impact certificates and charity prediction markets.
$500k for the Public Editor Project to use a combination of human feedback and machine learning to label misinformation and reasoning errors in popular news articles.
$50k-$500k grants
Some example grants we found exciting from this category include:
Up to $490k for the ML Safety Scholars Program to fund a summer program for up to 100 students to spend 9 weeks studying machine learning, deep learning, and technical topics in AI safety.
$200k for the Quantified Uncertainty Research Institute to develop a programming language called “Squiggle” as a tool for probabilistic estimation. The hope is this will be a useful tool for forecasting and fermi estimates.
$150k for Moncef Slaoui to fund the writing of Slaoui’s memoir, especially including his experience directing Operation Warp Speed.
Up to $250k for AI Impacts to support rerunning the highly-cited survey “When Will AI Exceed Human Performance? Evidence from AI Experts” from 2016, analysis, and publication of results.
$100k for the EA Critiques and Red Teaming Prize to support prize money for a writing contest on critically engaging with theory or work in Effective Altruism. The goal of the contest is to produce thoughtful, action-oriented critiques.
$50k for the Trojan Detection Challenge at NeurIPS 2022 to support prizes for a competition which involves identifying whether a deep neural network will suddenly change behavior if certain unknown conditions are met.
<$50k grants
Some examples of grants that we found exciting in this category:
$30k for Adversarial Robustness Prizes at ECCV to support three prizes for the best papers on adversarial robustness research at a workshop at ECCV, the main fall computer vision conference. The best papers are selected to have higher relevance to long-term threat models than usual adversarial robustness papers.
$25k for J. Peter Scoblic to fund a nuclear risk expert to construct nuclear war-related forecasting questions and provide forecasts and explanations on key nuclear war questions.
>20 grants of $3-20k for promising university students to do summer research and study on projects relevant to the long-term future, primarily in AI safety.
Key stats
Areas of interest
| Area | Count | Volume |
| --- | --- | --- |
| Total | 168 | $31M |
| Artificial Intelligence | 60 | $11M |
| Biorisk and Recovery from Catastrophe | 11 | $1M |
| Economic Growth | 5 | $5M |
| Effective Altruism | 41 | $7M |
| Empowering Exceptional People | 10 | $3M |
| Epistemic Institutions | 12 | $4M |
| Great Power Relations | 3 | <$1M |
| Other | 12 | <$1M |
| Research That Can Help Us Improve | 2 | <$1M |
| Space Governance | 5 | <$1M |
| Values and Reflective Processes | 7 | $1M |
Grant size
| Grant size | Count | Volume |
| --- | --- | --- |
| Total | 168 | $31M |
| <$50k | 110 | $2M |
| $50k–$500k | 47 | $7M |
| ≥$500k | 11 | $23M |
Expectations vs. reality
Some outcomes we were interested in, and thoughts on how they went:
Finding new promising things to fund that weren’t on our radar
Better than expected. A majority of our regrants seem like opportunities that we wouldn’t have been aware of by default. They also seem about as good as projects we’re funding through other mechanisms.
Launching new projects in our areas of interest
Promising signs, but too early to tell. We were quite unsure how many new projects we’d expect to see launched via the regranting program. The main update is that projects are getting launched and the founders look impressive based on their previous work. We haven’t seen enough of their new work yet (now much more closely related to our areas of interest) to say whether things are likely to go in a good direction.
Bringing in new people who weren’t on our radar and supporting them to work on our areas of interest
Promising signs, but too early to tell. A lot of movement here is coming from many <$50k grants to people who are learning/developing their skills in our areas of interest, career transition grants, as well as some of the larger projects being launched by founders that weren’t on our radar. We are tentatively excited about this.
Avoiding spending lots of money on things that seem wasteful
Better than expected, but too early to tell. Some of these grants may look clearly low-EV in retrospect, but few look that way to us now. One measure is that ~80% of grants (by dollar volume) are grants we probably would have been happy to make even if we weren’t extending significant deference to the regrantor.
Avoiding approving grants that seem ill-advised or net negative
Better than expected. Our screening process weeds out some grants that look harmful or otherwise inappropriate, but there haven’t been many we’d describe that way. Our process may also have weeded out worthwhile grants where we were unsure about downsides and chose to proceed cautiously.
Avoiding interpersonal drama (over who was and wasn’t selected, what their discretionary budget was, and so on)
Better than expected. We haven’t had a lot of drama, though we were pretty careful to set things up to minimize that. (For example, by in part randomizing participation in the program and providing careful communication guidance to regrantors.)
Doing all of the above without a massive time commitment
About as expected (and well). Our team time spent per dollar moved is >2x lower than via our open call. We expect this to improve even further in the future because a lot of the time cost here was the fixed cost of designing and setting up the program.
Some more general reflections:
A key hope underlying these outcomes was the idea that regrantors and grant recommenders could exploit local knowledge and diverse networks to make promising projects move forward that we might not have known about or had time to investigate ourselves. It seems like that is playing out.
We think it’s healthy for people in the effective altruism community to be thinking “What are some concrete things I could make happen / people I could recruit to do something important?”, relative to more abstract and less actionable questions about what EA should do, or meta-level discussion. Our hope is that the regranting program helps focus people on these questions.
We are getting encouraging feedback that the program is helping empower some regrantors and grantees to be more ambitious and action-oriented. Some example quotes from regrantors:
“[...] I know our first grant brought one person from the startup world to creating a non-profit in the effective altruism space, which has been hugely motivating for them. Personally, it’s changed my way of thinking of how I take part in the world in a pretty big way. I am much less complacent about things, and if I think there’d be something really great but nobody’s doing it, I now have an attitude of “how can I make that happen?”.”
“I think the most value of me being a regranter has been starting to look at the world and others’ work with more agency: encouraging people to start new projects (even if then a different funder is a better fit), nudge people to be more ambitious/change careers, come up with new ideas myself and find a home for them etc.”
“It makes it much easier to be agentic and overcome learned helplessness.”
We usually approve regrants within 72 hours. One regrantor said: “Very fast response times, mix of ‘we want to help you move fast when warranted, but also show caution when appropriate.’ This is a hard balance but I’ve been personally happy with how the mix has been done.”
An example of where fast processes mattered: one of our regrantors made the difference between a talented law student taking a job at a law firm and going into biosecurity policy work. The regrantor had come across a paper by Berkeley Law student Braden Leach on the Biological Weapons Convention, found it promising, and reached out to fund Braden to work on biosecurity policy full time. The grant recommendation was approved within 24 hours, allowing Braden to accept the position instead of a competing, time-sensitive offer from a law firm.
Perhaps unsurprisingly, regrantors have varied massively in terms of how much regranting they do. >50% of regrantors have made no grants so far, while some extremely active regrantors have used up their entire discretionary budgets, made good grants, and gotten their regranting budgets re-upped.
We like that the regranting program allows us to cover for our own blind spots, and encourages a wider range of viewpoints to be represented among funders.
Going forward
We are going to continue with this experiment until October and then more systematically review the process and the quality of the grantmaking. We may also have a more developed sense at that point of how some of the new projects are going. Our current guess is that this program should probably continue in some form.
Open call in more detail
Background
Our open call for applications was launched on February 28, 2022. We gave people three weeks to submit applications, and we received over 1700 applications.
As explained above, the basic idea of the open call was, “Let’s tell people what we’re trying to do, what kinds of things we might be interested in funding, give them a lot of examples of projects they could launch, have an easy and fast application process, and then get the word out with a Twitter blitz.” We wrote some about the review process here.
We funded 69 applications, totaling $27M. Some stats on acceptance rate:
Of our over 1700 applications, we funded 4%. (Applications squarely focused on our areas of interest were much more likely to get funded.)
43% were rated as having some plausibility for funding (≥4/10) by our first reviewers, and of those we funded 9%.
20% were rated as warranting fairly close consideration for funding (≥6/10) by our first reviewers, and of those we funded 18%.
Some example grants
$1.5M to Lionel Levine at Cornell University to support Prof. Levine, as well as students and collaborators, to work on alignment theory research at the Cornell math department.
$1.2M for SecureBio (described above)
$1M to Non-trivial Pursuits to launch an outreach project to help students to learn about career options, develop their skills, and plan their careers to work on the world’s most pressing problems.
$250,000 to launch Apollo Academic Surveys (described above)
$300,000 to Ray Amjad (described above)
Key stats
Areas of interest
| Area | Count | Volume |
| --- | --- | --- |
| Total | 69 | $27M |
| Artificial Intelligence | 15 | $5M |
| Biorisk and Recovery from Catastrophe | 16 | $12M |
| Economic Growth | 1 | <$1M |
| Effective Altruism | 10 | $3M |
| Empowering Exceptional People | 5 | $2M |
| Epistemic Institutions | 9 | $3M |
| Great Power Relations | 3 | $1M |
| Other | 4 | $1M |
| Research That Can Help Us Improve | 1 | <$1M |
| Space Governance | 2 | <$1M |
| Values and Reflective Processes | 3 | <$1M |
Grant size
| Grant size | Count | Volume |
| --- | --- | --- |
| Total | 69 | $27M |
| <$50k | 6 | <$1M |
| $50k–$500k | 49 | $12M |
| ≥$500k | 14 | $15M |
Expectations vs. reality
Some outcomes we were interested in, and thoughts on how they went:
Getting sympathetic founders from adjacent networks to launch new projects related to our areas of interest
Worse than expected. We thought that maybe there was a range of people who aren’t on our radar yet (e.g., tech founder types who have read The Precipice) who would be interested in launching projects in our areas of interest if we had accessible explanations of what we were hoping for, distributed the call widely, and made the funding process easy. But we didn’t really get much of that. Instead, most of the applications we were interested in came from people who were already working in our areas of interest and/or from the effective altruism community. So this part of the experiment performed below our expectations.
Getting people from the effective altruism community to submit ambitious proposals that we wouldn’t otherwise have considered funding
Somewhat better than expected. An outcome that would have been at or slightly below expectations would be one where the applications we received were highly duplicative with the applications received by other effective altruism funders (e.g. EA Funds). However, we also received some new, interesting, and large proposals, for example from Ray Amjad, Sage, Global Guessing, Nathan Young, Manifold Markets, and Kevin Esvelt (including two pending applications we’re excited about where we’re waiting for further details).
Getting people to launch massively scalable projects
Worse than expected. Our encouragement toward massively scalable projects did not seem to have the intended effect. We got some very large requests for funding, sometimes tens or even hundreds of millions of dollars. We appreciate the boldness and ambition. However, we are much more interested in funding projects that start out no larger than they need to be, but can scale massively (without too great a fall in cost-effectiveness) once they show sufficient signs of traction. In short, this got us massive project applications but not really the massively scalable project applications we were hoping for. It seems that there is continued energy toward brainstorming projects of this kind on the EA Forum. And we are excited that the Atlas Fellowship has been founded as one project meeting this description in a clear way. We’d love to see more in this area!
Getting people to launch projects from our project ideas lists
About as expected (going OK). We funded a number of applications that were closely related to our project ideas list. We think our project ideas list played a major role in shaping the project in the cases of: Apollo Academic Surveys (expert polling for everything); Forecasting Our World In Data (Good Judgment Inc); and a couple of other cases.
Introducing us to projects related to our project ideas and areas of interest that we otherwise may not have considered funding
As expected in biosecurity, worse in other areas. Some biosecurity grants meeting this description include grants for better PPE (Michael Robkin, Greg Liu/Virginia Tech), pathogen sterilization (Justin Mares, Dr. Emilio Alarcon/University of Ottawa), and strengthening the BWC (Michael Jabob/MITRE, Council on Strategic Risks). In AI alignment, a notable grant in this category was an application from Lionel Levine to focus on AI alignment research.
Doing all of the above without it taking a ton of time
Worse than expected. We think it took us about twice as long as we hoped, and the impact per unit time was lower than for other programs we’ve experimented with.
Some other reflections:
We made decisions on a large majority of applications (and sent an update to nearly everyone else) within roughly two weeks. We needed more time (in some cases considerably more time) on about 12% of applications that were more complex, submitted close to the deadline, in fields with which we are less familiar, or for larger sums of money.
A large majority of applications were not very focused on our goals or areas of interest, and could be rejected quickly. Some of the most obvious yeses could be approved very quickly. We ended up spending a lot of our time debating the more borderline cases.
We found the open call pretty time-consuming and distracting due to the large volume of applications, our statement that we’d aim to arrive at decisions on most proposals within 14 days, and the considerable amount of time that we and our advisors spent on assessing and deliberating about borderline applications.
One of our most common complaints about applications we received was that they were too vague or not action-oriented enough. We find first drafts and prototypes of the products grantseekers hope to create much more interesting than abstract descriptions of things that they hope to do.
Some of our areas of interest (e.g. space governance) were very exploratory, and we wondered whether we’d get interesting proposals if we put out food for thought. We got less of this than we were hoping for.
Many applications seemed relatively shoehorned into our areas of interest.
We would be excited to see people from the effective altruism community propose work that is less meta and more object-level.
Going forward
If we were doing this again, there are a number of changes we would consider, including:
Setting ourselves up to spend less time on borderline applications.
Changing our application forms to invite applicants to write more concretely about what they’ll actually be doing, and how they think it will increase the odds that humanity survives and/or flourishes thousands of years from now.
Changing our application forms to elicit more information about applicants’ existing funding situation and pending applications to other funders.
Setting up a review process that relies more on external contractors or regrantors and less on the Future Fund team.
Overall the project went somewhat worse than expected, but we still think the ROI on both team time and capital was reasonable, and we’re excited about some of the new projects we funded.
We currently don’t plan to run another open call in the next few months. We will revisit this when we have more capacity and see if we can find a way to run it more efficiently.
Staff-led grantmaking in more detail
Background
Unlike the open call and regranting, these grants and investments are not a test of a particular potentially highly scalable funding model. These are projects we funded because we became aware of them and thought they were good ideas.
Some example grants
We made 25 grants in this category, totaling ~$73M.
Five of our largest grants/investments were:
$15M to Longview Philanthropy to support Longview’s independent grantmaking on global priorities research and other longtermist issues.
~$14M to the Centre for Effective Altruism for general support for their activities, including running conferences, supporting student groups, and maintaining online resources.
$10M to HelixNano for an investment to support preclinical and Phase 1 trials of a pan-variant Covid-19 vaccine.
$5M to Atlas Fellowship to support scholarships for talented and promising high school students to use towards educational opportunities and enrolling in a summer program.
$2M to Lightcone Infrastructure Team to support Lightcone’s ongoing projects including running the LessWrong forum, hosting conferences and events, and maintaining an office space for Effective Altruist organizations.
Key stats
Areas of interest
| Area | Count | Volume |
| --- | --- | --- |
| Total | 25 | $73M |
| Artificial Intelligence | 1 | $5M |
| Biorisk and Recovery from Catastrophe | 3 | $18M |
| Economic Growth | 3 | $2M |
| Effective Altruism | 10 | $24M |
| Empowering Exceptional People | 3 | $5M |
| Epistemic Institutions | 0 | $0M |
| Great Power Relations | 0 | $0M |
| Other | 1 | $15M |
| Research That Can Help Us Improve | 1 | $1M |
| Space Governance | 0 | $0M |
| Values and Reflective Processes | 3 | $2M |
Grant size
| Grant size | Count | Volume |
| --- | --- | --- |
| Total | 25 | $73M |
| <$50k | 3 | <$1M |
| $50k–$500k | 6 | $2M |
| ≥$500k | 16 | $71M |
Expectations and reflections
Coming in, our expectation was that there would be some low-hanging fruit to pick here, the grants would be pretty good, the best ones would largely get funded anyway, the funding stream wouldn’t be massively scalable, and that the return on time from these grants would be pretty good. Our experience has generally been pretty consistent with that. (This is unsurprising because it’s continuous with things that Nick had a lot of experience with at Open Philanthropy.)
Probably the most distinctive grant from this set is our grant to Longview Philanthropy, which is using the funds for its grantmaking in global priorities research, nuclear weapons policy, and other areas (this is another experiment with regranting, in this case regranting via a grantmaking organization rather than via individuals as in our main regranting program). We’re interested to see how the experiment goes!
Going forward
We’ll continue with staff-led grantmaking in the background, but most of our focus will be on testing new funding models.
Conclusion
There’s much in the above update we find exciting, including:
Supporting a wide range of projects across our areas of interest
Fast experimentation on funding models that may be massively scalable
Attempts to nudge the effective altruism community to be more action-oriented, ambitious, and object-level
Building our team
We feel like we’re learning a lot from the process, and we are also looking forward to seeing what else we learn as we test prizes and try new approaches to proactively launching new projects.
Finally: thank you to everybody who applied to us or otherwise engaged with one of our programs! We’re grateful for your work to help humanity flourish. We also deeply appreciate the help we are getting from other folks at FTX, colleagues at Open Philanthropy, expert reviewers, our regrantors, and other collaborators, whom we rely on extensively. It is a privilege to work with all of you!
Thanks for the detailed update!
There was one expectation / takeaway that I was surprised about.
You mentioned the call was open for three weeks. Would that have been sufficient for people who are not already deeply embedded in EA networks to formulate a coherent and fundable idea (especially if they currently have full-time jobs)? It seems likely that this kind of “get people to launch new projects” effect would require more runway. If so, the data from this round shouldn’t update one’s priors very much on this question.
I agree. I found out about FTX Future Fund in mid-April despite being connected to EA, being connected to the crypto community (I follow SBF on Twitter), and working on an application for the 776 Fellowship myself. I finally found out about it because Alexis Ohanian randomly retweeted a Future Fund tweet.
Clearly, they still had almost 2k applications, but I don’t think it was an easy find for anyone not actively looking. I would have never found it if I was busy actively working on a project, because I don’t check most of the channels this was advertised on.
We appreciate you! ❤️
Since it seems like a major goal of the Future Fund is to experiment and gain information on types of philanthropy: how much data collection and causal inference are you doing, or planning to do, on the grant evaluations?
Here are some ideas I quickly came up with that might be interesting.
If you decided whether to fund marginal projects by votes or some scoring system, you could later assess the impact of funding projects using a regression discontinuity design.
You mentioned that there is some randomness in who you used as regrantors. This has some similarities to the random assignment of judges frequently used in applied economics. You could use it to infer whether certain features of grantmakers cause better grants (e.g., some grantmakers might tend to provide higher amounts of funding, so you could assess whether this more gung-ho attitude leads to better grants).
Explicitly introduce some randomness into whether you approve a grant or not.
In all these cases, you’d need to assess grant applications ex post on impact a few years later, including the ones you didn’t fund. These strategies would then let you estimate the causal impact of your grants.
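As a rough sketch of the regression-discontinuity idea above (the scoring scale, funding cutoff, and all variable names here are illustrative assumptions, not anything FTX has published): if funding is determined by an application score crossing a threshold, you can regress the later-measured outcome on a treatment indicator plus linear trends in the centered score on each side; the indicator’s coefficient estimates the local effect of funding at the cutoff.

```python
import numpy as np

def rdd_estimate(scores, outcomes, cutoff):
    """Sharp regression discontinuity via OLS.

    Regress outcomes on an intercept, a treatment indicator
    (score >= cutoff), and separate linear trends in the centered
    score below and above the cutoff. The indicator's coefficient
    is the estimated jump (treatment effect) at the cutoff.
    """
    scores = np.asarray(scores, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    centered = scores - cutoff
    treated = (centered >= 0).astype(float)
    X = np.column_stack([
        np.ones_like(centered),      # intercept
        treated,                     # jump at the cutoff
        centered * (1 - treated),    # slope below cutoff
        centered * treated,          # slope above cutoff
    ])
    coef, *_ = np.linalg.lstsq(X, outcomes, rcond=None)
    return coef[1]

# Synthetic example: projects scoring >= 5 are funded, and funding
# adds ~2.0 to the measured outcome on top of a baseline trend.
rng = np.random.default_rng(0)
score = rng.uniform(0, 10, 500)
impact = 0.5 * score + 2.0 * (score >= 5) + rng.normal(0, 0.3, 500)
print(rdd_estimate(score, impact, cutoff=5.0))
```

In practice you would also want bandwidth selection and standard errors (e.g., via a package like `statsmodels`), but the core identification idea is just this comparison of intercepts at the threshold.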
Thank you for such a detailed and transparent post! It’s really exciting to see experimentation in funding models as Future Fund enters the ecosystem. (It’s also great to see a bunch of promising things getting the resources they need!)
I’ve found that the project ideas, areas of interest and grants/regrants databases are also especially useful resources in helping people to think about how they might best contribute! I’ve shared these multiple times when speaking with very promising people who are relatively cause neutral and just want to do as much good as they can given their specific skills & context.
Thanks for sharing the database links Luke! I wasn’t aware FTX had that, but it definitely makes sense that they do.
“Our sense is we’re able to generate >2x more value per time with our other activities [than with open calls]”: does this number include an estimate of the time spent by regrantors? (Even if it doesn’t, the 2x figure is still interesting.)
Either way it looks pretty hard to have a real apples-to-apples comparison, since presumably the open call takes significantly more time from prospective grantees (but you wouldn’t want to count that the same as grantmaker time).
Very exciting!
Thanks for taking the time to write this up!
Is there a way to access a list of regrantors, maybe indexed by problem area? Any reason I can’t just query “show me the email address of every FTX regrantor who is interested in epistemic institutions” for instance?
My guesses:
Regranting is intended as a way to let people with local knowledge apply it to directing funds. This is different from just deputizing grantmakers.
If you made the list public I’d expect the regranters to be overwhelmed by people seeking grants, and generally find it pretty frustrating. For example, would your next step be to send emails to each of those addresses? ;)
A public list of regranters makes the system very gameable and vulnerable to individual granters unilaterally funding negative value projects.
This explanation makes sense to me, but I wonder if there is better middle ground where regrantors benefit from a degree of publicity.
This comes from personal experience. I received an FTX regrant for upskilling in technical AI safety research, as did several other students in similar positions as me. I did not know my regrantor personally, but rather messaged them on the EA Forum and hopped on a call to discuss careers in AI safety. They saw that I fit the profile of “potential AI safety technical researcher” and very quickly funded me without an extended vetting process. I would not have received my grant if (a) I didn’t often message people on the EA Forum or (b) I didn’t get on a call with a stranger without a clear goal in mind, both of which seem like poor screening criteria.
Perhaps it was an effective screen for “entrepreneurial” candidates, but I expect that an EA Forum post requesting applications could have produced several more grants of similar quality without overwhelming my regrantor. Regranting via personal connections reduces the pool of potential grantees to people who have thoroughly networked themselves within EA, which privileges paths like “move to the Bay” at the expense of paths like “go to your cheap state school with no EA group and study hard”. It’s a difficult line to walk and I’m not a grantmaker, but I think more public access might improve both the equity and quality of FTX regrants.
Edited to add: Given LTFF’s history of funding similar people and the drawbacks of regrantor publicity, FTX’s anonymity policy does seem reasonable to me. Appreciate the pushback.
People in that position (or who know people who are): please consider applying to the Long-term Future Fund*. LTFF is excited to receive upskilling applications from people who are potentially great at technical AI safety research and/or other longtermist priority areas, and it has more institutional capacity (including a network of advisors) to evaluate such proposals across the board than many regrantors individually have.
* For newer onlookers, please note that LTFF is under EA Funds and is not directly affiliated with the FTX Future Fund, despite the (perhaps confusingly) similar names.
Besides the huge downsides Ryan mentioned (imagine someone reading your whole blog to better craft the perfect adversarially fundable project), publicity would have some toxic effects for the regrantor.
For instance, all new social interactions would have an ulterior interpretation (“they’re sucking up for cash”). In a personal/professional soup like EA that could be maddening. One former grantmaker told me that the degree of sucking-up they got was part of why they moved on. I’m unusually sensitive to such things; I would probably decline to be a public grantmaker.
Privacy also has risks (nepotism, the excess zero-sum social investment in the bloody Bay you mention, insufficient accountability), but those seem smaller to me. But private regrantors were previously balanced out by the open call channel, so it’d be good to hear from FF about how they intend to seek new or peripheral applicants.
Software makes compromise pretty easy though. I quite like the idea of a regrantor publishing an anon post explaining what they’re looking for, with a form attached.
I share this fear but I don’t know if this is clearly stronger than other dynamics in EA when one party has something the other wants (e.g. prestige, network, advice, employment).
Also don’t know but I guess worse here, since it’s your explicit job to listen to applicants, where the usual requests for introductions and attention are rarely part of anyone’s job description.
I guess it’s not realistic to litmus test individuals about their cold-emailing practices and their seriousness about the problem area they claim to be working in, before giving them access to the list.
I would expect the cold emailing advice given by Y Combinator to result in emails that do not frustrate regrantors.
I love the transparency of this post!
Also, I particularly like how regranting utilizes the value of local knowledge.
“regrantors and grant recommenders could exploit local knowledge and diverse networks to make promising projects move forward that we might not have known about or had time to investigate ourselves.”
Hi! Thank you for the detailed update; it is very helpful. Quick question: if an application was submitted to the Open Call with confirmation, and there has been no communication at this point, has the application been denied? Thank you for any further clarification you can give.
If they haven’t responded yet, they either lost it or their response got caught in your spam filters. You should definitely re-email; it’s been months since they gave decisions.
Good advice. I have been checking spam, and the confirmation didn’t go there when it was originally sent, but I have been checking just in case. The difficult part is that there is no way to re-email; the submission was done via a Google Form with a no-reply email… We don’t expect feedback, but we’d like to close the chapter on the submission, so to speak. I have been reading the comments and assumed rejections were sent to others, so I was wondering where ours may be. Thank you for the advice all the same.
Hi Kris—I’ve sent you a DM to figure out what’s going on.