Open Phil is seeking applications from grantees impacted by recent events
We (Open Phil) are seeking applications from grantees affected by the recent collapse of the FTX Future Fund (FTXFF) who fall within our longtermist focus areas (biosecurity, AI risk, and building the longtermist EA community). If you fit the following description, please fill out this application form.
We’re open to applications from:
Grantees who never received some or all of their committed funds from FTXFF.
Grantees who received funds, but want to set them aside to return to creditors or depositors.
We think there could be a number of complex considerations here, and we don’t yet have a clear picture of how we’ll treat requests like these. We’d encourage you to apply if in doubt, but to avoid assuming that you’ll be funded (or that we’ll reach any particular view about the right thing to do in your case). (Additionally, we’re unsure whether there will be legal barriers to returning funds.) That said, we’ll do our best to respond to urgent requests quickly, so you have clarity as soon as possible.
Grantees whose funding was otherwise affected by recent events.[1]
Please note that this form does not guarantee funding. We intend to evaluate applications using the same standard as we would if they were coming through one of our other longtermist programs — we will evaluate whether they are a cost-effective way to positively influence the long-term future. As described in Holden’s post, we expect our cost-effectiveness “bar” to rise relative to what it has been in the past, so unfortunately we expect that some of the applications we receive (and possibly a sizeable fraction of them) will not be successful. That said, this is a sudden disruptive event and we plan to take into account the benefits of stability and continuity in our assessment.
We’ll prioritize getting back to applicants who indicate time sensitivity and whose work seems highly likely to fall above our updated bar. If we’re unsure whether an application is above our new bar, we’ll do our best to get back to you within the indicated deadline (or within 6 weeks, if the application isn’t time-sensitive), but we may take longer as we reevaluate where the bar should be.
We’re aware that others may want to help out financially. If you would like to identify yourself as a potential donor, either to this effort or to a different one aimed at impacted FTXFF grantees, you can get in contact with us at inquiries@openphilanthropy.org. We sincerely appreciate anyone who wants to help, but for logistical reasons we can only respond to emails from people who think there’s a serious chance they’d be willing to contribute over $250k.
[1] We’ve left an option on the form to explain specific circumstances – we can imagine many ways that recent events could be disruptive. (For example, if FTXFF had committed to funding a grantee that planned to regrant some of those funds, anyone anticipating a regrant could be affected.)
Really excited to see this initiative!
Should grantees with significant runway apply for this (i.e., they lost out on money, but this mostly cut into future runway and won’t really affect things for the next few months), or would you like to reserve this for grantees with urgent need?
Also, has Open Phil considered guaranteeing to cover clawbacks for some/all grantees? This seems like it might be reasonably cheap in expectation, but could save a lot of people from stressful distractions and unnecessary conservatism (though I imagine it also has a bunch of legal consequences, and possibly significantly increases the risk of clawbacks!)
(I work at Open Phil assisting with this effort.)
Any grantee who is affected by the collapse of FTXFF and whose work falls within our focus areas (biosecurity, AI risk, and community-building) should feel free to apply, even if they have significant runway.
For various reasons, we don’t anticipate offering any kind of program like this, and are taking the approach laid out in the post instead. Edit: We’re still working out a number of the details, and as the comment below states, people who are worried about this should still apply.
On the second paragraph, I don’t think it has been established that insuring clawback risk would be cheap in expectation. Also, in many cases, insurance increases the risk of a lawsuit or reduces the other side’s willingness to settle.
For example, suppose I think someone negligently breaks my leg and I am considering whether it is worthwhile to litigate. If I find out the person is an uninsured second-year philosophy grad student, I probably won’t bother—collecting any judgment will be very difficult, and a rational judgment debtor will just file for bankruptcy and get the debt wiped anyway. If the person is a middle-class tourist from the UK, I will probably find it worthwhile to sue if the damages are big enough and if I think there is a good enough chance I could collect on any judgment. Now, if I know that either of these people were insured, I am much more likely to sue, and I am not going to be willing to reduce a settlement demand based on doubts about collectability.
However, I think there is a good idea here. I would suggest there is a legal clawback risk and a “moral clawback risk” – that the grantee will be (and/or will feel) ethically obliged to return some or all of the money even though not legally required to. Replacing some or all of the FTX-derived funds outright effectively treats the combined legal + moral clawback risk as 100%. Although offering insurance has some downsides, it does seem less expensive than unconditionally replacing the funds.
So there may be some conditions under which a funder would find it worthwhile to provide clawback insurance even though it would not fund the project outright.
That sounds relatively straightforward as far as legal clawback risk goes; it is interesting to consider how “insurance” might work if the “insurer” were willing to provide some coverage for moral clawback risk. For those with certain beliefs about moral clawback, the grantee’s moral-clawback obligation or non-obligation can already be determined, so an “insurance” paradigm makes no sense. But for some of us, the nature and extent of a moral-clawback obligation depends on information that is not yet known.
Would the “insurer” decide the extent to which the ultimately determined facts create an ethical obligation to return funds? Or perhaps the “insurer” would offer 100% coverage for legal clawback risk and would allow the “insured” to decide the moral-clawback question, subject to a coinsurance requirement. For example, the “insurer” might only match the amount the “insured” was willing to voluntarily return out of its own pocket (so a grantee willing to return $50k of its own money would trigger a matching $50k, for $100k returned in total). That would address concerns that grantees may be too quick to find a moral-clawback obligation if the cost comes entirely out of someone else’s pocket, and would reduce the cost of providing “insurance.”
If a grant / grantee is doing work which aligns with Open Phil’s work, but is more properly classified as global health or animal welfare, can they still apply here, should they apply in some other way, or is Open Phil not the correct vehicle?
(I work at Open Phil on Effective Altruism Community Building: Global Health and Wellbeing)
Our understanding is that only a small proportion of FTXFF’s grantees would be properly classified as global health or animal welfare. Among that subset, there are some grantees who we think might be a good fit for our current focus areas and strategies. We’ve reached out individually to grantees we know of who fit that description.
That being said, it’s possible we’ve missed potential grantees, or work that might contribute across multiple cause areas. If you think that might apply to your project, you can apply through the same form.
This is wonderful news. Thank you very much for getting that up and running.
You may want to also consider the situation where an organisation doesn’t want to pay employees with funds that could potentially be clawed back, or that could be seen as morally tainted (depending on what information we find out).
(I work at Open Phil assisting with this effort.)
We think that people in this situation should apply. The language was intended to include this case, but it may not have been clear.
I strongly suspect there are legal reasons why covering future clawbacks, especially if stated explicitly, is not going to be workable, or at least is significantly more complex and legally risky.
This sort of falls under the second category, “Grantees who received funds, but want to set them aside to return to creditors or depositors.” At least that’s how I read it, though the more I think about it, the more confusing that category seems; your wording is more direct.
I think it’d be preferable to explicitly list, as a reason for applying, something along the lines of “Grantees who received funds, but want to set them aside to protect themselves from potential clawbacks”.
Less importantly, it’d possibly be better to make it separate from “to return to creditors or depositors”.
How quickly should grantees impacted by recent events apply to this call? Is there a hard or soft deadline for these applications? I have to decide how much time I should invest in adapting, updating, and improving the previous application. I assume you want applicants to attach a proposal detailing the planned projects, the project’s pathway to impact, and evidence of its chances to succeed.
The application form is actually really restrictive once you open it—when I filled it out, it explicitly instructed applicants not to write any new material and only to attach old material that was sent to FTXFF, and it only had a <20-word box and a <150-word box for grant descriptions. Today when I open the form, even those boxes have disappeared. I think it’s meant to be a quite quick form, where they’ll reach out for more details later.
Thank you so much for pointing that out, Vael! I had completely overlooked that information. That’s really helpful to know.
Looking back five months later, can you say anything about whether this program ended up making grants, and if so how much/how many? Thanks!
I have some clarification questions about the form:
1. Does “total grant amount” refer to the amount we requested or the amount we were promised?
2. Does “amount that has been committed but not received yet” refer to a) the amount that the grantor promised but did not pay out or b) project-related financial obligations and expenditures of the grantee, such as the salaries of people working on the project, that would have been paid from the grant?
1. This refers to the amount you were promised from FTXFF.
2. This refers to the amount that was promised, but hasn’t been paid out.
Thank you very much for answering my questions. :)
I recently filled out the Airtable form, but was surprised to see when I got my e-mail receipt that many of the answers I provided did not appear.
How would you suggest that I and others affected by this proceed? Thanks!
[edit: extraneous information removed]
(I work at Open Phil assisting with this effort.)
Thanks for pointing this out; it looks like there was a technical error that excluded those answers from the email receipt, which we’ve now fixed. The information was still received on our end, so you don’t need to take any further action.