No, they are unconditional.
Thanks for the feedback! This is an experiment, and if it goes well, we might do more things like it in the future. For now, we thought it was best to start with something that we felt we could communicate and judge relatively cleanly.
Maybe you could talk about betting odds as if you’re an observer outside this world or otherwise assume away (causal and acausal) influence other than through the payout.
Yes, the intention is roughly something like this.
Announcing the Future Fund’s AI Worldview Prize
Hi Seena! We’ll post about it on our website if/when we do another open call. We’ll also announce it on Twitter: https://twitter.com/ftxfuturefund
Please see our grants page: https://ftxfuturefund.org/our-grants/
Future Fund June 2022 Update
Fill out this census of everyone who could ever see themselves doing longtermist work — it’ll only take a few mins
Thanks for your comment! I wanted to try to clarify a few things regarding the two claims you see us as making. I agree there are major benefits to providing feedback to applicants. But there are significant costs too, and I want to explain why it's at least a non-obvious decision what the right choice is here.
On (1), I agree with Sam that it wouldn't be the right prioritization for our team right now to give detailed feedback on the >1,600 applications we rejected; doing so would cut significantly into our total output for the year. I think it could be done if need be, but it would be really hard and require an innovative approach. So I don't think we should be doing this now, but I'm not saying that we won't try to find ways to give more feedback in the future (see below).
On (2), although we want to effectively allocate at least $100M this year, we don't plan to allocate all of it through this particular process without growing our team. In our announcement post, we said we would try four different processes and see what works best. We could continue all, some, or none of them. We have given out considerably less than $100M via the open call (more in our progress update in a month or so), and, as I mentioned in another comment, for larger and/or more complex grants the investigation process often takes longer than two weeks.
On hiring someone to do this: I think there are good reasons for us not to hire an extra person whose job is to give feedback to everyone. Most importantly, there are lots of roles we could hire for, I take early hiring decisions very seriously because they affect the culture and long-term trajectory of the organization, and we want to make those decisions slowly and deliberately. I also think it's important to maintain a certain quality bar for this kind of feedback, and this would likely require significant oversight from the existing team.
Will we provide feedback to rejected applicants in the future? Possibly, but I think this involves complex tradeoffs and isn't a no-brainer, even at scale, and I'll try to explain some of the reasons I see it this way. A simple and unfortunate reason is that badly worded feedback gives angry rejected applicants—most of whom we do not know at all and aren't part of the effective altruism community—plenty of opportunities to play "gotcha" on Twitter (or threaten lawsuits). Even if the chances of this happening are small for any single rejected application, the cumulative chance of it happening at least once is substantial if you're giving feedback to thousands of people. (I think this may be why even many public-spirited employers and major funders don't provide such feedback.) I could imagine a semi-standardized process that gave more feedback to people who wanted it and very nearly got funded. (A model that I heard TripleByte used sounds interesting to me.) We'll have to revisit these questions the next time we have an open call, and we'll take the conversation here into account—we really appreciate your feedback!
We tend to do BOTECs (back-of-the-envelope calculations) when we have internal disagreement about whether to move forward with a large grant, or when we have internal disagreement about whether to fund in a given area. But we make only a minority of decisions this way.
There are certain standard numbers I think about in the background of many applications, e.g. how large I think different classes of existential risks are and modifiers for how tractable I think they are. My views are similar to Toby Ord’s table of risks in The Precipice. We don’t have standardized and carefully explained estimates for these numbers. We have thought about publishing some of these numbers and running prize competitions for analysis that updates our thinking, and that’s something we may do in the future.
Considerations about how quickly it seems reasonable to scale a grantee’s budget, whether I think the grantee is focused on a key problem, and how concrete and promising the plans are tend to loom large in these decisions.
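To make the shape of such a calculation concrete, here is a minimal illustrative sketch in Python. Every variable name and number below is a hypothetical placeholder chosen for illustration only; it is not an actual Future Fund estimate or the team's actual decision procedure.

```python
# Illustrative back-of-the-envelope calculation (BOTEC) for a hypothetical grant.
# All numbers are made-up placeholders, not actual Future Fund estimates.

p_risk = 0.10           # assumed probability of the existential risk this work targets
tractability = 0.01     # assumed modifier: fraction of that risk the broader field could plausibly reduce
share_of_field = 0.001  # assumed share of the field's progress attributable to this grant
cost = 1_000_000        # hypothetical grant size in dollars

# Expected reduction in existential risk attributable to the grant
risk_reduction = p_risk * tractability * share_of_field

# Cost per basis point (0.01 percentage points) of existential risk reduced
basis_points = risk_reduction / 0.0001
cost_per_basis_point = cost / basis_points if basis_points > 0 else float("inf")

print(f"Expected risk reduction: {risk_reduction:.2e}")
print(f"Cost per basis point of risk reduced: ${cost_per_basis_point:,.0f}")
```

The point of a sketch like this is only to surface which inputs are driving an internal disagreement; as noted above, only a minority of decisions are made this way.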
When I say that I'm looking for feedback about grants that were a significant mistake, I'm primarily interested in grants that caused a problem someone could experience or notice without doing a fancy calculation. I think this is feedback that a larger range of people can provide, and that we, as funders, are especially likely to miss on our own.
Thanks, glad to hear it!
About 99% of applicants have received a decision at this point. The remaining 1% have received updates on when they should expect to hear from us next. Some of these require back-and-forth with the applicant, so we can't unilaterally conclude the process until we have all the info we need. And in some of these cases the ball is currently in our court.
We will report on the open call more systematically in our progress update, which we'll publish in a month or so.
Thanks for the thoughts, Irena! It's true that some proposals did not receive decisions in 14 days, and perhaps we should have communicated more carefully.
That said, I think if you look at the text on the website and compare it with what’s happening, it actually matches pretty closely.
We wrote:
“We aim to arrive at decisions on most proposals within 14 days (though in more complex cases, we might need more time).
If your grant request is under $1 million, we understand it, we like it, and we don’t see potential for major downsides, it’ll probably get approved within a week.
Sometimes, we won’t see an easy path to finding a strong fit, and you’ll get a quick negative decision.
Sometimes we’re just missing a little bit of information, and we’ll need to have a call with you to see if there’s a fit.
Larger grants and grants that affect whole communities require more attention, and will have a customized process.
We try to avoid processes that take months and leave grantees unclear on when they’re going to reach a decision.”
It's true that we made decisions on the vast majority of proposals on roughly this timeline, and that some of the more complicated and/or expensive proposals took more time (with applicants getting indications from us about when they should expect to hear back next).
Thanks for sharing your thoughts and concerns, Tee. I’d like to comment on application feedback in particular. It’s true that we are not providing feedback on the vast majority of applications, and I can see how it would be frustrating and confusing to be rejected without understanding the reasons, especially when funders have such large resources at their disposal.
We decided not to give feedback on applications because we didn't see how to do it well while staying focused on our current commitments and priorities. Giving feedback to everyone who wanted it on the ~1,700 applications we received would require a large time investment, and we wanted to keep our focus on making our process timely, staying on top of our regranting program, dealing with other outstanding grants outside of these programs, hiring, getting started on reviewing our progress to date, and moving on to future priorities. I don't want to say it's impossible to find a process for giving high-quality feedback at scale at acceptable time costs right now, but I do want to say that it would not be easy and would require an innovative approach. I hope that helps explain why we chose to prioritize as we did.
Fixed, thanks!
Makes sense! We are aiming to post a progress update in the next month or so.
Some clarifications on the Future Fund’s approach to grantmaking
We're still finishing up about 30 more complicated applications (of the ~1,700 originally submitted). Then we're going to review the process and share some of what we learned!
We don’t know yet! We’re finishing up about 30 more complicated applications (of ~1700 originally submitted), and then we’re going to review the process and make a decision about this.
We are very unsure on both counts! There are some Manifold Markets on the first question, though!
I do think articles wouldn't necessarily need to be that long to be convincing to us; the length of existing analyses may be a consequence of Open Philanthropy's thoroughness. Part of our hope for these prizes is that we'll get a wider range of people weighing in on these debates (and I'd expect shorter pieces there).