Do you (FTX grantmakers) do or reference a BOTEC for each grant? Would you publish BOTECs you make or comments on the BOTECs you reference?
Without this, it seems like EAs would often need to guess at and reconstruct your reasoning, or build their own models, in order to critique a grant, which is much more costly for individuals to do, is much less likely to happen at all, and risks strawmanning or low-quality critiques. I think this also gets at the heart of two concerns about "free-spending" EA: we don't know what impact the EA community is buying with some of our spending, and we have no clear arguments to point to when justifying particular, potentially suspicious expenses to others.
We tend to do BOTECs when we have internal disagreement about whether to move forward with a large grant, or when we have internal disagreement about whether to fund in a given area. But this is only how we make a minority of decisions.
There are certain standard numbers I think about in the background of many applications, e.g. how large I think different classes of existential risks are and modifiers for how tractable I think they are. My views are similar to Toby Ord’s table of risks in The Precipice. We don’t have standardized and carefully explained estimates for these numbers. We have thought about publishing some of these numbers and running prize competitions for analysis that updates our thinking, and that’s something we may do in the future.
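To make the shape of such a calculation concrete, here is a minimal sketch of what a BOTEC built on standard background numbers might look like. Everything here is my own illustrative assumption: the function, the decomposition into baseline risk, tractability, and share of the problem, and every input number are hypothetical, not the grantmakers' actual model.

```python
# Hypothetical BOTEC sketch. All structure and numbers are illustrative
# assumptions, not actual grantmaker estimates.

def botec_risk_reduction_per_dollar(baseline_risk, tractability,
                                    share_of_problem, cost_usd):
    """Rough estimated existential-risk reduction per dollar granted.

    baseline_risk:     assumed probability of the catastrophe this century
    tractability:      modifier for what fraction of that risk the whole
                       field's work could plausibly remove
    share_of_problem:  fraction of the field's effort this grant represents
    cost_usd:          grant size in dollars
    """
    risk_reduced = baseline_risk * tractability * share_of_problem
    return risk_reduced / cost_usd

# Illustrative inputs (the order of magnitude of the baseline is loosely
# inspired by the kind of table in The Precipice, but not taken from it):
per_dollar = botec_risk_reduction_per_dollar(
    baseline_risk=0.1,       # assume a 10% risk this century
    tractability=0.01,       # assume field-wide work removes 1% of it
    share_of_problem=0.001,  # assume this grant is 0.1% of the field
    cost_usd=1_000_000,
)
# per_dollar is 1e-12: one millionth of a "doom" averted per $1M, under
# these made-up inputs.
```

The point of the sketch is only that publishing the standard background numbers (the first two inputs) would let outsiders plug in their own grant-specific assumptions and check the arithmetic.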
Considerations about how quickly it seems reasonable to scale a grantee’s budget, whether I think the grantee is focused on a key problem, and how concrete and promising the plans are tend to loom large in these decisions.
When I say that I’m looking for feedback about grants that were a significant mistake, I’m primarily interested in grants that caused a problem that someone could experience or notice without doing a fancy calculation. I think this is feedback that a larger range of people can provide, and that we are especially likely to miss on our own as funders.
I did a lot of structured BOTECs for a different grant-making organization, but decided against sharing them with applicants in the feedback. The main problems were that one of the key inputs was an estimate of how competent the applicants were at executing on the plan, which felt awkward to share when someone got a very low number, and that the overall scores were approximately log-normally distributed, so almost everyone would have ended up looking pretty bad after normalization.
I think that part of the model could be left out (left as a variable, or factored out of the BOTEC if possible), or only published for successful applicants.
Do you have standard numbers for net x-risk reduction (share or absolute) for classes of interventions you fund, too?