We tend to do BOTECs (back-of-the-envelope calculations) when we have internal disagreement about whether to move forward with a large grant, or when we have internal disagreement about whether to fund in a given area. But this is how we make only a minority of decisions.
There are certain standard numbers I think about in the background of many applications, e.g. how large I think different classes of existential risk are, and modifiers for how tractable I think they are. My views are similar to Toby Ord’s table of risks in The Precipice. We don’t have standardized and carefully explained estimates for these numbers. We have thought about publishing some of these numbers and running prize competitions for analysis that updates our thinking, and that’s something we may do in the future.
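To give a sense of the shape such a BOTEC can take, here is a minimal sketch of how a background risk estimate and a tractability modifier might combine into a rough expected-value figure for a grant. All of the variable names and numbers below (p_risk, tractability, field_budget, grant_size) are hypothetical placeholders for illustration, not our actual estimates or our actual model.

```python
# Hypothetical back-of-the-envelope calculation (BOTEC) for a single grant.
# Every input below is an illustrative placeholder, not a real estimate.

p_risk = 0.10        # assumed probability of the catastrophe this work targets
tractability = 0.01  # assumed fraction of that risk the whole field could plausibly eliminate
field_budget = 1e9   # assumed total funding ($) the field would need to achieve that
grant_size = 5e6     # size of the grant under consideration ($)

# Naive linear model: the grant buys its proportional share of the
# field's total achievable risk reduction.
risk_reduction = p_risk * tractability * (grant_size / field_budget)

print(f"Estimated absolute x-risk reduction: {risk_reduction:.2e}")
print(f"Risk reduction per $1M granted: {risk_reduction / (grant_size / 1e6):.2e}")
```

In practice the interesting disagreements are usually about the inputs (how large the risk is, how tractable it is, how the field's returns diminish), not about the arithmetic itself, which is part of why we only do this for a minority of decisions.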
Considerations such as how quickly it seems reasonable to scale a grantee’s budget, whether I think the grantee is focused on a key problem, and how concrete and promising its plans are tend to loom large in these decisions.
When I say that I’m looking for feedback about grants that were a significant mistake, I’m primarily interested in grants that caused a problem someone could experience or notice without doing a fancy calculation. I think this is feedback that a wider range of people can provide, and that we are especially likely to miss on our own as funders.
Do you have standard numbers for net x-risk reduction (share or absolute) for classes of interventions you fund, too?