I’d guess that quite-well-directed marginal funding can buy a basis point of existential risk reduction for something like $50M (for example, I’d expect to be able to buy a basis point by putting that money toward a combination of alignment research, AI governance research, and meta-stuff like movement-building around AI). Accounting for not all longtermist funding being so well-directed gives something like $100M of longtermist funding per basis point, or substantially more if we’re talking about all-of-EA funding (insofar as non-longtermist funding buys quite little X-risk reduction).
But on reflection, I think that’s too high. I arrived at $50M by asking myself “what would I feel pretty comfortable saying could buy a basis point.” Considering a reversal test, I would absolutely not take the marginal $100M out of [Open Phil’s longtermism budget / longtermist organizations] to buy one basis point. Reframing the question as “what amount would I not feel great trading for a basis point in either direction,” I instinctively go down to more like $25M of quite-well-directed funding or $50M of real-world longtermist funding. EA could afford to lose $50M of longtermist funding, but it would hurt. The Long-Term Future Fund, for example, has spent less than $6M in its history. Unlike Linch, I would be quite sad about trading $100M for a single measly basis point; $100M (granted reasonably well) would make a bigger difference, I think.
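For concreteness, here is a minimal sketch of the implied arithmetic, using only the dollar figures above. The labels, variable names, and the constant-marginal-cost assumption are mine for illustration, not claims from the thread:

```python
# Back-of-the-envelope arithmetic for the estimates above. The dollar
# figures come from the comment; the constant-marginal-cost assumption
# and all names here are mine, purely for unit bookkeeping.

BASIS_POINTS_PER_PERCENTAGE_POINT = 100  # 1 bp = 0.01 percentage points

def cost_per_percentage_point(cost_per_bp_usd: float) -> float:
    """Implied cost to cut X-risk by one percentage point, assuming
    (unrealistically) that marginal cost stays constant."""
    return cost_per_bp_usd * BASIS_POINTS_PER_PERCENTAGE_POINT

estimates_usd_per_bp = {
    "initial, quite-well-directed": 50e6,
    "initial, real-world longtermist": 100e6,
    "revised, quite-well-directed": 25e6,
    "revised, real-world longtermist": 50e6,
}

for label, per_bp in estimates_usd_per_bp.items():
    print(f"{label}: ${per_bp / 1e6:.0f}M/bp -> "
          f"${cost_per_percentage_point(per_bp) / 1e9:.1f}B per percentage point")

# For scale: the Long-Term Future Fund's lifetime grantmaking (< $6M)
# is less than a quarter of even the cheapest basis point above.
```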
I suspect others may initially come up with estimates that are too high because they similarly frame the question as “what would I feel pretty comfortable saying could buy a basis point,” as I originally did. If your answer is $X, I encourage you to make sure that you would actually take $X of longtermist funding away from the world to buy a single basis point.
One thing to flag, though, is that a lot of the resources in LT organizations are human capital.
Thanks for your thoughts.
To be clear, I also share the intuition that I feel a lot better about taking $s from Open Phil’s coffers than I do taking money from existing LT organizations, which probably is indicative of something.