I might quantify the value of the talent pool at around another $10bn, so again, you'd only need a ~10% increase here to be worth a billion, and over-centralisation seems like one of the bigger problems.
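To spell that out (a rough sketch, using the ~$10bn talent-pool figure above):

```latex
% A ~10% increase in the value of a ~$10bn talent pool is worth ~$1bn.
0.10 \times \$10\,\mathrm{bn} \approx \$1\,\mathrm{bn}
```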
I find it plausible that a strong fix to the funder-diversity problem could increase the value of the talent pool by 10% or even more. However, having a new independent funder with $1B in assets (spending much less than that per year) feels more like an incremental improvement.
$1bn is only 5% of the capital that OP has, so you'd only need to find one grant OP has missed for every 20 grants it makes, at only 2x the effectiveness of marginal OP grants, in order to get 2x the value per dollar.
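Spelling out that arithmetic (a rough sketch: the ~$20bn figure for OP's capital is implied by the 5% claim, grant sizes are assumed comparable, and $v$ is an introduced symbol for the value per dollar of marginal OP grants):

```latex
% Deal flow: deploying $1bn alongside OP's ~$20bn (implied by the 5% figure)
% means matching roughly one grant for every 20 that OP makes.
\frac{\$1\,\mathrm{bn}}{\$20\,\mathrm{bn}} = \frac{1}{20}

% Value: with v = value per dollar of marginal OP grants (assumed symbol),
% $1bn deployed at 2v is worth twice the same $1bn added at OP's margin.
\$1\,\mathrm{bn} \times 2v = 2 \times \left(\$1\,\mathrm{bn} \times v\right)
```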
You’d need to do that consistently (no misses, unless counteracted by >2x grants) and efficiently (incurring overhead comparable to OP's on only $1B of assets would consume much of the available cash flow). That seems like a tall order.
Moreover, I’m not sure a model in which the new major funder always gets to act “last” would track reality very well. It’s likely that OP would change its decisions, at least to some extent, based on what it expected the other funder to do. In that case, the new funder would end up funding a significant amount of stuff that OP would have counterfactually funded.
It might take more than $1bn, but at around that level you could become a major funder of a single cause like AI safety, so you’d already be getting significant benefits within that cause.
Agree you’d need to average 2x for the last point to work.
Though note the three pathways to impact (talent, intellectual diversity, OP gaps) are mostly independent, so you’d only need one of them to work.
Also agree that in practice there would be some funging between the two, which would limit the differences; that’s a good point.