Thanks for the response, Holden. I appreciate it when you engage with public comments on GiveWell/Open Phil.
You wrote: "I think our biggest disagreement with Michael is that he seems to see a couple of particular categories of giving [...] as massively and clearly better than others, with high certainty."
I’m probably more confident than you are about cause prioritization, but I don’t believe that confidence is necessary for my arguments. You only have to be weakly confident that one area is better, and that it has more room for funding than you can fill in the long term: as long as your best guess is that one area does more good per dollar and it can absorb your entire budget, every marginal dollar does more good there. But if you’re only weakly confident that one cause area is better than another, that makes Dan’s #3 look more compelling, so diversifying may be the right call in that case.
I’ll add that I agree with you that there’s almost certainly not $50 million worth of “shovel-ready” grants in AI safety, and definitely not in wild-animal suffering, but the problems are big enough that they could easily absorb that much funding if more people were working on them. Committing money to a problem is probably one of the best ways to incentivize people to work on it; Open Phil already seems to be doing this a bit with AI safety. I don’t know as much about grantmaking as you do, but my understanding is that you can create giving opportunities by committing to cause areas, which was part of Open Phil’s motivation for making such commitments.
I think the more uncertain you are, the more learning and option value matter, along with some other factors I will probably discuss in the future. I agree that committing to a cause, and helping support a field in its early stages, can increase room for more funding, but I think it’s a pretty slow and unpredictable process. In the future we may see enough room for more funding in our top causes to justify transitioning to a more concentrated funding approach, but I think the tradeoffs implied in the OP have very limited relevance to the choices we’re making today.