Thanks, all, for the very thoughtful post and comments!
At some point this year, I hope to make a post about our general reasons for wanting to put some resources into the causes that look best according to different plausible background worldviews and epistemologies. Dan Keys and Telofy touched on a lot of these reasons (especially Dan’s #3 and #4).
I think our biggest disagreement with Michael is that he seems to see a couple of particular categories of giving (those relating to farm animal suffering and direct existential risk) as massively and clearly better than others, with high certainty. If we agreed, our approach would be much more similar to what Michael suggests than it is now. We have big uncertainty about our cost-effectiveness estimates, especially as they pertain to issues like flow-through effects. I’ll note that I’ve followed some of Michael’s links but haven’t ended up updating in the direction of more certainty about things he seems to be certain of (such as how we should weigh helping animals compared to helping humans).
We do think we’ve learned a lot about how to compare causes by exploring specific grants, and we think that in the long run, our current approach will yield important option value if we end up buying into worldview/background epistemology that doesn’t match our current best guess. It’s also worth noting that our approach requires commitments to causes, so our choice of focus areas will change less frequently than our views (and with a lag).
I think our other biggest disagreement with Michael is about room for more funding. We are still ramping up knowledge and capacity and have certainly not maxed out what we can do in certain causes, including farm animal welfare, but I expect this to be pretty temporary. I expect that we will hit real bottlenecks to giving more pretty soon. In particular, I am highly skeptical that we could recommend $50 million with even reasonable effectiveness on potential risks from advanced artificial intelligence in the next year (though recommending smaller amounts will hopefully, over time, increase field capacity and make it possible to recommend much more later). We’re not sure yet whether we want to prioritize wild animal suffering, but I think here there is even more of a bottleneck to effective spending in the reasonably near term.
Thanks for the response, Holden. I appreciate it when you engage with public comments on GiveWell/Open Phil.
I think our biggest disagreement with Michael is that he seems to see a couple of particular categories of giving [...] as massively and clearly better than others, with high certainty.
I’m probably more confident than you are about cause prioritization, but I don’t believe that’s necessary for my arguments. You just have to be weakly confident that one area is better, and that it has more room for funding than you can fill in the long term. But if you’re only weakly confident in one cause area being better than another then that makes Dan’s #3 look more compelling, so diversifying may be the right call in that case.
I’ll add that I agree with you that there’s almost certainly not $50 million worth of “shovel-ready” grants in AI safety, and definitely not in wild-animal suffering, but the problems are big enough that they could easily absorb this much funding if more people were working on them. Committing money to the problems is probably one of the best ways to incentivize people to work on them—Open Phil already seems to be doing this a bit with AI safety. I don’t know as much about grantmaking as you do, but my understanding is that you can create giving opportunities by committing to cause areas, which was part of Open Phil’s motivation for making such commitments.
I think the more uncertain you are, the more learning and option value matter, as well as some other factors I will probably discuss more in the future. I agree that committing to a cause, and helping support a field in its early stages, can increase room for more funding, but I think it’s a pretty slow and unpredictable process. In the future we may see enough room for more funding in our top causes to transition to a more concentrated funding approach, but I think the tradeoffs implied in the OP have very limited relevance to the choices we’re making today.