Yeah, I am worried we may be talking past each other somewhat. My takeaway from the grantmaker quotes from FHI/OpenPhil was that they don’t feel they have room to grow in terms of determining the expected value of the projects they’re looking at. Very prepared to change my mind on this; I’m literally just going from the quotes in the context of the post to which they were responding.
Given that assumption (that grantmakers are already doing the best they can at determining the EV of projects), I think my three categories do carve nature at the joints. But if we abandon that assumption and assume that grantmakers could improve their evaluation process, and might discover that they’ve been neglecting to fund some high-EV projects, then that would be a useful thing for them to discover.
Oh, I definitely don’t think that grantmakers are already doing the best that could be done at determining the EV of projects. And I’d be surprised if any EA grantmaker thought that that was the case, and I don’t think the above quotes say that. The three quotes you gave are essentially talking about what the biggest bottleneck is, and saying that maybe the biggest bottleneck isn’t quite “vetting”, which is not the same as the claim that there’d be zero value in increasing or improving vetting capacity.
Also note that one of the three quotes still focuses on a reason why vetting may be inadequate: “as a grantmaker, you often do not have the domain experience, and need to ask domain experts, and sometimes macrostrategy experts… Unfortunately, the number of people with final authority is small, their time precious, and they are often very busy with other work.”
I also think that “doing the best they can at determining EV of projects” implies that the question is just whether the grantmakers’ EV assessments are correct. But what’s often happening is more like they either don’t hear about something or (in a sense) they “don’t really make an EV assessment”—because a very quick heuristic/intuitive check suggested the EV was low, or simply that the EV of the project would be hard to assess (such that the EV of the grantmaker looking into it would be low).
I think there’s ample evidence that these things happen, and it’s obvious that they would happen, given the huge array of projects that could be evaluated, how hard they are to evaluate, and how there are relatively few people doing those evaluations and (as Jan notes in the above quote) there is relatively little domain expertise available to them.
(None of this is intended as an insult to grantmakers. I’m not saying they’re “doing a bad job”, but rather simply the very weak and common-sense claim that they aren’t already picking only and all the highest EV projects, partly because there aren’t enough of the grantmakers to do all the evaluations, partly because some projects don’t come to their attention, partly because some projects haven’t yet gained sufficient credible signals of their actual EV, etc. Also none of this is saying they should simply “lower their bar”.)
For one of very many data points suggesting that there is room to improve how much money can be spent and what it is spent on, and suggesting that grantmakers agree, here’s a quote from Luke Muehlhauser from Open Phil regarding their AI governance grantmaking:
Unfortunately, it’s difficult to know which “intermediate goals” we could pursue that, if achieved, would clearly increase the odds of eventual good outcomes from transformative AI. Would tighter regulation of AI technologies in the U.S. and Europe meaningfully reduce catastrophic risks, or would it increase them by (e.g.) privileging AI development in states that typically have lower safety standards and a less cooperative approach to technological development? Would broadly accelerating AI development increase the odds of good outcomes from transformative AI, e.g. because faster economic growth leads to more positive-sum political dynamics, or would it increase catastrophic risk, e.g. because it would leave less time to develop, test, and deploy the technical and governance solutions needed to successfully manage transformative AI? For those examples and many others, we are not just uncertain about whether pursuing a particular intermediate goal would turn out to be tractable — we are also uncertain about whether achieving the intermediate goal would be good or bad for society, in the long run. Such “sign uncertainty” can dramatically reduce the expected value of pursuing some particular goal, often enough for us to not prioritize that goal.
As such, our AI governance grantmaking tends to focus on…
…research that may be especially helpful for learning how AI technologies may develop over time, which AI capabilities could have industrial-revolution-scale impact, and which intermediate goals would, if achieved, have a positive impact on transformative AI outcomes, e.g. via our grants to GovAI.
[and various other things]
So this is a case where a sort of “vetting bottleneck” could be resolved either by more grantmakers, by grantmakers with more relevant expertise, or by grantmaking-relevant research. And I think that’s clearly the case in probably all EA domains (though note that I’m not claiming this is the biggest bottleneck in all domains).