Just to address point (2), the comments in “EA is vetting-constrained” suggest that EA is not that vetting-constrained:
Denise Melchin of Meta Fund: “My current impression for the Meta space is that we are not vetting constrained, but more mentoring/pro-active outreach constrained.… Yes, everything I said above is sadly still true. We still do not receive many applications per distribution cycle (~12).”
Claire Zabel of Open Philanthropy: “Based on my experience doing some EA grantmaking at Open Phil, my impression is that the bottleneck isn’t in vetting precisely, though that’s somewhat directionally correct… Often I feel like it’s an inchoate combination of something like ‘a person has a vague idea they need help sharpening, they need some advice about structuring the project, they need help finding a team, the case is hard to understand and think about’…”
Jan Kulveit of FHI: “as a grantmaker, you often do not have the domain experience, and need to ask domain experts, and sometimes macrostrategy experts… Unfortunately, the number of people with final authority is small, their time precious, and they are often very busy with other work.”
One story, then, is that EA has successfully eliminated a previous funding bottleneck for high-quality world-saving projects. Now we have a different bottleneck—the supply of high-quality world-saving projects (and people clearly capable of carrying them out).
In a centrally planned economy like this, where the demand is artificially generated by non-market mechanisms, you’ll always have either too much supply, too much demand, or a perception of complacency (where we’ve matched them up just right, but are disappointed that we haven’t scaled them both up even more). None of those problems indicate that something is wrong. They just point to the present challenge in expanding this area of research. There will always be one challenge or another.
So how do we increase the supply of high-quality world-saving projects? Well, start by factoring projects into components:
A sharp, well-evaluated, timely idea with world-saving potential that also provides the team with enough social reward they’re willing to take it on
A proven, generally competent, reliable team of experts who are available to work, committed to that idea, yet able to pivot
Adequate funding both for paying the team and funding their work
Access to outside consulting expertise
In many cases, significant political capital
Viewed from this perspective, it’s not surprising at all that increasing the supply of such projects is vastly more difficult than increasing funding. On the other hand, this gives us many opportunities to address this challenge.
Perhaps instead of adding more projects to the list, we need to sharpen up ideas for working on them. Amateur EAs need to spend less time dreaming up novel causes/projects and more time assembling teams and making concrete plans—including for their personal finances. EAs need to spend more time building up networks of experts and government workers outside the EA movement.
I imagine that amateur EAs trying to skill up might need to make some serious sacrifices in order to gain traction. For example, they might focus on building a team to execute a project, but by necessity make the project small, temporary, and cheap. They might need to do a lot of networking and take classes, just to build up general skills and contacts, without having a particular project or idea to work on. They might need to really spend time thinking through the details of plans, without actually intending to execute them.
If I had to guess, here are some things that might benefit newer EAs who are trying to skill up:
Go get an MS in a hard science to gain some skill executing concrete novel projects and working in a rigorous intellectual discipline.
Write a book and get it published, even if it’s not on anything related to EA.
Get an administrative volunteer position.
Manage a local non-EA altruistic project to improve their city.
Volunteer on some political campaigns.
Just to address point (2), the comments in “EA is vetting-constrained” suggest that EA is not that vetting-constrained:
I actually don’t think that this is correct.
Denise’s comment does suggest that, for the meta space specifically.
But Claire’s comment seems broadly in agreement with the “vetting-constrained” view, or at least the view that that’s one important constraint. Some excerpts:
Based on my experience doing some EA grantmaking at Open Phil, my impression is that the bottleneck isn’t in vetting precisely, though that’s somewhat directionally correct. It’s more like there’s a distribution of projects, and we’ve picked some of the low-hanging fruit, and on the current margin, grantmaking in this space requires more effort per grant to feel comfortable with, either to vet (e.g. because the case is confusing, we don’t know the people involved), to advise (e.g. the team is inexperienced), to refocus (e.g. we think they aren’t focusing on interventions that would meet our goals, and so we need to work on sharing models until one of us is moved), or to find. [...] Overall, I think generating more experienced grantmakers/mentors for new projects is a priority for the movement. [emphasis added]
And Jan Kulveit’s comment is likewise more mixed.
And several other comments mostly just agree with the “vetting-constrained” view. (People can check it out themselves.)
Of course, this doesn’t prove that EA is vetting-constrained—I’m just contesting the specific claim that “the comments” on that post “suggest that EA is not that vetting-constrained”. (Though I also do think that vetting is one key constraint in EA, and I have some additional evidence for that that’s independent of what’s already in that post and the comments there, which I could perhaps try to expand on if people want.)
In a centrally planned economy like this, where the demand is artificially generated by non-market mechanisms, you’ll always have either too much supply, too much demand, or a perception of complacency (where we’ve matched them up just right, but are disappointed that we haven’t scaled them both up even more). None of those problems indicate that something is wrong. They just point to the present challenge in expanding this area of research. There will always be one challenge or another.
I think there’s something valuable to this point, but I don’t think it’s quite right.
In particular, I think it implies the only relevant type of “demand” is that coming from funders etc., whereas I’d want to frame this in terms of ways the world could be improved. I’m not sure we could ever reach a perfect world where it seems there’s 0 room for additional impactful acts, but we could clearly be in a much better world where the room/need for additional impactful acts is smaller and less pressing.
Relatedly, until we reach a far better world, it seems useful to have people regularly spotting what there’s an undersupply of at the moment and thinking about how to address that. The point isn’t to reach a perfect equilibrium between the resources and then stay there, but to notice which type of resource tends to be particularly useful at the moment and then focus a little more on providing/finding/using that type of resource. (Though some people should still do other things anyway, for reasons of comparative advantage, taking a portfolio approach, etc.) I like Ben Todd’s comments on this sort of thing.
In particular, I think it implies the only relevant type of “demand” is that coming from funders etc., whereas I’d want to frame this in terms of ways the world could be improved.
My position is that “demand” is a word for “what people will pay you for.” EA exists for a couple reasons:
Some object-level problems are global externalities, and even governments face a free rider problem. Others are temporal externalities, and the present time is “free riding” on the future. Still others are problems of oppression, where morally-relevant beings are exploited in a way that exposes them to suffering.
Free-rider problems by their nature do not generate enough demand for people to do high-quality work to solve them, relative to the expected utility of the work. This is the problem EA tackled in earlier times, when funding was the bottleneck. (See the sketch after this list for a concrete version of this shortfall.)
Even when there is demand for high-quality work on these issues, supply is inelastic. Offering to pay a lot more money doesn’t generate much additional supply. This is the problem we’re exploring here.
The underlying root cause is lack of self-interested demand for work on these problems, which we are trying to subsidize to correct for the shortcoming.
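To make the free-rider shortfall concrete, here is a minimal sketch. It assumes, purely for illustration, that a project’s benefits split evenly across actors; the numbers are made up.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Toy free-rider model (illustrative only; the even benefit split is an assumption).
A project costs $c$ and creates total social value $B$, split evenly
across $n$ actors, so each actor privately captures only $B/n$:
\[
  \underbrace{B > c}_{\text{socially worthwhile}}
  \qquad \text{vs.} \qquad
  \underbrace{B/n > c}_{\text{privately fundable}}
\]
With $n = 100$ and $B = 50c$, the project returns fifty times its cost
in social value, yet each actor's private share is $0.5c < c$, so
self-interested demand for the work is zero.
\end{document}
```

This is the sense in which demand, read as “what people will pay you for,” can understate the social value of work on global public goods by a factor of up to the number of actors involved, and why subsidized altruistic demand is needed.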
My position is that “demand” is a word for “what people will pay you for.”
This seems reasonable (at least in an econ/business context), but I guess really what I was saying in my comment is that your previous comment seemed to me to focus on demand and supply and note that they’ll pretty much always not be in perfect equilibrium, and say “None of those problems indicate that something is wrong”, without noting that the thing that’s wrong is animals suffering, people dying of malaria, the long-term future being at risk, etc.
I think I sort-of agree with your other two points, but I think they seem to constrain the focus to “demand” in the sense of “how much will people pay for people to work on this”, and “supply” in the sense of “people who are willing and able to work on this if given money”, whereas we could also think about things like what non-monetary factors drive various types of people to be willing to take the money to work on these things.
(I’m not sure if I’ve expressed myself well here. I basically just have a sense that the framing you’ve used isn’t clearly highlighting all the key things in a productive way. But I’m not sure there are actually any interesting, major disagreements here.)
Your previous comment seemed to me to focus on demand and supply and note that they’ll pretty much always not be in perfect equilibrium, and say “None of those problems indicate that something is wrong”, without noting that the thing that’s wrong is animals suffering, people dying of malaria, the long-term future being at risk, etc.
In the context of the EA forum, I don’t think it’s necessary to specify that these are problems. To state it another way, there are three conditions that could exist (let’s say in a given year):
Grantmakers run out of money and aren’t able to fund all high-quality EA projects.
Grantmakers have extra money, and don’t have enough high-quality EA projects to spend it on.
Grantmakers have exactly enough money to fund all high-quality EA projects.
None of these situations indicate that something is wrong with the definition of “high quality EA project” that grantmakers are using. In situation (1), they are blessed with an abundance of opportunities, and the bottleneck to do even more good is funding. In situation (2), they are blessed with an abundance of cash, and the bottleneck to do even more good is the supply of high-quality projects. In situation (3), they have two bottlenecks, and would need both additional cash and additional projects in order to do more good.
No matter how many problems exist in the world (suffering, death, X-risk), some bottleneck or another will always exist. So the simple fact that grantmakers happen to be in situation (2) does not indicate that they are doing something wrong, or making a mistake. It merely indicates that this is the present bottleneck they’re facing.
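One way to make the three situations concrete is a toy bottleneck model. This is my own formalization, assuming for simplicity a uniform cost per project:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Toy bottleneck model (illustrative; uniform project cost is an assumption).
With funding $M$, $P$ high-quality projects, and cost $c$ per project,
the number of projects funded is
\[
  F = \min\!\left( \lfloor M/c \rfloor ,\; P \right)
\]
Situation (1) is $M/c < P$, where only more money raises $F$;
situation (2) is $M/c > P$, where only more projects raise $F$; and
situation (3) is $M/c = P$, where raising either input alone leaves
$F$ unchanged.
\end{document}
```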
For the rest, I’d say that there’s a difference between “willingness to work” and “likelihood of success.” We’re interested in the reasons for EA project supply inelasticity. Why aren’t grantmakers finding high-expected-value projects when they have money to spend?
One possibility is that the people who could form projects, and the teams who could work on them, aren’t motivated to do so by the monetary and non-monetary rewards on the table. Perhaps if this were addressed, we’d see an increase in supply.
An alternative possibility is that high-quality ideas/teams are rare right now, and can’t be had at any price grantmakers are willing or able to pay.
I think it’s not especially useful to focus on the division into just those three conditions. In particular, we could also have a situation where vetting is one of the biggest constraints, and even if we’re not in that situation vetting is still a constraint—it’s not just about the number of high-EV projects (with a competent and willing team etc.) and the number of dollars, but also whether the grantmakers can find the high-EV projects and discriminate between them and lower-EV ones.
Relatedly, there could be a problem of grantmakers giving to things that are “actually relatively low EV” (in a way that could’ve been identified by a grantmaker with more relevant knowledge and more time, or using a better selection process, or something like that).
So the simple fact that grantmakers happen to be in situation (2) does not indicate that they are doing something wrong, or making a mistake.
I think maybe there’s been some confusion where you’re thinking I’m saying grantmakers have “too high a bar”? I’m not saying that. (I’m agnostic on the question, and would expect it differs between grantmakers.)
Yeah, I am worried we may be talking past each other somewhat. My takeaway from the grantmaker quotes from FHI/OpenPhil was that they don’t feel they have room to grow in terms of determining the expected value of the projects they’re looking at. Very prepared to change my mind on this; I’m literally just going from the quotes in the context of the post to which they were responding.
Given that assumption (that grantmakers are already doing the best they can at determining EV of projects), then I think my three categories do carve nature at the joints. But if we abandon that assumption and assume that grantmakers could improve their evaluation process, and might discover that they’ve been neglecting to fund some high-EV projects, then that would be a useful thing for them to discover.
Oh, I definitely don’t think that grantmakers are already doing the best that could be done at determining the EV of projects. And I’d be surprised if any EA grantmaker thought that that was the case, and I don’t think the above quotes say that. The three quotes you gave are essentially talking about what the biggest bottleneck is, and saying that maybe the biggest bottleneck isn’t quite “vetting”, which is not the same as the claim that there’d be zero value in increasing or improving vetting capacity.
Also note that one of the three quotes still focuses on a reason why vetting may be inadequate: “as a grantmaker, you often do not have the domain experience, and need to ask domain experts, and sometimes macrostrategy experts… Unfortunately, the number of people with final authority is small, their time precious, and they are often very busy with other work.”
I also think that “doing the best they can at determining EV of projects” implies that the question is just whether the grantmakers’ EV assessments are correct. But what’s often happening is more like they either don’t hear about something or (in a sense) they “don’t really make an EV assessment”—because a very very quick sort of heuristic/intuitive check suggested the EV was low or simply that the EV of the project would be hard to assess (such that the EV of the grantmaker looking into it would be low).
I think there’s ample evidence that these things happen, and it’s obvious that they would happen, given the huge array of projects that could be evaluated, how hard they are to evaluate, and how there are relatively few people doing those evaluations and (as Jan notes in the above quote) there is relatively little domain expertise available to them.
(None of this is intended as an insult to grantmakers. I’m not saying they’re “doing a bad job”, but rather simply the very weak and common-sense claim that they aren’t already picking only and all the highest EV projects, partly because there aren’t enough of the grantmakers to do all the evaluations, partly because some projects don’t come to their attention, partly because some projects haven’t yet gained sufficient credible signals of their actual EV, etc. Also none of this is saying they should simply “lower their bar”.)
For one of very many data points suggesting that there is room to improve how much money can be spent and what it is spent on, and suggesting that grantmakers agree, here’s a quote from Luke Muehlhauser from Open Phil regarding their AI governance grantmaking:
Unfortunately, it’s difficult to know which “intermediate goals” we could pursue that, if achieved, would clearly increase the odds of eventual good outcomes from transformative AI. Would tighter regulation of AI technologies in the U.S. and Europe meaningfully reduce catastrophic risks, or would it increase them by (e.g.) privileging AI development in states that typically have lower safety standards and a less cooperative approach to technological development? Would broadly accelerating AI development increase the odds of good outcomes from transformative AI, e.g. because faster economic growth leads to more positive-sum political dynamics, or would it increase catastrophic risk, e.g. because it would leave less time to develop, test, and deploy the technical and governance solutions needed to successfully manage transformative AI? For those examples and many others, we are not just uncertain about whether pursuing a particular intermediate goal would turn out to be tractable — we are also uncertain about whether achieving the intermediate goal would be good or bad for society, in the long run. Such “sign uncertainty” can dramatically reduce the expected value of pursuing some particular goal, often enough for us to not prioritize that goal.
As such, our AI governance grantmaking tends to focus on…
…research that may be especially helpful for learning how AI technologies may develop over time, which AI capabilities could have industrial-revolution-scale impact, and which intermediate goals would, if achieved, have a positive impact on transformative AI outcomes, e.g. via our grants to GovAI.
[and various other things]
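As an aside on the “sign uncertainty” point in that quote: a toy calculation (my own illustration, not from Luke’s post) shows how quickly sign uncertainty eats expected value.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Toy sign-uncertainty calculation (illustrative; not from Luke's post).
Suppose achieving an intermediate goal helps with probability $p$
(value $+V$) and harms with probability $1 - p$ (value $-V$). Then
\[
  \mathbb{E}[\text{value}] = pV - (1 - p)V = (2p - 1)\,V
\]
At $p = 0.6$ the goal retains only $0.2V$ of its upside; at $p = 0.5$
it is worthless in expectation, before even counting the cost of
pursuing it.
\end{document}
```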
So this is a case where a sort of “vetting bottleneck” could be resolved either by more grantmakers, grantmakers with more relevant expertise, or research with grantmaking-relevance. And I think that that’s clearly the case in probably all EA domains (though note that I’m not claiming this is the biggest bottleneck in all domains).
Multiple comments from multiple fund managers on the EA Infrastructure Fund’s recent Ask Us Anything strongly suggest they also believe there are strong vetting constraints (even if other constraints also matter a lot).
So I’m confident that the start of your comment is incorrect in an important way about an important topic. I think I was already confident of this due to the very wide array of other indications that there are strong vetting constraints, the fact that the quotes you mention don’t really indicate that “EA is not that vetting-constrained” (with the exception of Denise’s comment and the meta space specifically), and the fact that other comments on the same post you’re quoting comments from suggest EA is quite vetting constrained. (See my other replies for details.) But this new batch of evidence reminded me of this and made the incorrectness more salient.
I’ve therefore given your comment a weak downvote. I think it’d be better if it had lower karma because I think the comment would mislead readers about an important thing (and the high karma will lend it more credence). But you were writing in good faith, you were being polite, and other things you said in the comment were more reasonable, so I refrained from a strong downvote.
(But I feel a little awkward/rude about this, hence the weird multi-paragraph explanation.)
Looking forward to hearing about those vetting constraints! Thanks for keeping the conversation going :)
To be clear, I agree that “vetting” isn’t the only key bottleneck or the only thing worth increasing or improving, and that things like having more good project ideas, better teams to implement them, more training and credentials, etc. can all be very useful too. And I think it’s useful to point this out.
In fact, my second section was itself only partly about vetting:
There are many orgs and funders who would be willing and able to hire or fund people to do such research if there were people who the orgs/funders could trust would do it well (and without requiring too much training or vetting). But not if the people are inexperienced, are choosing lower-priority questions, or are hard for orgs/funders to assess the skills of (see also EA is vetting-constrained and Ben Todd discussing organizational capacity, infrastructure, and management bottlenecks). [emphasis shifted]
(I also notice that some of your points sound more applicable to non-research careers. Such careers aren’t the focus of this sequence, though they’re of course important too, and I think some of my analysis is relevant to them too and it can be worth discussing them in the comments.)