For several months I’ve been intermittently working on a project to provide more scalable and higher-quality feedback on project proposals. The first alpha-stage test should start within weeks, and I’ll likely post a draft of the proposal soon.
Very rough reply … the bottleneck is a combination of the factors you mention, but the most constrained part of the system is actually something like the time of senior people with domain expertise and good judgement (insofar as we are discussing projects oriented toward long-termist, meta, AI alignment, and similar work). Adding people to the funding organisations would help a bit, but less than you would expect: the problem is that when evaluating, e.g., a somewhat meta-oriented startup that is also trying to do something about AI alignment, you as a grantmaker often do not have the domain experience, and need to ask domain experts, and sometimes macrostrategy experts. (If the proposal is sufficiently ambitious or complex, or both, even junior domain experts would be hesitant to endorse it.) Unfortunately, the number of people with final authority is small, their time is precious, and they are often very busy with other work.
edit: To gesture toward the solution … the main thing the proposed system will try to do is “amplify” the precious experts. For some ideas on how this can be done, see Ozzie’s posts; other ideas can be ported from academic peer review, and others from anything-via-debate.
[meta: I’m curious, why was this posted anonymously?]
Sounds good! Glad to hear that this is being worked on.
why was this posted anonymously?
I didn’t want to jeopardize the projects I’m associated with by criticizing those that might fund them. It’s not that I expect a bad reaction, but the stakes were too high, especially because I would be taking a risk on behalf of other people.
I’m more than happy to reveal my identity if this post works out well, and/or if there’s a reason that anonymity is a bad rule.
This post seems very insightful to me, and it seems to have worked out very well in terms of upvotes (and it seems like it would increase your chances of getting funding?). I’d be interested to learn who wrote this, but of course there’s no need to say if you prefer not to. :)
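Thanks! I wrote it :)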
I worked on this problem for a few years and agree that it’s a bottleneck not just in EA, but globally. I do think that the work on prediction is one potential “solution”, but there are additional problems with getting people to actually adopt solutions. The incentive for the people in power to switch to a solution that gives them less power is low, and there are lots of evolutionary pressures that lead to the current vetting procedures. I’d love to talk more with you about this, as I’m working on similar things, although I’ve moved away from this exact problem.
Very rough reply … the bottleneck is a combination of the factors you mention, but the most constrained part of the system is actually something like the time of senior people with domain expertise and good judgement
This makes sense and leads me to somewhat downgrade my enthusiasm for my “Earn to Learn To Vett” comment (although I suspect it’s still good on the margin).
I am unclear on whether the main constraint on evaluating EA projects in general is the “time of senior people with domain expertise.” For-profit venture capitalists are usually not the world’s leading experts in a particular area. Domain familiarity is valuable, but a “senior” or “expert” level of domain knowledge does not seem all that helpful for assessing the likelihood that something will succeed. Like VCs, many EA funders I’ve spoken with rely strongly on factors that do not require a high level of domain familiarity, such as the strength of the founding team, when deciding whether to fund a project. Some amount of domain expertise may be helpful in evaluating certain types of highly complex or research-heavy projects, but most of the projects that I’ve seen, and that other funders are funding, do not seem to involve this level of deep domain complexity.