Thanks for writing this, Ben!
Do you (or anyone else) have any views about the circumstances under which a peer-review process is most likely to come back as cost-effective? If this is trialed, it would be worthwhile to trial it in a set of circumstances where it has its best chance of proving its value.
For instance, you mentioned EA Funds at one point as a grantmaker that might benefit, but I do not think that would be the right place to run a trial due to their relatively small grant sizes. I don’t think seeing what peer review accomplishes on grants that commonly run in the five-figure to perhaps low-six-figure range would give it the best chance to prove itself. But others might disagree!
At a guess, I would say review might be more worthwhile for topics where the work builds on a well-developed, pre-existing body of research. So, funding a graduate to take time to learn about AI Safety full-time as a bridge to developing a project probably wouldn’t benefit from a review, but an application to develop a very specific project based on a specific idea probably would.
I don’t have a sense of how often five-to-low-six-figure grants involve very specific ideas. If you told me they usually don’t, I would definitely update against thinking a peer review would be useful in those circumstances.
I have no idea, to be honest. My belief that smaller grants might not be the best trial run for cost-effectiveness is based more on assumptions that (1) highly qualified reviewers might not think reviewing grants in that range is an effective use of their time; and (2) very quick reviews are likely to identify only clearly erroneous exercises of grantmaking discretion. Either assumption could be wrong!
But I think at that grant size, the cost-effectiveness profile might be more favorable for a system of peer review under specified circumstances rather than as an automatic practice. Knowing that they were being asked only when there was a greater chance their assistance might be outcome-determinative might also help with attracting quality reviewers.