Doing EA Better: grant-makers should consider grant app peer review along the public-sector model
Epistemic status: speculative
This is a response to recent posts including Doing EA Better and The EA community does not own its donors’ money. In order to make better funding decisions, some EAs have called for democratizing EA’s funding systems. This can be problematic, because others have raised questions (paraphrasing heavily) such as: “how do we decide who gets a vote?” and “would funders still give if they were forced to follow community preferences?” The same EAs have argued that EA decision-making is “highly centralised, opaque, and unaccountable”, and said that to improve our impact on the world, the effective altruism movement should be more decentralized and there should be greater transparency amongst EA institutions.
To meet both families of concerns expressed over the last week, I propose a grant-assessment system that improves transparency, decentralizes decision-making, and could better inform grant allocation by drawing information from a wider section of the community, whilst maintaining funders’ prerogative to select the areas they wish to donate to. The proposal is to adopt a peer-review process modeled on the grant-making systems run by public bodies in the United States, such as the National Institutes of Health and the National Science Foundation.
In this model, the funder’s program manager makes decisions about grant awards based on reviews and numerical scores allocated by peer reviewers coordinating in expert panels to evaluate grant applications. This would be a positive-sum change that benefits both funders and the community: the community has more input into the grant-making process, and funders benefit from expertise in the community to better achieve their objectives.
In the rest of this post, I will describe the National Institutes of Health grant evaluation process, describe why I think now is the right time for the effective altruism movement to consider peer review as part of a more mature grant evaluation process, give some notes on implementation in EA specifically, and describe how this approach can both maintain funders’ prerogative to spend their own money as they wish and give the community a greater level of decision-making.
The grant peer review process at the NIH and NSF
National Institutes of Health
The National Institutes of Health (NIH) uses a peer review process to evaluate grant applications. This process involves the formation of ‘study sections’, which are groups of experts in the relevant field who review and evaluate grant applications.
When an application is received, it is assigned to a study section based on its scientific area of focus. Each study section is composed of scientists, physicians, and other experts, drawn from the scientific community at large, who have experience in the field related to the research proposed in the application. Study section members are typically compensated for participation, but participation isn’t a full-time job; it’s generally a small additional duty researchers can choose to take on as part of their broader set of research activities.
The study section members more-or-less independently review the applications and provide written critiques that are used to evaluate the strengths and weaknesses of each application. The study section then meets to discuss the applications, and each member provides a priority score and written summary of each application. These scores and summaries are used to determine which applications will be funded.
In summary, the NIH uses study sections composed of experts in the relevant field to review and evaluate grant applications through a peer review process. The study section members provide written critiques, scores, and summaries of the applications, which are used to determine which applications will be funded.
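To make the mechanics concrete, here is a minimal sketch in Python of how a funder might aggregate independent reviewer scores into a ranked list. The scoring scale, the simple averaging rule, and the application names are illustrative assumptions for this post, not the NIH’s actual scoring formula.

```python
from statistics import mean

# Hypothetical reviewer scores for each application (lower = better,
# loosely following the NIH convention of 1 = exceptional, 9 = poor).
scores = {
    "app-001": [2, 3, 2],
    "app-002": [5, 4, 6],
    "app-003": [1, 2, 4],
}

def rank_applications(scores):
    """Average each application's reviewer scores and sort best-first."""
    averaged = {app: mean(vals) for app, vals in scores.items()}
    return sorted(averaged.items(), key=lambda item: item[1])

for app, avg in rank_applications(scores):
    print(f"{app}: average priority score {avg:.1f}")
```

In practice a panel discussion would follow the independent scoring, and a program manager would make the final call, but the basic input to that decision is a ranking like the one above.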
National Science Foundation
Relative to the NIH, the National Science Foundation (NSF) has a remit to fund broader, basic scientific research that does not necessarily have immediate applications.
The NSF uses a peer-review process similar to the NIH’s to evaluate grant applications. However, there are some key differences. For instance, the NSF review process generally requires a broader scope of expertise and allows for a multidisciplinary approach to the review of proposals. Additionally, the NSF review process is typically done in two rounds and includes a “broader impacts” criterion, which evaluates the potential impact of the proposed research on society as a whole.
Analogies to the EA context
Very roughly speaking, in an EA context, we can imagine the NIH to have a remit more similar to EA Global Health and Development and Animal Welfare causes. Their outcomes can often be measured quantitatively, even if they cannot always be quantitatively compared against one another. In contrast, the NSF might have a remit more similar to existential risk causes, where the targets, while important (such as improving democratic decision-making), are less quantifiably related to the outcome of reducing existential risk.
Why the time is right to adopt peer-review in grant-making processes
A short history of the effective altruism funding environment
The following is an outsider’s account, and it may well be wrong, but it seems likely to track some of the dynamics of the funding situation in EA, in my opinion.
From around 2015 to the establishment of the FTX Foundation around 2021, EA Funds had access to a substantial amount of funding, but the movement overall was still “funding constrained”, i.e., there was more work to do and more people to do it than there was funding available. At the same time, the community was fairly small, and aside from a few well-developed areas, such as AI Safety, there wasn’t a large community of people whose knowledge of various topics was more credible than that within grantmaking organizations.
When the FTX Foundation opened for business, SBF said something to the effect that he felt morally obligated to spend at least 5% of his net worth per year; otherwise, are you really a credible billionaire effective altruist? With a net worth at one time valued well over $10b, that would mean spending at least $500m per year. Effective altruism was now “talent constrained”, i.e., a lot of money had to be spent very quickly. In that environment, it really could have seemed suboptimal to create extra systems that could slow down decision-making.
In late 2022, funders including OpenPhil said the bar for funding must be raised; it once again seems like funding is constrained. However, unlike in our previous funding-constrained environment, we now have many more effective altruist organizations, affiliated independent and academic researchers, and other experts who sit outside of funding organizations but could provide relevant expertise.
People in the community who could participate in peer review
EA Funds could benefit from adopting a peer review process similar to that used by the NIH or NSF. In the past, a peer review process may have been difficult due to a lack of established experts in the field of effective altruism. However, as the community has grown, there are now enough established experts who could be called upon to review applications.
The effective altruism community has grown significantly in recent years. That growth has increased the number of independent researchers and academics who have developed expertise in specific areas relevant to effective altruism, and it has led to the formation of established effective altruism organizations, which employ experts who work on specific causes or topics. So there are a variety of people who might all have important perspectives to share:
Specialists within established EA research organizations
Academics within the EA community
Academics outside the EA community with expertise in areas of interest to EA funders
Independent researchers in the EA community who funders recognize as particularly knowledgeable in a topic
Experts in all of these areas can serve as potential peer reviewers and provide valuable evaluations of grant applications.
Implementation
While there are existing funding models to draw on, and this post has pointed to the NIH and NSF models in particular, there are undoubtedly differences in the EA landscape which need to be considered.
Funders could decide for themselves what point along a spectrum from “fast” to “thorough” they’d like to be. No peer review at all is at the “fast” end of the spectrum. Perhaps slightly past that is a grant-maker calling a couple of friends who know something more about the topic for their opinion. A Google Form with a 10-minute completion time per grant, sent to three carefully chosen experts with a request for comment and a rating, would be a little more thorough still. At the other end of the scale, a study section of experts could independently read and review a set of grants, then sit down over a video call or in person to discuss them and come to a collective recommendation about which grants to fund.
There are two issues I think need particularly careful thought.
First, while selecting from a set of existing, established researchers leverages existing expertise, it does run the risk of allowing cliques of experts to capture funding interests. A funder might build in institutional counterbalances to this epistemic risk by regularly seeking contributors from outside existing groups of experts.
Second, while the EA community is significantly bigger than it was a few years ago, it remains small enough for significant forms of corruption and gamesmanship in grant-making (“I’ll support your application if you support mine!”). It would make sense to impose strict institutional safeguards to maintain degrees of separation between reviewers and recipients wherever possible, for any potential conflicts of interest to be disclosed, and for violations of these safeguards to be met with appropriate penalties, such as exclusion from reviewing future funding rounds.
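As one illustration of how a lightweight conflict-of-interest safeguard might work in practice, the sketch below flags reviewer–applicant pairs that share an affiliation. The data model and the single “shared affiliation” rule are assumptions made up for this example, not a proposal for a specific policy; a real system would draw on disclosure forms and a richer set of relationships.

```python
# Hypothetical reviewer and applicant affiliation records.
reviewers = {
    "reviewer_a": {"Org X", "University Y"},
    "reviewer_b": {"Org Z"},
}
applicants = {
    "app-001": {"Org X"},
    "app-002": {"Institute W"},
}

def flag_conflicts(reviewers, applicants):
    """Return (reviewer, application) pairs that share at least one affiliation."""
    return [
        (reviewer, app)
        for reviewer, reviewer_orgs in reviewers.items()
        for app, applicant_orgs in applicants.items()
        if reviewer_orgs & applicant_orgs  # any overlap is flagged for recusal
    ]

print(flag_conflicts(reviewers, applicants))
# [('reviewer_a', 'app-001')]
```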
Will it help?
I’m not sure. In theory, I expect that the average grant application carries some strengths and weaknesses, and that the average grant reviewer will miss some of them. Reviewers with independent perspectives will tend to catch different strengths and weaknesses, so by increasing the number of well-informed reviewers, the review process will, on average, identify more of them.
In practice, I don’t have the evidence to demonstrate this will work. If you nodded along with the Doing EA Better authors when they said
We need to value expertise and rigour more
or
EA institutions should see EA ideas as things to be co-created with the membership and the wider world, rather than transmitted and controlled from the top down
or
EAs should be more open to institutions and community groups being run democratically or non-hierarchically
then you might like the proposed funding model because it more highly values expertise, or because it facilitates EA grant-making institutions co-creating EA ideas (grant application choices) with more community members, or because it is a less hierarchical model than the status quo.
Why would we want to emulate government peer review?
We shouldn’t. We should create our own model!
If you’re worried that a review process could be too slow and cumbersome, perhaps you’d agree that an initial implementation by a small regranter, or an implementation with a very lightweight review process (perhaps a quick Google Form filled out by a small group of experts and reviewed by a grant-maker), would not carry a substantial cost, while at the same time allowing our movement to learn whether such a system would be worth adopting more widely.
If you’re worried that governments are the last group of organizations we should seek to emulate, I’d suggest that perhaps, to the extent government grant-making institutions are inefficient, it’s because of the constrained policy environment they exist within, rather than something inherent in the process of peer-reviewed grant-making.
If you’re worried that peer review on the whole is a broken model, I sympathize. On the other hand, consider that (as with government grant-making) the problem isn’t inherent in the practice of peer review; it’s the specific form academic peer review has evolved into, given the incentives it typically operates under.
If you’re a critic of government, or of academic peer review, ask yourself whether your objections come from the practice of asking knowledgeable people their opinion about things, or from something else inherent in the way that governments and academia work.
How funders can maintain the prerogative to donate money as they choose
On this forum, various people have argued that funders’ money is not owned by the community, and the community doesn’t have the right to tell funders how to spend their money. On the other hand, one comment ‘bit the bullet’, and said that while the community may not have the right to dictate to funders, funders do have a responsibility to spend their money in a way that does the most good. This is arguably a foundational tenet of effective altruism.
By asking expert reviewers to rate and select grant applications, funders can leverage the community’s expertise to better achieve their own priorities. While they would be giving up some control, the reality is that many funders are looking for good, reliable, and trustworthy advice about how to achieve their objectives.
There is always a trade-off with decision-making, and it may turn out that the cost in time and money for establishing a peer-reviewed grant process does not improve granting enough to justify the cost.
But peer reviewing doesn’t have to be a laborious process; it can be almost as brief or extensive as you like. There’s a trade-off between transparency, better-informed decision-making, and decision-making efficiency, and I suspect the optimal point, from an impact perspective, is somewhere between the two extremes of “no outside feedback” and “NIH-level study sections”.
There are a couple of ways funding organizations would maintain control over their donations. First, while grant review panels give grants ratings and make recommendations, there’s no reason funding organizations couldn’t make the final decision on funding; in fact, they probably should. Second, funding organizations set the scope of grant review panels and choose when to use them.
Funders might decide on a set amount of money to allocate to each of existential risk and global health and development, perhaps along “worldview diversification” lines, while allowing review panels to set priorities within each cause area. This could be quite granular, for instance, asking an existential risk review panel to evaluate the best grants for improving democratic decision-making, or for attracting talent to prosaic AI alignment. At the other end of decision-making, it would also be possible to set up a study section of priorities researchers to determine how much funding each worldview or each cause area would be given. It remains in the funders’ hands to decide how to spend their money.
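As a rough sketch of how this two-level structure could look, the snippet below keeps top-level budgets per worldview in the funder’s hands while a panel’s within-area rankings determine which grants are funded from each bucket. All of the amounts, area names, and grant names are made up purely for illustration.

```python
# Funder-set budgets per worldview (illustrative amounts only).
budgets = {"global_health": 3_000_000, "existential_risk": 2_000_000}

# Panel-ranked grants within each area: (grant, requested amount), best first.
panel_rankings = {
    "global_health": [("bednets", 1_500_000), ("deworming", 2_000_000)],
    "existential_risk": [("ai_talent", 800_000), ("biosecurity", 1_200_000)],
}

def allocate(budgets, panel_rankings):
    """Fund grants in panel-ranked order until each area's budget runs out."""
    funded = {}
    for area, grants in panel_rankings.items():
        remaining = budgets[area]
        funded[area] = []
        for name, amount in grants:
            if amount <= remaining:
                funded[area].append(name)
                remaining -= amount
    return funded

print(allocate(budgets, panel_rankings))
# {'global_health': ['bednets'], 'existential_risk': ['ai_talent', 'biosecurity']}
```

The design point is simply that the funder controls the outer dictionary (how much each worldview gets) while the community-sourced panel controls the inner rankings.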
Conclusion
I won’t pretend this solution would fully address the concerns from the Doing EA Better post. This is not a democratizing solution in the sense of allowing community members to vote. But I hope it might be a useful solution for some of the problems outlined in that post and address some of the priorities expressed. Specifically, this proposal would decentralize decision-making and more highly value expertise and rigor.
Acknowledgements
Thanks to Justis from the EA Forum team and Bruce Tsai for helpful comments. All errors and bad takes are solely my own, and this post represents no one’s views except my own lightly held ideas.