Grantmakers should give more feedback
Background
I’ve been actively involved in EA since 2020, when I started EA Romania. In my experience, one problem that frustrates many grant applicants is the limited feedback offered by grantmakers. In 2022, at EAG London, while trying to get more detailed feedback on my own application at the EAIF office hours, I realized that many other people had similar complaints. EAIF’s response seemed polite but not very helpful. Shortly after this experience, I also read a forum post in which Linch, a junior grantmaker at the time, argued that it’s “rarely worth your time to give detailed feedback.” The argument was:
[F]rom a grantmaking perspective, detailed feedback is rarely worthwhile, especially to rejected applicants. The basic argument goes like this: it’s very hard to accurately change someone’s plans based on quick feedback (and it’s also quite easy to do harm if people overupdate on your takes too fast just because you’re a source of funding). Often, to change someone’s plans enough, it requires careful attention and understanding, multiple followup calls, etc. And this time investment is rarely enough for you to change a rejected (or even marginal) grant to a future top grant. Meanwhile, the opportunity cost is again massive.
Similarly, giving useful feedback to accepted grants can often be valuable, but it just isn’t high impact enough compared to a) making more grants, b) making grants more quickly, and c) soliciting creative ways to get more highest-impact grants out.
Since then, I have heard many others complain about the lack of feedback when applying for grants in the EA space. My specific experience was with the EAIF, but based on what I’ve heard, this problem may be endemic to EA grantmaking culture in general.
The case for more feedback
Linch’s argument that the opportunity cost of giving detailed feedback is “massive” is only valid if by “detailed feedback” he means something genuinely time-consuming. It cannot justify EAIF’s current policy of giving no feedback at all by default, and only a single sentence of feedback upon request. Using this argument to justify something so extreme would be an example of what some might call “act utilitarianism”, “naive utilitarianism”, or “single-level” utilitarianism: in certain cases, giving feedback may seem like a waste of resources compared to other counterfactual actions. But if you only consider first-order consequences, killing a healthy patient during a checkup and using his organs to save five others is also “effective”. In reality, we also need to consider higher-order consequences. Is it healthy for a movement to adopt a policy of not giving feedback to grant applicants?
Personally, I feel such a policy risks seeming disrespectful towards grant applicants, who spend time and energy planning projects that end up never being implemented. This is not to say that the discomfort of disappointed applicants counts for more than the suffering of malaria-infected children. But we are human, and there is a limit to how much we can change via emotional resilience workshops. Besides, there is such a thing as too much resilience. I have talked to other EAs who applied for funds, for 1:1 advice from 80,000 Hours, and so on, and many of them felt frustrated and somewhat disrespected after being rejected multiple times with no feedback or explanation. I find this particularly worrisome in the case of founders of national groups, since our experience may influence the development of the local movement. A paragraph from an article in The Economist, I think, adds to my point:
As the community has expanded, it has also become more exclusive. Conferences, seminars and even picnics held by the Centre for Effective Altruism are application-only. Simon Jenkins was an early member of the community and founded an effective-altruism group in Birmingham in Britain. He has since drifted somewhat away from the movement, after years of failing to get a job at its related institutions. It has become both more “rigorously controlled”, he said, and more explicitly elitist. During an event at a Birmingham pub he once heard someone announce that “any Oxbridge grad can get involved”. “I was like, hold on a sec, is that the standard?”
Of course, such events can be interpreted in many ways, but the point is that EA has a reputation for harboring certain problematic attitudes, and that reputation harms the movement. Giving feedback that is longer than one line would be a good step toward correcting it.
An argument from virtue ethics
I’m a typical male software developer who scores highish on autistic traits (33/50). I can relate to the hyper-systematizing way of thinking that is dominant in EA circles. In fact, this is one of the things that attracted me to EA. However, even I have started to see how this way of thinking about ethics can be problematic or extreme in certain cases.
In an article titled “Effective altruism is logical, but too unnatural to catch on”, psychology professor Alan Jern argues that EA logic implies the following: if you’re escaping a burning building and can save either a child or a Picasso worth millions of dollars, you should save the Picasso, sell it, and donate the proceeds to effective charities that will save many children. When I first read the article, I thought this scenario was a strawman, a naive interpretation of what EAs actually believe. In 2022, however, at a Giving What We Can meetup organized after EAG London, I had this exact discussion with a couple of people. I was surprised to find that many EAs actually agreed that the right thing to do was to save the Picasso.
Personally, I’d save the child rather than the Picasso, and I don’t think this is necessarily a violation of EA principles. EA is right to point out that much of the world’s charity is driven by emotion, but I don’t think EA should promote the complete elimination of emotion from moral decision-making. EA should not be seen as a project that replaces emotions with a hyper-rational approach. Aristotle held that virtue is the mean between two vices. I believe that, just as being overly emotional is a vice, so is being overly robotic in our moral calculations. As Joshua Greene argues in Moral Tribes:
If what utilitarianism asks of you seems absurd, then it’s not what utilitarianism actually asks of you. Utilitarianism is, once again, an inherently practical philosophy, and there’s nothing more impractical than commanding free people to do things that strike them as absurd and that run counter to their most basic motivations. Thus, in the real world, utilitarianism is demanding, but not overly demanding. It can accommodate our basic human needs and motivations, but it nonetheless calls for substantial reform of our selfish habits.
In The Life You Can Save, Peter Singer similarly argues that:
Asking people to give more than almost anyone else gives risks turning them off. It might cause some to question the point of striving to live an ethical life at all. Daunted by what it takes to do the right thing, they may ask themselves why they are bothering to try. To avoid that danger, we should advocate a level of giving that will lead to the greatest possible positive response.
Of course, where to draw the line between overly emotional and overly robotic is ultimately an empirical question. As a consequentialist, I would argue that the sweet spot between the emotional and the rational is the one that maximizes the total long-term well-being of sentient life. Unfortunately, it’s impossible to know for sure where this spot actually is. We can be sure, however, that if we promote an attitude that is too robotic, too cold and calculating, too mathematical and unemotional, EA will become an excessively narrow movement that attracts only a specific kind of personality. Taken far enough, there is a risk that EA views will be so shocking to the outside world that the movement’s reputation will be damaged even further than it already has been. These repercussions are the kinds of second-order consequences that multi-level utilitarianism asks us to consider when devising heuristic rules to guide a community.
In some ways, not giving satisfactory feedback to grant applicants is like saving the Picasso and letting the child die: it might be the best decision in a hypothetical scenario with no higher-order consequences, but it is not the best decision in the real world. People need feedback. People need to know their time and effort are valued. People need to know how to improve before they apply for funds again. They need to know whether trying again is worth it. The “when in doubt, apply” culture, combined with the “we can do better things with our time than give feedback” attitude and the lack of transparency about the statistical odds of getting funded, is a dangerous mix that creates resentment and harms the community.
The case for more democracy
Of course, I may be wrong. Perhaps the sample of people I spoke to who expressed resentment is not representative. Maybe there are so many individuals and groups applying for funds that it doesn’t matter if some become frustrated and abandon the movement. Perhaps keeping the current feedback policy is actually better for the long-term well-being of sentient life. Or maybe I am right and it would be better to give more feedback. How can we know? That’s the problem with multi-level utilitarianism: it becomes speculative very fast. It’s impossible to know whether one set of rules and social norms really would be better than another. However, one solution to this epistemic conundrum is democracy. We can appeal to the wisdom of crowds and ask people to vote on which option they think would empirically turn out to be better.
In my experience, one aspect of EA that is often viewed as problematic is its lack of democratic values and accountability. In the secular humanist movement, where I’ve been involved for longer than in EA, democracy is an explicit value, enshrined in Humanists International’s statute. Although I appreciate EA’s culture of asking for feedback, I sometimes wonder what happens to that feedback. In the secular humanist movement, if people are frustrated with the administration, they can voice their criticism at conferences or in other communication channels, and if those frustrations are not addressed, members can vote leaders out at the next elections. If EAs are frustrated with the movement’s organizational structures and decision-making processes, what can we do?
I understand that democracy has its dangers, and that sometimes we should defer to experts rather than crowds. Still, we must find a balance between oligarchy and mob rule. I think EA is erring on the side of elitism and overlooking the value democracy can have as a mechanism for error-correction, and thus progress.
Conclusion
To summarize my argument:
There have been several cases of grantmakers giving limited feedback when rejecting proposals. This lack of feedback harms the community.
If grantmakers commit to a policy of giving more feedback, community health will improve, and the change will be net positive for the movement and the world.
If we define our policies more democratically, they’re more likely to have a net positive impact because the wisdom of crowds will make our empirical assumptions more accurate.
What do you think? Do you agree that grantmakers don’t give enough feedback? Do you agree that EAs should be more suspicious of speculative arguments about the potential impact of certain policies? Do you think more democracy could improve our decision making? In what ways do you think my reasoning might be wrong? Looking forward to hearing your thoughts :)