Thank you for posting this, it is definitely nice to get a funder's perspective on this!
From the other side (as someone who has applied for grants and received little to no feedback on them), and having been involved in very large-scale grantmaking through my governmental role, I fear your point (1) below is likely the greatest influence on grantmakers not providing feedback. Unfortunately, I find (and found when I was a grantmaker who was prevented from or unable to provide feedback) that this is often a cover for a lack of transparent and good reasoning practice in the grant decision process.
The vast majority of EAs are aware of reasoning transparency and good Bayesian reasoning practices. I'd hope, as I assume many members of the EA community do, that EA grantmakers have a defined method to record grantmakers' judgments and what is updating their view of a grant's potential impact and likelihood of success, not least because this would allow them to identify errors and any systematic biases that grantmakers may have, and thus improve as necessary. This should therefore be easily transferable into feedback to the grantee.
The fact that this isn't done raises questions for me. Are there such systematic processes? If not, how do grantmakers have confidence in their decision-making a priori? If there are such processes to record reasoning, why can't they be summarised and provided as feedback?
The concern raised in the post you linked by Linch, that being transparent about the reasons for not making a grant may risk applicants overupdating on the feedback, seems unfounded/unevidenced. I also question how relevant it is: given they weren't funded anyway, why would you be concerned they'd overupdate? If you don't tell them they were a near miss and what changes might change your mind, then the risk is instead that they either update randomly or the project is just completely canned, which feels worse for edge cases.
Thanks for your questions, James.
> This should therefore be easily transferable into feedback to the grantee.
I think this is where we disagree—this written information often isn’t in a good shape to be shared with applicants and would need significant work before sharing.
> The concern raised in the post you linked by Linch, that being transparent about the reasons for not making a grant may risk applicants overupdating on the feedback, seems unfounded/unevidenced. I also question how relevant it is: given they weren't funded anyway, why would you be concerned they'd overupdate?
The concern here is that people may alter their plans based on the feedback in the hope that it would give them a better chance of getting the opportunity in the future. As Linch says in his post:
> Often, to change someone’s plans enough, it requires careful attention and understanding, multiple followup calls, etc.
I've personally seen cases where feedback seems to have sent a project off in a direction that isn't especially good. This can happen when people have different ideas of what reasonable steps to take in response to the feedback would be.
But you're right: Linch and I don't provide evidence for the rate of problems caused by overupdating. This is a good nudge for me to think about how problematic this is overall, and whether I'm overreacting due to a few cases.
> If you don't tell them they were a near miss and what changes might change your mind, then the risk is instead that they either update randomly or the project is just completely canned, which feels worse for edge cases.
I think it is most useful for decision-makers to share feedback when a) it is a near miss, and b) the decision-maker believes they can clearly describe something the applicant can do that would make the person/project better and would likely lead to an approval.
Thank you for responding, Catherine! It's very much appreciated.
I think this is my fundamental concern. Reasoning transparency and systematic processes to record grantmakers' judgments and show how they are updating their position should be intrinsic to how they evaluate applications. Otherwise they can't have much confidence in the quality of their decisions, or hope to learn from the judgment errors they make when determining which grants to fund (as they have no clear way to trace back why they made a grant and whether or not that was a predictor of its success/failure).
Executive summary: Decision-makers in EA should give more feedback, but there are challenges; the author provides advice for both feedback seekers and givers to improve the process.
Key points:
- Decisions often involve multiple small factors, making clear feedback difficult
- Feedback is most valuable from those who understand both the project and the people involved
- Feedback seekers should clarify their needs and their reasons for asking specific individuals
- Decision-makers prefer feedback to be interpreted as personal opinion, not organizational stance
- Feedback givers are discouraged when recipients reject feedback or demand to be fully convinced
- The author recommends balancing the need for decision-maker accountability with practical limitations on feedback
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.