An interesting point of comparison might be grant award processes which do offer feedback.
InnovateUK, for example, asks bidders to answer defined questions, has up to five anonymous assessors score each project, and recommends the highest-scoring projects. You get those scores and comments back whether you succeed or not, occasionally with a note that an outlier score has been removed as unrepresentative. I wouldn’t call that “democratic” even though you can see what are effectively votes, but it does create the impression of a sensible process and accountable assessors.
This might be more convoluted than EA grantmaking (some projects are re-scored after interview too...), but the basic idea of a scoring system against a set of criteria gives a reasonable indication of whether you were very close and may wish to resubmit to other funding rounds, whether you need to find better impact evidence (or drop the idea even though the basic plan is plausible), or whether you should forget about the whole thing. And that last bit absolutely is useful feedback, even if people don’t like it.
In some cases where EA orgs have very clear funding bars, the feedback could be even more concrete (project clearly outside the focus area and so out of scope, below the $/DALY threshold, etc.). I guess if you’re too explicit about metrics there’s a risk Goodhart’s law applies, but clear criteria can save good-faith applicants a lot of time.
I get the idea of avoiding confrontation, and that the EA world is smaller than the government grant world, so people might actually guess who gave them 1⁄5 for “team” and then run into them on social occasions. But I think there are benefits to both parties from checkbox-level feedback on which general areas are fine, which need more detail, and which need a total rethink.