I’ve got a similar feeling to Khorton. Happy to have been pre-empted there.
It could be helpful to consider what legibility in the grant application process (of which post-application feedback is only one form) is meant to achieve. Depending on the grant maker’s aims, this can non-exhaustively include developing and nurturing talent, helping future applicants self-select, orienting projects on whether they are doing a good job, serving as a beacon and marketing instrument, clarifying and staking out an epistemic position, serving an orientation function for the community, etc.
And depending on the basket of things the grant maker is trying to achieve, different pieces of legibility affect ‘efficiency’ in the process. For example, case studies and transparent reasoning about accepted and rejected projects, published evaluations, criteria for projects to consider before applying, hazard disclaimers, risk profile declarations, published work on the grant maker’s theory of change, etc. can give grant makers ‘published’ content to invoke during the post-application process, which allows feedback to scale (e.g. “our website states that we don’t invest in projects that rapidly accelerate ‘x’”). There are other forms of proactive communication and stratified applicant journeys that would make things even more efficient.
FTX did what they did, and there is definitely a strong case for why they did it that way. Moving forward, I’d be curious to see whether they acknowledge, and adjust for, the fact that different forms and degrees of legibility can affect the community.