huge mistake for Future Fund to provide substantial feedback except in rare cases.
Yep, I’d imagine what makes sense is somewhere between ‘highly involved and coordinated attempt to provide feedback at scale’ and ‘zero’. I think it’s tempting to look away from how harmful ‘zero’ can be at scale.
> That could change in future if their other streams of successful applicants dry up and improving the projects of people who were previously rejected becomes the best way to find new things they want to fund.
Agreed – this seems like a way to pick up easy wins and a good go-to for grant makers circling back. However, banking on this to handle the concerns that were raised doesn’t account for everything that comes with unqualified rejection: people deciding to do other things, leaving EA, incurring critical stakeholder instability, and so on.
In other words, for the consequentialist-driven among us: community health is not a nice-to-have if we’re serious about having a community of highly effective people working urgently on hard, complex things.
> However, banking on this to handle the concerns that were raised doesn’t account for everything that comes with unqualified rejection: people deciding to do other things, leaving EA, incurring critical stakeholder instability, and so on.
I mean, I think people are radically underestimating the opportunity cost of doing feedback properly at the moment. If I’m right, then getting feedback might reduce people’s chances of getting funded by, say, 30% or 50%, because the throughput for grants will be much reduced.
I would probably rather have a 20% chance of getting funding for my project without feedback than a 10% chance with feedback, though people’s preferences may vary.
(Alternatively, all the time spent explaining, writing, and corresponding will mean worse projects get funded, as there’s not much time left to actually think through which projects are most impactful.)
Rob, I think you’re consistently arguing against a point few people are making. You talk about ongoing correspondence with projects, or writing (potentially paragraphs of) feedback. Several people in this thread have suggested that pre-written categories of feedback would be a huge improvement from the status quo, and I can’t see anything you’ve said that actually argues against that.
Also, as someone who semi-regularly gives feedback to 80+ people, I’ve never found it to make my thinking worse, but I’ve sometimes found that it makes my thinking better.
I’m not saying there’s no cost to feedback. Of course there’s a cost! But these exaggerations are really frustrating to read, because I actually do this kind of work and the cost of what I’m proposing is a lot lower than you keep suggesting.
I’ve got a similar feeling to Khorton. Happy to have been pre-empted there.
It could be helpful to consider what legibility in the grant application process (of which post-application feedback is only one sort) is meant to achieve. Depending on the grant maker’s aims, this can non-exhaustively include developing and nurturing talent, helping future applicants self-select, orienting projects on whether they are doing a good job, serving as a beacon and marketing instrument, clarifying and staking out an epistemic position, serving an orientation function for the community, etc.
And depending on the basket of things the grant maker is trying to achieve, different pieces of legibility affect ‘efficiency’ in the process. For example, case studies and transparent reasoning about accepted and rejected projects, published evaluations, criteria for projects to consider before applying, hazard disclaimers, risk profile declarations, published work on the grant maker’s theory of change, etc. can give grant makers ‘published’ content to invoke during the post-application process that allows feedback to scale (e.g. “our website states that we don’t invest in projects that rapidly accelerate ‘x’”). There are other forms of proactive communication and of stratifying applicant journeys that would make things even more efficient.
FTX did what they did, and there is definitely a strong case for why they did it that way. Moving forward, I’d be curious to see whether they acknowledge and make adjustments in light of the fact that different forms and degrees of legibility can affect the community.
Okay, upon review, that was a little too much of a rhetorical flourish at the end. Basically, I think there’s something seriously important to consider here about how process can negatively affect community health and alignment, which I believe to be important for this community in achieving the plurality of ambitious goals we’re shooting for. I believe FTX could definitely affect it in a very positive way if they wanted to.
If it’s just a form where the main reason for rejection is chosen from a list, then that’s probably fine/good.
I’ve seen people try to do written feedback before and find it a nightmare, so I guess people’s mileage varies a fair bit.