I think it is much harder to give open feedback if it is closely tied to funding. Feedback from funders can easily have too much influence on people, and should be very careful and nuanced, as it comes from a position of power. I would expect that adding financial incentives can easily be detrimental to the process. (For a self-referential example, just look at this discussion: do you think the fact that Oli dislikes my proposal and suggests the LTF could back something different with $20k will not create at least some unconscious incentives?)
I’m a bit confused here. I think I disagree with you, but maybe I am not understanding you correctly.
I consider it important for the accuracy of feedback that the people giving it have ‘skin in the game’. Most people don’t enjoy discouraging others they have social ties with, so reviewers without sufficient skin in the game may be tempted not to be as openly negative about proposals as they should be.
Funders, by contrast, can give you a strong signal—a signal which is unfortunately somewhat binary and lacks nuance. But someone being willing to fund something or not is a much stronger signal of the value of a proposal than comments from friends on a Google Doc. This is especially true if people proposing ideas don’t take into account how hard it is to discourage people and don’t interpret feedback in that light.
I consider it important for the accuracy of feedback that the people giving it have ‘skin in the game’. Most people don’t enjoy discouraging others they have social ties with, so reviewers without sufficient skin in the game may be tempted not to be as openly negative about proposals as they should be.
Maybe anonymity would be helpful here, the same way scientists do anonymous peer review?
I’m not sure whether we agree or disagree; possibly we partially agree and partially disagree. In the case of negative feedback, I think that as a funder you are at greater risk of people over-updating in the direction of “I should stop trying”.
I agree that friends and one’s social neighbourhood may be too positive (that’s why the proposed initial reviews are anonymous, and one of the reviewers is supposed to be negative).
When funders give general opinions on what should or should not get started, or on what they value and don’t value, I again think you are at greater risk of having too much influence on the community. I do not believe the knowledge of funders is strictly better than the knowledge of grant applicants.
(I still feel like I don’t really understand where you’re coming from.)
I am concerned that your model of how idea proposals get evaluated (and then plausibly funded) is a bit off. From the original post:
hard to evaluate which project ideas are excellent, which are probably good, and which are too risky for their estimated return.
You are missing one major category here: projects which are simply bad because they have approximately zero impact, but aren’t particularly risky. I think this category is the largest of the four.
People who have experience evaluating projects can often tell quite quickly which projects have a chance of working and which don’t (which is why Oli suggested 15 minutes for the initial investigation above). It sounds to me a bit like your model of the ideas that get proposed is that most of them are pretty valuable. I don’t think this is the case.
When funders give general opinions on what should or should not get started, or on what they value and don’t value, I again think you are at greater risk of having too much influence on the community. I do not believe the knowledge of funders is strictly better than the knowledge of grant applicants.
I am confused by this. Knowledge of what?
The role of funders/evaluators is to evaluate projects (and maybe propose some for others to do). To do this well they need to have a good mental map of what kind of projects have worked or not worked in the past, what good and bad signs are, ideally from an explicit feedback loop from funding projects and then seeing how the projects turn out. The role of grant applicants is to come up with some ideas they could execute. Do you disagree with this?
You are missing one major category here: projects which are simply bad because they have approximately zero impact, but aren’t particularly risky. I think this category is the largest of the four.
I agree that’s likely. Please take the first paragraphs more as motivation than precise description of the categories.
People who have experience evaluating projects can often tell quite quickly which projects have a chance of working and which don’t (which is why Oli suggested 15 minutes for the initial investigation above).
I think we are comparing apples and oranges. Insofar as the output should be some publicly understandable reasoning behind the judgement, I don’t think this is doable in 15 minutes.
It sounds to me a bit like your model of the ideas that get proposed is that most of them are pretty valuable. I don’t think this is the case.
I don’t have a strong prior on that.
To do this well they need to have a good mental map of what kind of projects have worked or not worked in the past,...
From a project-management perspective, yes; but with the slow and poor feedback loops of long-term, x-risk, and meta-oriented projects, I don’t think it is easy to tell what works and what does not. (Even with projects “working” in the sense that they run smoothly and produce some visible output.)