Upvoted for relevant evidence.

However, I don’t think you’re representing that blog post accurately. You write that GiveWell “stopped [soliciting external feedback] because it found that it generally wasn’t useful”, but at the top of the blog post, it says GiveWell stopped because “The challenges of external evaluation are significant” and “The level of in-depth scrutiny of our work has increased greatly”. Later it says: “We continue to believe that it is important to ensure that our work is subjected to in-depth scrutiny.”
I also don’t think we can easily generalize from GiveWell to CEA. Compare the number of EAs who carefully read GiveWell’s reports (not that many?) with the number of EAs who are familiar with various aspects of CEA’s work (lots). Since CEA’s work is the EA community, we should expect a lot of relevant local knowledge to reside in the community itself, knowledge which CEA could try to gather in a proactive way.
Check out the “Improvements in informal evaluation” section for some of the things GiveWell is experimenting with in terms of critical feedback. When I read this section, I get the impression of an organization that is eager to gather critical feedback and to experiment with different means of doing so. It doesn’t seem like CEA is trying as many things here as GiveWell is, despite the fact that I expect external feedback would be more useful for CEA.
You write: “if your bottleneck is not on raw material but instead on which of multiple competing narratives to trust, you’re not necessarily gaining anything by hearing more copies of each.”
I would say just the opposite. If you’re hearing multiple copies of a particular narrative, especially from a range of different individuals reporting independently, that’s evidence you should trust it.
If you’re worried about feedback not being actionable, you could tell people that offering concrete suggestions will increase their chance of winning the prize.