GiveWell used to solicit external feedback a fair bit years ago, but (as I understand it) stopped doing so because it found that it generally wasn't useful. Their blog post "External evaluation of our research" goes some way to explaining why. I could imagine a lot of their points applying to CEA too.
I think you're coming at this from a point of view of "more feedback is always better", forgetting that making feedback useful can be laborious: figuring out which parts of a piece of feedback are accurate and actionable can be at least as hard as coming up with the feedback in the first place. Soliciting comments can give you raw material, but if your bottleneck is not on raw material but instead on which of multiple competing narratives to trust, you're not necessarily gaining anything by hearing more copies of each.
Certainly you won't gain anything for free, and you may not be able to afford the non-monetary cost.
Upvoted for relevant evidence.
However, I don't think you're representing that blog post accurately. You write that GiveWell "stopped [soliciting external feedback] because it found that it generally wasn't useful", but at the top of the blog post, it says GiveWell stopped because "The challenges of external evaluation are significant" and "The level of in-depth scrutiny of our work has increased greatly". Later it says "We continue to believe that it is important to ensure that our work is subjected to in-depth scrutiny."
I also don't think we can generalize from GiveWell to CEA easily. Compare the number of EAs who carefully read GiveWell's reports (not that many?) with the number of EAs who are familiar with various aspects of CEA's work (lots). Since CEA's work is the EA community itself, we should expect a lot of relevant local knowledge to reside in that community, knowledge which CEA could try & gather in a proactive way.
Check out the "Improvements in informal evaluation" section for some of the things GiveWell is experimenting with in terms of critical feedback. When I read that section, I get the impression of an organization which is eager to gather critical feedback and to experiment with different means of doing so. It doesn't seem like CEA is trying as many things here as GiveWell is, despite the fact that I expect external feedback would be more useful for CEA.
"if your bottleneck is not on raw material but instead on which of multiple competing narratives to trust, you're not necessarily gaining anything by hearing more copies of each."
I would say just the opposite. If you're hearing multiple copies of a particular narrative, especially from a range of different individuals, that's evidence that you should trust it.
If you're worried about feedback not being actionable, you could tell people that offering concrete suggestions will increase their chance of winning the prize.